Stringy Things

Notes on String Theory: Conformal Field Theory – Ward Identities and Noether’s Theorem

Introduction

We now turn our attention to Ward identities, which extend the ideas of Noether’s theorem in quantum field theory. Polchinski notes (p.41), `A continuous symmetry in field theory implies the existence of a conserved current (Noether’s theorem) and also Ward identities, which constrain the operator products of the current’.

In this post we want to derive a particular form of the Ward identity, coinciding with Section 2.3 in Polchinski’s textbook. And we shall proceed with the following discussion by emphasising again the perspective we have been building for some time, which goes all the way back to the definition of local operators. Moreover, Ward identities are in fact operator equations generally satisfied by the correlation functions, which, of course, are tied to the symmetry of the theory. So we take this as a starting point. As Polchinski comments, symmetries of the string worldsheet play a very important role in string theory. A large part of our study here is to consider some general field theory and derive similarly general consequences of symmetry in that field theory, extracting what we may learn as a result. It turns out that what we learn is how, among other things, we may derive Ward identities through the functional integral of the correlation functions, utilising the method of a change of variables.

1. Ward Identities and Noether Currents

We start by taking the path integral. Now, suppose we have a general field theory. For an arbitrary infinitesimal transformation of the form {\phi_{\alpha}^{\prime}(\sigma) = \phi_{\alpha}(\sigma) + \epsilon \cdot \delta\phi_{\alpha}(\sigma)}, where {\epsilon} is the infinitesimal parameter,

\displaystyle \int [d\phi^{\prime}]e^{-S[\phi^{\prime}]} = \int [d\phi]e^{-S[\phi]} \ \ (1)

What we have done is considered the symmetry {\phi_{\alpha}^{\prime}(\sigma) = \phi_{\alpha}(\sigma) + \epsilon \cdot \delta\phi_{\alpha}(\sigma)} of our general field theory and found that both the measure and the action are left invariant (1). They are invariant because what we have is an exact, continuous symmetry of our field theory. A continuous symmetry implies the existence of a conserved current, which is the content of Noether’s theorem, and it also implies Ward identities. So, from this basic premise, we want to consider another transformation of the form {\phi_{\alpha} \rightarrow \phi_{\alpha}^{\prime}(\sigma) = \phi_{\alpha}(\sigma) + \epsilon\rho(\sigma)\delta\phi_{\alpha}(\sigma)}, where {\rho(\sigma)} is an arbitrary function. Consider the following comments for clarity: in this change of variables what we are doing is basically promoting the constant {\epsilon} to the position-dependent combination {\epsilon\rho(\sigma)}. This transformation is not a symmetry of the theory: the action and the measure are no longer invariant, and to leading order in {\epsilon} the variation of the path integral is proportional to the gradient {\partial_{a} \rho}. Notice,

\displaystyle \int [d\phi^{\prime}]e^{-S[\phi^{\prime}]} = \int [d\phi]e^{-S[\phi]}[1 + \frac{i\epsilon}{2\pi} \int d^{d}\sigma \sqrt{g} J^{a}(\sigma) \partial_{a}\rho(\sigma) + \mathcal{O}(\epsilon^2)] \ \ (2)

Where {J^{a}(\sigma)} is a local function that comes from the variation of the measure and the action. Indeed, it should be emphasised, both the measure and the action are local (p.41). The picture one should have in their mind is the same we have been building for some time: namely, we are working in some localised region within which all the operators we’re considering reside. This is one of the big ideas at this point in our study of CFTs.

Now, the idea from (2) is that, whilst we have technically changed the integrand, the partition function has actually remained the same. Why? In the change of variables, we have simply redefined the dummy integration variable {\phi}. This invariance of the path integral under a change of variables gives the quantum version of Noether’s theorem {\frac{\epsilon}{2\pi i} \int d^{d}\sigma\sqrt{g} \rho(\sigma) \langle \nabla_{a}J^{a}(\sigma)... \rangle = 0}, where `…’ are arbitrary additional insertions outside of the small local region in which {\rho} is taken to be nonzero. This is precisely why Polchinski comments that, when we take `the function {\rho} to be nonzero only in a small region’, it follows we may consider `a path integral with general insertions `…’ outside this region’ (p.41). In other words, as {\rho} is taken to be nonzero only in a small region, insertions outside this region are invariant under the change of variables.

From this clever logic, where we have {\nabla_{a}J^{a} = 0} as an operator statement (p.42), we want to proceed to derive the Ward identity. It follows that, as motivated by Polchinski, given (2) we now want to insert new operators into the path integral, noting {\rho(\sigma)} has finite support. Therefore, we may write,

\displaystyle  \int [d\phi^{\prime}] e^{-S[\phi^{\prime}]} A^{\prime}(\sigma_{0}) = \int [d\phi]e^{-S}[A(\sigma_{0}) + \delta A(\sigma_{0}) + \frac{i\epsilon}{2\pi} \int d^2\sigma\sqrt{g} J^{a}(\sigma)A(\sigma_{0})\partial_{a}\rho + \mathcal{O}(\epsilon^{2})] \ \ (3)

Where, again,

\displaystyle  \phi_{\alpha} \rightarrow \phi^{\prime}_{\alpha} = \phi_{\alpha} + \epsilon \rho(\sigma) \delta \phi_{\alpha}(\sigma)

And, now,

\displaystyle  A(\sigma) \rightarrow A^{\prime}(\sigma) = A(\sigma) + \delta A(\sigma) \ \ (4)

Then, we may use {\int [d\phi^{\prime}]e^{-S^{\prime}}A^{\prime} = \int [d\phi] e^{-S}A} to show,

\displaystyle  0 = -\delta A(\sigma_{0}) - \frac{i\epsilon}{2\pi} \int d^{2}\sigma \sqrt{g} J^{a}A(\sigma_{0})\partial_{a}\rho

\displaystyle 0 = - \delta A(\sigma_{0}) + \frac{i\epsilon}{2\pi} \int d^{2}\sigma \sqrt{g} \nabla_{a}J^{a}A(\sigma_{0})\rho \ \ (5)

Notice, at this point, that while we now have an integral equation, we can write it without the integral. This implies the following,

\displaystyle \nabla_{a}J^{a}A(\sigma_{0}) = \frac{1}{\sqrt{g}}\delta^{d}(\sigma - \sigma_{0}) \frac{2\pi}{i\epsilon} \delta A(\sigma_{0}) + \text{total derivative} \ \ (6)

Where we have a total {\sigma}-derivative. But this statement is equivalent to, more generally,

\displaystyle \delta A(\sigma_{0}) + \frac{\epsilon}{2\pi i} \int_{R} d^{d}\sigma \sqrt{g}\nabla_{a}J^{a}(\sigma)A(\sigma_{0}) = 0 \ \ (7)

Which is precisely the operator relation Polchinski gives in eqn. (2.3.7). In (7) above, what we have done is let {\rho(\sigma) = 1} in some region R and {0} outside that region. In the context of our present theory, the divergence theorem may then be invoked to give,

\displaystyle  \int_{R} d^2 \sigma \sqrt{g} \nabla_{a}[J^{a}A(\sigma_{0})] = \int_{\partial R}dA \ n_{a}J^{a} A(\sigma_{0}) = \frac{2\pi}{i \epsilon} \delta A(\sigma_{0}) \ \ (8)

Where the area element is {dA} and {n^{a}} the outward normal. As Polchinski explains, what we have is a relation between the integral of the current around the operator and the variation of that same operator (p.42). We can see this in the structure of the above equation.

Away from operator insertions the current is conserved, {\nabla_{a}J^{a} = 0}, so the integral over the interior of R would vanish were it not for the operator sitting at {\sigma_{0}}. The boundary integral therefore measures precisely how the symmetry transformation acts on the inserted operator.

The next thing we want to do is convert to holomorphic and antiholomorphic coordinates, instead of {(\sigma)} coordinates. To do this we may rewrite (8) in flat 2-dimensions as,

\displaystyle  \oint_{\partial R} (J_{z}dz - J_{\bar{z}}d\bar{z})A(z_{0}, \bar{z}_{0}) = \frac{2\pi}{\epsilon}\delta A(z_{0}, \bar{z}_{0}) \ \ (9)

In general, it is difficult to evaluate this integral exactly. We can evaluate it in cases where the LHS simplifies. It simplifies when {J_{z}} is holomorphic, meaning {\bar{\partial} J_{z} = 0}, and {J_{\bar{z}}} is antiholomorphic, meaning {\partial J_{\bar{z}} = 0}, away from the insertion point. In these cases we may use the residue theorem,

\displaystyle  2\pi i [Res J_{z}A(z_{0}, \bar{z}_{0}) + Res J_{\bar{z}}A(z_{0}, \bar{z}_{0})] = \frac{2\pi}{\epsilon}\delta A(z_{0}, \bar{z}_{0}) \ \ (10)

Another way to put it is that the integral (9) selects and gathers the residues in the OPE. And what we find is the Ward identity,

\displaystyle  Res_{z \rightarrow z_{0}} J_{z} A(z_{0}, \bar{z}_{0}) + \bar{Res}_{\bar{z} \rightarrow \bar{z}_{0}} J_{\bar{z}}A(z_{0}, \bar{z}_{0}) = \frac{1}{i\epsilon}\delta A(z_{0}, \bar{z}_{0}) \ \ (11)

Where {\text{Res}} and {\bar{\text{Res}}} are the coefficients of {(z - z_{0})^{-1}} and {(\bar{z} - \bar{z}_{0})^{-1}}.
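As a quick sanity check of the residue bookkeeping (not of the operator content), the following SymPy sketch confirms that a Laurent expansion contributes to the contour integral only through the coefficient of its simple pole. The symbols {a}, {b}, {z_{0}} and the {\sin(z)} regular piece are placeholder choices, not objects from the discussion above.

```python
# Minimal sketch: sympy.residue picks out the coefficient of (z - z0)**(-1),
# which is the quantity entering the Ward identity (11).
import sympy as sp

z, z0, a, b = sp.symbols('z z0 a b')
f = a/(z - z0)**2 + b/(z - z0) + sp.sin(z)   # sin(z) stands in for a regular piece

print(sp.residue(f, z, z0))   # -> b
```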

Now, it should be stated that this Ward identity is extremely powerful. It tells us the variation of any operator in terms of the currents. One will see it in action quite a bit in bosonic string theory. Moving forward, we will also be using all of the tools that we have so far defined or studied. For example, we will eventually look at OPEs to extract their {(z - z_{0})^{-1}}-type and {(\bar{z} - \bar{z}_{0})^{-1}}-type dependence. And in this way we will learn how operators transform.

All of this is to say, once we find out what our conformal symmetry group is, we will see there is a very close relation between the singular parts of OPEs in the CFT and the transformations of the operators. And this will lead us to some rather deep insights.

It should be mentioned, again, that from these introductory notes we will go on to compute numerous detailed examples. For now, the focus is very much on introducing key concepts and familiarising ourselves with some of the deeper ideas in relation to stringy CFTs.

References

Joseph Polchinski. (2005). ‘String Theory: An Introduction to the Bosonic String’, Vol. 1.

Stringy Things

Notes on String Theory – Further Introduction to Operator Product Expansions

1. Generalising the Formula for OPEs

In the last post we continued a review of Chapter 2 in Polchinski, focusing on building understanding of conformal field theories from the perspective of local operator insertions. We finally also arrived at the basic formula for operator product expansions (OPEs). What follows in this post is a continuation of that discussion. That is to say, the following review will also necessarily reference equations in the previous entry. To avoid confusion, equation numbers from the last post will be explicitly stated.

Recall that, in an introduction to the basic formula for OPEs, it was mentioned that because it is an operator statement this means it holds inside a general expectation value. It follows that the operator equation of the form that we considered can have additional operator insertions. This implies that we may write the formula for OPEs in a more general way,

\displaystyle  \langle \mathcal{O}_{i}(z, \bar{z})\mathcal{O}_{j}(z^{\prime}, \bar{z}^{\prime}) ... \rangle = \sum_{k} C_{ij}^{k}(z - z^{\prime}, \bar{z} - \bar{z}^{\prime}) \langle \mathcal{O}_{k}(z^{\prime}, \bar{z}^{\prime}) ... \rangle \ \ (1)

Where ‘…’ again denotes additional insertions and is often left implicit. One can also quite simply work out the equivalent description in the path integral formalism, where the OPE reduces an expectation value of {n} operators to a sum over expectation values of {n-1} operators.

1.1. OPEs – Generalise for an Infinite Set of Operators

There are a number of other caveats and subtleties about OPEs that we have not yet explored. It will be our aim to do so in this section by reviewing the remaining contents of section 2.2 in Polchinski’s textbook, before progressing toward more advanced topics that will then aid in our understanding of stringy CFTs and the procedure for how to compute OPEs.

Moreover, at this point in Polchinski’s introduction to OPEs, a number of results and definitions are given which may not make complete sense until later. This is because there are a number of key interrelated concepts that have not yet been formally introduced, such as radial ordering, Wick’s theorem, conformal invariance, and the necessary mode expansions that we must consider. These are important conceptual tools in establishing a wider understanding of CFTs and how we may think of OPEs in string theory. So what follows in this section may be considered more in the way of definition, introducing some ideas that relate to OPEs as we work toward more advanced topics that will clarify and enrich some of these ideas.

For instance, let us recall that in the last entry we discussed a normal ordered product that was defined in such a way that it satisfies the naive equation of motion [equation (17) from previous post]. What it is telling us is how the operator product is a harmonic function of {(z_{1}, \bar{z}_{1})}. This statement already offers a hint of what is to come both in this section and other future parts of our study on CFTs, particularly when we more explicitly discuss Wick’s theorem and mode expansions in relation to computing OPEs. For now, we may maintain an introductory tone and say that this statement leads us to an important insight early in Polchinski’s discussion in Section 2.2 of his textbook: notably that from the theory of complex variables a harmonic function may be decomposed locally as the sum of holomorphic and antiholomorphic functions. To begin to explain what this means, and to explain Polchinski’s discussion on pp.37-38 let us consider more deeply (17) from the last post. We can think of it this way,

\displaystyle  \bar{\partial}_{1} [\partial_{1} :X^{\mu}(z_{1}, \bar{z}_{1})X^{\nu}(z_{2}, \bar{z}_{2}):] = 0

\displaystyle \bar{\partial}_{1} [:\partial_{1} X^{\mu}(z_{1}, \bar{z}_{1})X^{\nu}(z_{2}, \bar{z}_{2}):] = 0 \ \ (2)

The point of (2) is to show that we now have a holomorphic derivative inside the normal ordering. But notice also that this holomorphic derivative will get annihilated by the antiholomorphic derivative acting on it. In other words, by the equation of motion mixed {\partial \bar{\partial}} derivatives vanish. This is telling us something we may perhaps already know or suspect, namely that, as we continue to think in terms of operators, {:\partial_{1} X^{\mu}(z_{1}, \bar{z}_{1})X^{\nu}(z_{2}, \bar{z}_{2}):} is in fact a holomorphic function of {z_{1}}. Now, as Polchinski explains, from the theory of complex analysis it is legitimate to Taylor expand such holomorphic (and antiholomorphic) functions. This use of Taylor expansion may be considered one of the first tools in understanding how to compute OPEs. Consider, for example, only the holomorphic case. Taylor expansion in {z_{12}} is permitted because the normal ordered product is nonsingular as {z_{1} \rightarrow z_{2}}, and expanding about {z_{2}} we obtain the following series,

\displaystyle  :X^{\mu}(z_{2} + \xi, \bar{z}_{2})X^{\nu}(z_{2}, \bar{z}_{2}): = :X^{\mu}(z_{2}, \bar{z}_{2})X^{\nu}(z_{2}, \bar{z}_{2}): + \sum_{k=1}^{\infty} \frac{\xi^{k}}{k!} :X^{\nu} \partial^{k}X^{\mu}(z_{2}, \bar{z}_{2}): \ \ (3)

Where {\xi = z_{12}} and the expansion is in the holomorphic coordinate only. Including also the antiholomorphic series, the terms beyond the coincident-point piece read,

\displaystyle  \sum_{k=1}^{\infty} [\frac{z_{12}^{k}}{k!} :X^{\nu}\partial^{k}X^{\mu}(z_{2}, \bar{z}_{2}): + \frac{\bar{z}_{12}^{k}}{k!} :X^{\nu}\bar{\partial}^{k}X^{\mu}(z_{2}, \bar{z}_{2}):] \ \ (4)

In which every operator is now located at {(z_{2}, \bar{z}_{2})}, the dependence on {z_{1}} being carried entirely by the coefficients {z_{12}^{k}} and {\bar{z}_{12}^{k}}. What this is telling us is that, for a normal ordered product, we may write more generally,

\displaystyle  :X^{\mu}(z_{1}, \bar{z}_{1})X^{\nu}(z_{2}, \bar{z}_{2}):

\displaystyle = :X^{\mu}(z_{2}, \bar{z}_{2})X^{\nu}(z_{2}, \bar{z}_{2}): + \sum_{k=1}^{\infty} [\frac{z_{12}^{k}}{k!} :X^{\nu}\partial^{k}X^{\mu}: + \frac{\bar{z}_{12}^{k}}{k!} :X^{\nu}\bar{\partial}^{k}X^{\mu}:] \ \ (5)

This is exactly the result that Polchinski describes in equation (2.2.4), with the exception that we have written it for the normal ordered product, which removes the {\alpha^{\prime}} log term. Restoring that term, which is equivalent to writing the left-hand side as the ordinary operator product rather than the normal ordered one, we arrive precisely at Polchinski’s equation,

\displaystyle  X^{\mu}(z_{1}, \bar{z}_{1})X^{\nu}(z_{2}, \bar{z}_{2})

\displaystyle = - \frac{\alpha^{\prime}}{2}\eta^{\mu \nu} \ln \mid z_{12} \mid^{2} + :X^{\mu}(z_{2}, \bar{z}_{2})X^{\nu}(z_{2}, \bar{z}_{2}): + \sum_{k=1}^{\infty} [\frac{z_{12}^{k}}{k!} :X^{\nu}\partial^{k}X^{\mu}: + \frac{\bar{z}_{12}^{k}}{k!} :X^{\nu}\bar{\partial}^{k}X^{\mu}:] \ \ (6)

In which {- \frac{\alpha^{\prime}}{2}\eta^{\mu \nu} \ln \mid z_{12} \mid^{2}} is the singular c-number piece that one may remember from the two-point function {\langle X^{\mu}(z, \bar{z})X^{\nu}(z^{\prime}, \bar{z}^{\prime}) \rangle}. Again, this is something we will become more familiar with as we progress. Furthermore, notice in general that (6) looks very much like an OPE as given in (1). In fact, it will become increasingly clear, especially toward the end of our present study, that we may think of this as the free field OPE, hence the inclusion of the singular piece. Later, we will show explicitly the computation used to achieve this result. In the meantime, since it is simply given in Polchinski’s textbook, it has also been stated here with the addition of a few more comments as follows.

Note that like its equation of motion, (6) is an operator statement. Secondly, as previously alluded, OPEs in quantum field theory are very much like the analogue of Taylor expansions in calculus. When Taylor expanding some general function {\mathcal{G}(z_{1}, \bar{z}_{1}; z_{2}, \bar{z}_{2}) = :X^{\mu}(z_{1}, \bar{z}_{1})X^{\nu}(z_{2}, \bar{z}_{2}):} as above, note that one will obtain terms of the form {\partial^{k}:X^{\mu}(z_{1}, \bar{z}_{1})X^{\nu}(z_{2}, \bar{z}_{2}):} in which the derivative is outside the normal ordering as opposed to inside the normal ordering. But differentiation and normal ordering commute, which can be proven using some basic identities of functional derivatives, hence the structure of the normal ordering in the OPE (6). Also, for any arbitrary expectation value that involves some product {X^{\mu}(z_{1}, \bar{z}_{1})X^{\nu}(z_{2}, \bar{z}_{2})} multiplied by a number of fields at other points, we have been building (and will continue to build) the intuition to understand exactly why the OPE describes the behaviour for when {z_{1} \rightarrow z_{2}} as an infinite series. In the case of (6), as we deepen our study of CFTs we will come to understand more clearly why it has `a radius of convergence in any given expectation value which is equal to the distance to the nearest other insertion in the path integral’ and why `The operator product is harmonic except at the positions of operators’ (p.38).
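To make the Taylor-expansion analogy concrete at the level of ordinary functions (and only at that level — the operator content is not captured here), the following SymPy sketch verifies that a holomorphic function expanded about {z_{2}} reproduces the {\frac{z_{12}^{k}}{k!}\partial^{k}} structure appearing in (5). The choice of the exponential as a test function is arbitrary.

```python
# Sketch: an ordinary holomorphic function equals its Taylor series about z2,
# mirroring the sum over (z12**k / k!) * d^k terms in the operator expansion.
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
f = sp.exp   # any holomorphic test function

taylor = f(z2) + sum((z1 - z2)**k / sp.factorial(k) * sp.diff(f(z2), z2, k)
                     for k in range(1, 8))

# The difference is of order (z1 - z2)**8, i.e. zero to the order kept:
print(sp.simplify(sp.series(f(z1) - taylor, z1, z2, 8).removeO()))   # -> 0
```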

Although how we arrive at (6) may not yet make complete sense, the key idea at this point in Polchinski’s discussion is simply that we have a product of two operators and we have described this product as an infinite sum of basis operators {A_{k}} weighted by coefficient functions {C_{k}}. Treated as asymptotic expansions, OPEs will typically be written only up to nonsingular terms.

1.2. Subtractions and Cross-contractions

To conclude our review of Section 2.2 in Polchinski, let us consider another example where we have an arbitrary number of fields. As we discussed earlier, the sum then runs over all of the different ways we might choose pairs of fields from the product. We then replace each pair with the subtraction {\frac{\alpha^{\prime}}{2} \eta^{\mu \nu} \ln \mid z_{ij} \mid^{2}}, as in the definition (16) from the last post – i.e., the same logarithm that governs the two-point function. So, if for instance we have three fields, the computation generally takes the following form,

\displaystyle  :X^{\mu_{1}}(z_{1}, \bar{z}_{1})X^{\mu_{2}}(z_{2}, \bar{z}_{2})X^{\mu_{3}}(z_{3}, \bar{z}_{3}):

\displaystyle =X^{\mu_{1}}(z_{1}, \bar{z}_{1})X^{\mu_{2}}(z_{2}, \bar{z}_{2})X^{\mu_{3}}(z_{3}, \bar{z}_{3}) + (\frac{\alpha^{\prime}}{2} \eta^{\mu_{1} \mu_{2}} \ln \mid z_{12} \mid^{2} X^{\mu_{3}}(z_{3}, \bar{z}_{3}) + 2 \ \text{permutations}) \ \ (7)

Now, consider again (16) from the previous entry. It can now be seen how we may write this definition in a more compact and general way. Consider, for instance, an arbitrary functional {\mathcal{F} = \mathcal{F}[X^{\mu_{1}}, ..., X^{\mu_{n}}]}. The terms in brackets represent a combination of an arbitrary number of fields. If, as before, we Taylor expand and write this expression as an expansion in polynomials of {X}, it follows that we may then write the normal ordering for each monomial. This leads directly to equation (2.2.7) in Polchinski,

\displaystyle :\mathcal{F}: = \exp (\frac{\alpha^{\prime}}{4} \int d^{2}z_{1}d^{2}z_{2} \ln \mid z_{12} \mid^{2} \frac{\delta}{\delta X^{\mu}(z_{1}, \bar{z}_{1})} \frac{\delta}{\delta X_{\mu}(z_{2}, \bar{z}_{2})}) \mathcal{F} \ \ (8)

Where {\mathcal{F}} is any functional of {X}. It can be shown that (8) is equivalent to (16) from the previous post. Again, this may not yet make complete sense. But for now notice that there is a double functional derivative in the exponent. This double derivative contracts each pair of fields. What this means is that, every time we compute the expansion, we will effectively remove two {X} terms. In place of these {X} terms we then insert {\ln \mid z_{12} \mid^{2}}, which is, of course, the subtraction. Conversely, if we act with the inverse exponential, we undo the subtractions and instead obtain a sum of contractions,

\displaystyle  \mathcal{F} = \exp (-\frac{\alpha^{\prime}}{4} \int d^{2}z_{1}d^{2}z_{2} \ln \mid z_{12} \mid^{2} \frac{\delta}{\delta X^{\mu}(z_{1}, \bar{z}_{1})} \frac{\delta}{\delta X_{\mu}(z_{2}, \bar{z}_{2})}) :\mathcal{F}:

\displaystyle = :\mathcal{F}: + \ \text{contractions} \ \ (9)

As it will become increasingly clear when we compute some detailed examples, this means we are now summing over all of the ways of choosing pairs of fields from {:\mathcal{F}:} instead of {\mathcal{F}}. We then replace each pair with the contraction {-\frac{1}{2} \alpha^{\prime}\eta^{\mu_{i} \mu_{j}} \ln \mid z_{ij} \mid^{2}}. It follows that for any pair of operators, we can generate the respective OPE

\displaystyle :\mathcal{F}: :\mathcal{G}: = :\mathcal{F} \mathcal{G}: + \sum \ \text{cross-contractions} \ \ (10)

What (10) is saying is that we are now summing over all ways of contracting pairs with one field in {\mathcal{F}} and one field in {\mathcal{G}}, where, again, {\mathcal{F}} and {\mathcal{G}} are arbitrary functionals of {X}. It is this construction of the cross-contractions that enables the following formal expression,

\displaystyle : \mathcal{F}: :\mathcal{G}: = \exp (-\frac{\alpha^{\prime}}{2} \int d^{2}z_{1}d^{2}z_{2} \ln \mid z_{12} \mid^{2} \frac{\delta}{\delta X_{F}^{\mu}(z_{1}, \bar{z}_{1})} \frac{\delta}{\delta X_{G \mu}(z_{2}, \bar{z}_{2})}) : \mathcal{F} \mathcal{G}: \ \ (11)

In which the entire operation now acts on the normal ordering {: \mathcal{F} \mathcal{G}:}.
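As a toy illustration of how the exponentiated double derivative in (8)-(11) generates one subtraction per pair, consider a finite-dimensional stand-in: two ordinary variables {x}, {y} in place of the fields, and a constant {G} in place of {\frac{\alpha^{\prime}}{2}\ln \mid z_{12} \mid^{2}}. This is only a sketch of the combinatorics, not of the functional-integral machinery.

```python
# Toy model: exp(G * d/dx d/dy) acting on a polynomial removes one (x, y) pair per
# application and inserts the c-number G, mimicking one subtraction per pair of fields.
import sympy as sp

x, y, G = sp.symbols('x y G')

def toy_normal_order(expr, order=4):
    # truncated exponential exp(G * d/dx d/dy) applied term by term
    result, term = expr, expr
    for n in range(1, order + 1):
        term = G * sp.diff(sp.diff(term, x), y) / n
        result += term
    return sp.expand(result)

print(toy_normal_order(x*y))          # -> G + x*y                       (one pair)
print(toy_normal_order(x**2 * y**2))  # -> 2*G**2 + 4*G*x*y + x**2*y**2  (all pairings)
```

Acting on {x^{2}y^{2}} one sees the four single pairings plus the two complete pairings — the same counting that the subtractions and cross-contractions organise for the fields.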

This concludes the opening discussion on OPEs in Polchinski’s textbook, from which he goes on to consider two examples of computing normal ordering (p.40) before focusing on the important study of Ward identities and Noether’s theorem. It will prove beneficial to review in the future the computation of the two examples that Polchinski offers (see the Appendix of this chapter). In the meantime, it may aid one’s understanding if we instead pause and first explore other concepts integral to stringy CFTs and their OPEs. This will enable us to introduce more notation and to explicate the mathematical procedure in more depth. Taking such an approach has its obvious advantages, but it also has its disadvantages. The way in which Chapter 2 is structured in Polchinski’s textbook means that, in a few instances, it will be required that we advance our study of CFTs to include a number of other key concepts before making better sense of what we have already discussed, particularly in why OPEs have the structure that they do and how we may think about their computational procedure in a more exemplified way. So at this point we bracket the definitions given above to discuss other related topics, before ultimately returning specifically to the subject of OPEs and computing a number of different examples step by step.

References

Joseph Polchinski. (2005). ‘String Theory: An Introduction to the Bosonic String’, Vol. 1.

Stringy Things

Notes on String Theory: Conformal Field Theory – Local Operators, the String Propagator, and Operator Product Expansions

1. Local Operators

In the last entry we introduced a theory of free massless scalars in flat 2-dimensions (i.e., a free X-CFT). From this we also introduced new terms and established notation relevant to our ongoing study of CFTs in string theory (Chapter 2 in Polchinski). What we now want to do is proceed with a review of a number of interrelated topics at the heart of stringy CFTs: namely, local operators, techniques with path integrals, string propagators, and finally operator product expansions. Each of these topics has a number of parts, and so we shall need to work piece by piece and then stitch everything together.

To begin, we note that in string perturbation theory, one of the main objects of interest is the expectation value of the path integral of a product of local operators (Polchinski, p.36). This interest is our entry point, and it represents a primary theme for much of the following discussion. So our first step should be to define what we mean by local operators. These objects may also be described as fields; however, in the context of CFTs, the notion of a field carries a different meaning than, for instance, the definition of a field in quantum field theory. In our case, a field may be viewed generally as a local expression, which may be the generic field {\phi} that enters the path integral in QFT, or as a composite operator {e^{i\phi}} or as a derivative {\partial^{n}\phi} (Tong, p.69). These are all different types of fields or local operators in the CFT dictionary.

With a definition of local operators in mind, we opened the discussion by mentioning the expectation value of the path integral as a primary object of interest. Let us now consider some general expectation value. Consider, for instance, {\mathcal{A}_{i}} that is some basis for a set of local operators. We may write the general expectation value as follows,

\displaystyle \langle \mathcal{A}_{i_{1}}(z_{1}, \bar{z}_{1}) \mathcal{A}_{i_{2}}(z_{2}, \bar{z}_{2}) ... \mathcal{A}_{i_{n}}(z_{n}, \bar{z}_{n}) \rangle \ \ (1)

If the basic idea, as mentioned, is to compute the expectation value of the path integral, a more technical or detailed description of our overarching interest is to understand the behaviour of this expectation value (1) in the limit of two operators taken to approach one another (Polchinski, p. 36). The tool that we use for such analysis is the operator product expansion (OPE). Understanding the definition of OPEs and how to compute them is one of the ultimate aims of studying stringy CFTs, important for the more advanced topics that we will consider throughout the remainder of these notes. But before formally defining OPEs, it is useful to first build a deeper sense of intuition about their meaning. To do this, let us briefly review some more basics.

2. The Path Integral and Arbitrary Operator Insertions

What do we mean by path integral? And how do we understand this idea of local operator insertions? Additionally, how do we construct the important operator equations required to build a picture of OPEs? Polchinski offers several valuable contributions to a definition of the path integral, including a lengthy treatment in the Appendix of Volume 1. For our purposes, we might first emphasise the QFT view of the path integral as an integral over fields,

\displaystyle Z = \int [dX]e^{-S} \ \ (2)

We may describe this as a partition function. Now, if what we want to know is the expectation value given some operator, this implies that we want to employ the path integral representation to derive operator equations. For instance, as we read in Polchinski (p.34), given some operator we may compute,

\displaystyle \left\langle \mathcal{F}[X] \right\rangle = \int [dX]e^{-S}\mathcal{F}[X] \ \ (3)

Where {\mathcal{F}[X]} is some functional of X, typically a product of two operators, and where \langle \mathcal{F}[X] \rangle = \langle 0 \mid \mathcal{F} \mid 0 \rangle  . For multiple entries in the form,

\displaystyle  \mathcal{F}_{1}[X(z_{1}, \bar{z}_{1})] \mathcal{F}[X(z, \bar{z})] \mathcal{F}_{2}[X(z_{2}, \bar{z}_{2})]

We may write,

\displaystyle  \langle 0 \mid \mathcal{F}_{1} \mathcal{F} \mathcal{F}_{2} \mid 0 \rangle =\int [dX]e^{-S} \mathcal{F}_{1}[X(z_{1}, \bar{z}_{1})] \mathcal{F}[X(z, \bar{z})] \mathcal{F}_{2}[X(z_{2}, \bar{z}_{2})] \ \ (4)

There is a notion of time-ordering present in (4), which we will discuss later. For now, we should note that the path integral of a total derivative is always zero. This fact will prove useful in just a moment and on many other occasions in the future. As Polchinski reflects, `This is true for ordinary bosonic path integrals, which can be regarded as the limit of an infinite number of ordinary integrals, as well as for more formal path integrals as with Grassmann variables’ (Polchinski, pp. 34-35). Hence eq. (2.1.15) in Polchinski (p.35), where the vanishing of the path integral of a total functional derivative is put to work,

\displaystyle  0 = \int [dX] \frac{\delta}{\delta X_{\mu}(z, \bar{z})} \exp (-S)

\displaystyle  = - \int [dX] \exp (-S) \frac{\delta S}{\delta X_{\mu} (z, \bar{z})}

\displaystyle = - \bigg \langle \frac{\delta S}{\delta X_{\mu} (z, \bar{z})} \bigg \rangle

\displaystyle = \frac{1}{\pi \alpha^{\prime}} \langle \partial \bar{\partial} X^{\mu}(z, \bar{z}) \rangle \ \ (5)

There is something interesting with this result. If we recall the action for the free X-CFT in the last post, remember that we found the classical EoM to be {\partial \bar{\partial} X^{\mu}(z, \bar{z}) = 0}. Notice, then, that the result (5) is the analogue, in the quantum theory, of the classical equations of motion. What is this telling us? Let us dig a bit deeper.

First, consider how the same calculation in (5) holds if we have arbitrary additional insertions `…’ in the path integral. We already considered multiple entries in the path integral in (4). But there is a caveat: namely, these additional insertions cannot also be at {z} (something we will elaborate on below). Second, in the case of multiple entries in the path integral, which implies that we may write something of the form {\int [dX] \frac{\delta}{\delta X^{\mu}(z, \bar{z})}[e^{-S}\mathcal{F}(z, \bar{z})]}, one can think of some of the insertions as preparing a state in the theory. In other words, we should note that these insertions prepare arbitrary initial and final states in the theory (Polchinski, p.35). These arbitrary initial and final states play the same role that boundary conditions would, with the added convenience that we may now write the following path integral statement,

\displaystyle \left\langle \partial\bar{\partial}X^{\mu}(z, \bar{z}) ... \right\rangle = 0 \ \ (6)

Now, if (5) is the analogous statement in the quantum theory for the classical equations of motion, look at (6). Notice, as an operator statement, it is the same as in the Hilbert space formalism,

\displaystyle \partial\bar{\partial}\hat{X}^{\mu}(z, \bar{z}) = 0 \ \ (7)

Polchinski describes (7) as holding for all matrix elements of the operator {\hat{X}^{\mu}(z, \bar{z})}, with relations such as (6), which hold for arbitrary additional insertions, being operator equations (Polchinski, p.34). These two points are important. It should also be noted that (7) is essentially Ehrenfest’s theorem, which makes a lot of sense because it is telling us something that we already know or suspect: namely, the expectation values of the operators obey the classical equations of motion. But, again, this proves true only when the additional insertions `…’ in the path integral are located away from {z}. So let us now look into this subtlety. If additional insertions cannot be coincident at {z}, then let us consider what happens when we do indeed have coincident points at {z}! It follows,

\displaystyle  0 = \int [dX] \frac{\delta}{\delta X_{\mu}(z, \bar{z})}[\exp(-S)X^{\nu}(z^{\prime}, \bar{z}^{\prime})]

\displaystyle = \int [dX] \exp(-S) [\eta^{\mu \nu}\delta^{2}(z - z^{\prime}, \bar{z} - \bar{z}^{\prime}) + \frac{1}{\pi\alpha^{\prime}}\partial_{z}\partial_{\bar{z}}X^{\mu}(z, \bar{z})X^{\nu}(z^{\prime}, \bar{z}^{\prime})]

\displaystyle = \eta^{\mu \nu} \langle \delta^{2}(z - z^{\prime}, \bar{z} - \bar{z}^{\prime}) \rangle + \frac{1}{\pi\alpha^{\prime}}\partial_{z}\partial_{\bar{z}}\langle X^{\mu}(z, \bar{z})X^{\nu}(z^{\prime}, \bar{z}^{\prime})\rangle \ \ (8)

Where the {\delta^{2}(z - z^{\prime}, \bar{z} - \bar{z}^{\prime})} term comes from differentiating {X^{\nu}(z^{\prime}, \bar{z}^{\prime})} with respect to {X_{\mu}(z, \bar{z})} in the computation. What we see in (8) is that at coincident points the classical equations of motion do not hold at the quantum level. This implies a few things. First, the good news is that we recover our previous result that the EoM holds as an operator statement precisely under the condition {z \neq z^{\prime}}. Second, the implication is clearly that with arbitrary additional insertions `…’ in the path integral, so long as these are far away from {z} and {z^{\prime}}, we may rewrite (8) as,

\displaystyle \frac{1}{\pi \alpha^{\prime}} \partial_{z}\partial_{\bar{z}} \langle X^{\mu}(z, \bar{z}) X^{\nu}(z^{\prime}, \bar{z}^{\prime}) \ ... \ \rangle = -\eta^{\mu \nu} \langle \delta^{2}(z - z^{\prime}, \bar{z} - \bar{z}^{\prime}) \ ... \ \rangle \ \ (9)

Where the ellipses are, again, the additional fields. Importantly, we may note that the following holds as an operator equation,

\displaystyle \frac{1}{\pi \alpha^{\prime}} \partial_{z}\partial_{\bar{z}} X^{\mu}(z, \bar{z})X^{\nu}(z^{\prime},\bar{z}^{\prime}) = - \eta^{\mu \nu} \delta^{2}(z - z^{\prime}, \bar{z} - \bar{z}^{\prime}) \ \ (10)

We are going to want to solve this equation in the future, because the solution will prove useful when computing OPEs. In the meantime, what should be understood is that we have modified the EoM to take into account the collision of the points {z} and {z^{\prime}}. And we have also found that this behaviour can be derived as an operator statement. The purpose and greater logic for such an exercise will become increasingly clear. Meanwhile, notice that we now have a product of operators. Although it will not be proven here, it follows that in the Hilbert space formalism this product in the path integral becomes time-ordered (Polchinski, p.36). We also see that the delta function appears when the derivatives act on the time-ordering.

To summarise, these last results signal what has already been alluded (however vaguely) about the definition of OPEs in the final paragraph of Section 1. If, moreover, the general theme is so far one of path integrals and local operator insertions, the picture we are ultimately constructing is one of such insertions inside time-ordered correlation functions. These correlation functions can then be held as operator statements.

3. Time-ordered Correlation Functions, Normal Ordering, and the String Propagator

Before formally introducing and defining OPEs, we should spend a few more moments developing the picture and building intuition. For example, when it comes to the idea of time-ordered correlation functions, we will learn that solving the operator equation (10) gives us,

\displaystyle \langle X^{\mu}(z, \bar{z})X^{\nu}(z^{\prime}, \bar{z}^{\prime}) \rangle = - \eta^{\mu \nu} \frac{\alpha^{\prime}}{2} \ln \mid z - z^{\prime} \mid^{2} \ \ (11)

The computation required to arrive at this result may not yet have much meaning and may be too forward thinking. We will come to understand it soon. What can be understood at this juncture are some of the pieces of this equation. The most important note is that (11) is the propagator of the theory of massless scalars that we have been working with in our study of CFTs (i.e., the free X-CFT). Notice, on the left-hand side of the equality, we have a two-point correlation function. As it has been stated, correlation functions are time-ordered. Let us focus on this notion of time-ordering. For instance, consider a Wick expansion for the product of two {X} fields, which schematically takes the form,

\displaystyle  \langle X^{\mu}(z, \bar{z}) X^{\nu}(z^{\prime}, \bar{z}^{\prime}) \rangle = \langle \ T(X^{\mu}(z, \bar{z}) X^{\nu}(z^{\prime}, \bar{z}^{\prime})) \ \rangle, \ \ \ T(X^{\mu} X^{\nu}) = \ :X^{\mu}(z, \bar{z})X^{\nu}(z^{\prime}, \bar{z}^{\prime}): + \ \text{contraction} \ \ (12)

Where the contraction is a c-number that will be determined below from the definition of normal ordering. The first observation is that we have a two-point correlation function, and we have some term {T}. We also have colons on the right-hand side. For the {T} term, it indicates that the expression is time-ordered in the same way one will find in basic QFT (Polchinski, p.36). Writing {T} in full we find,

\displaystyle  T (X^{\mu} (z, \bar{z}), X^{\nu} (z^{\prime}, \bar{z}^{\prime}))

\displaystyle = X^{\mu} (z, \bar{z}) X^{\nu} (z^{\prime}, \bar{z}^{\prime}) \theta(z - z^{\prime}) + X^{\nu}(z^{\prime}, \bar{z}^{\prime})X^{\mu}(z, \bar{z})\theta(z^{\prime} - z) \ \ (13)

Now, looking again at (13), it is worth pointing out a few other things. Firstly, what we will learn in the future, particularly as we advance our discussion on CFTs, is that this time-ordering will prove very useful. Eventually we are going to want to make conformal transformations from an infinite cylinder to the complex plane, and we will learn that time-ordering on the cylinder corresponds to radial ordering on the complex plane. Conversely, we will see that radial ordering on the complex plane corresponds to time-ordering in the path integral. This is a featured point of study in Section 2.6 of Polchinski and it is something we will discuss later. Secondly, the colons on the right-hand side indicate normal ordering. We saw normal ordering in the past discussion on the free string spectrum using light-cone gauge quantisation. Notice, then, that on the far right-hand side we have a normal ordered product. The definition of normal ordered operators follows as (Polchinski, p.36),

\displaystyle :X^{\mu}(z, \bar{z}): = X^{\mu}(z, \bar{z}) \ \ (14)

And for the normal ordered product we have,

\displaystyle :X^{\mu}(z_{1}, \bar{z}_{1}) X^{\nu}(z_{2}, \bar{z}_{2}): = X^{\mu}(z_{1}, \bar{z}_{1}) X^{\nu}(z_{2}, \bar{z}_{2}) + \frac{\alpha^{\prime}}{2}\eta^{\mu \nu} \ln \mid z_{12} \mid^{2} \ \ (15)

Where {z_{ij} = z_{i} - z_{j}}. Furthermore, for arbitrary numbers of fields, the normal ordered product may be written as,

\displaystyle :X^{\mu_{1}} (z_{1}, \bar{z}_{1}) ... X^{\mu_{n}}(z_{n}, \bar{z}_{n}): = X^{\mu_{1}}(z_{1}, \bar{z}_{1}) ... X^{\mu_{n}}(z_{n}, \bar{z}_{n}) + \sum \text{subtractions} \ \ (16)

Where, for the subtractions, we sum the pairs of fields from the product and then replace each pair with its expectation value {\frac{\alpha^{\prime}}{2}\eta^{\mu \nu} \ln \mid z_{ij} \mid^{2}}. We will elaborate more on (16) later. Meanwhile, consider again the operator equation (10). If what we want to do is define a product of operators that would satisfy the classical EoM, then from (16) and using (10) we can compute,

\displaystyle \partial_{z_{1}} \partial_{\bar{z}_{1}} :X^{\mu}(z_{1}, \bar{z}_{1}) X^{\nu}(z_{2}, \bar{z}_{2}): = \partial_{z_{1}} \partial_{\bar{z}_{1}} X^{\mu}(z_{1}, \bar{z}_{1}) X^{\nu}(z_{2}, \bar{z}_{2}) + \frac{\alpha^{\prime}}{2}\eta^{\mu \nu} \partial_{z_{1}} \partial_{\bar{z}_{1}} \ln \mid z_{12} \mid^{2}

\displaystyle = - \pi \alpha^{\prime} \eta^{\mu \nu} \delta^{2}(z_{1} - z_{2}, \bar{z}_{1} - \bar{z}_{2}) + \frac{\alpha^{\prime}}{2} \eta^{\mu \nu} \partial_{z_{1}}\partial_{\bar{z}_{1}} \ln \mid z_{12} \mid^2

\displaystyle = - \pi \alpha^{\prime} \eta^{\mu \nu} \delta^{2}(z_{1} - z_{2}, \bar{z}_{1} - \bar{z}_{2}) + \pi \alpha^{\prime} \eta^{\mu \nu} \delta^{2} (z_{1} - z_{2}, \bar{z}_{1} - \bar{z}_{2}) = 0 \ \ (17)

Where, for the last line in the computation, we used the standard result,

\displaystyle  \partial \bar{\partial} \ln \mid z \mid^{2} = 2\pi \delta^{2}(z, \bar{z}) \ \ (18)

Which is derived from an application of Stokes’ theorem.
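As a consistency check of (18) in the conventions established for the free theory (where {\delta^{2}(z, \bar{z}) = \frac{1}{2}\delta^{2}(\sigma^{1}, \sigma^{2})}), one can regularise the logarithm as {\ln(\mid z \mid^{2} + \epsilon)}, for which {\partial\bar{\partial}\ln(\mid z \mid^{2} + \epsilon) = \epsilon/(\mid z \mid^{2} + \epsilon)^{2}}. Its integral over the plane, with the real measure {d\sigma^{1}d\sigma^{2}}, should then equal {2\pi \times \frac{1}{2} = \pi} for any {\epsilon}. A minimal SymPy sketch of that integral:

```python
# Regularised check of del delbar ln|z|^2 = 2*pi*delta^2(z, zbar): the angular
# integral is already done, leaving the radial integral, which equals pi for any eps.
import sympy as sp

r, eps = sp.symbols('r eps', positive=True)

integrand = eps / (r**2 + eps)**2 * 2*sp.pi*r
print(sp.integrate(integrand, (r, 0, sp.oo)))   # -> pi, independent of eps
```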

Importantly, (17) is precisely the property that Polchinski highlights in equation (2.1.23) on p.36 of his textbook. What (17) is telling us is that, on the last line, {- \pi \alpha^{\prime} \eta^{\mu \nu} \delta^{2}(z_{1} - z_{2}, \bar{z}_{1} - \bar{z}_{2})} represents the quantum correction to the classical EoM. It is also telling us that, if we want to define a product of operators that satisfies the classical EoM, we must necessarily introduce normal ordering.

So what does this all mean? In order to further extend the picture being developed here, we are led directly to a definition of OPEs.

4. Operator Product Expansions

We may now define operator product expansions. The definition follows (pp. 37-38) directly from the intuition and logic that we have so far established, notably that OPEs may be considered a direct statement about the behaviour of local operators as they approach one another. The formula for OPEs is as follows,

\displaystyle \langle \mathcal{O}_{i}(z, \bar{z})\mathcal{O}_{j}(z^{\prime}, \bar{z}^{\prime}) \rangle = \sum_{k} C_{ij}^{k}(z - z^{\prime}, \bar{z} - \bar{z}^{\prime})\langle \mathcal{O}_{k}(z^{\prime}, \bar{z}^{\prime}) \rangle \ \ (19)

Which is, again, an operator statement. This means that it also holds inside a general expectation value. Saving the general formula for OPEs until later, note that in (19) the {C_{ij}^{k}(z - z^{\prime}, \bar{z} - \bar{z}^{\prime})} should be considered as a set of functions that depend only on the separation between the two operators (i.e., there is translational invariance).

To summarise, if OPEs describe what happens when local operators approach one another, we have already developed a sense of technical intuition for why the key idea is one of having two local operators inserted in such a way that they are situated close to one another but not at coincident points. As we have already discussed, upon insertion of local operators at {z_{1}} and {z_{2}} for example, we obtain some normal ordered product. Then, what we can do is compute their approximation by way of a string of operators at only one of the insertion points (Tong, p.69). There can be any number of additional operator insertions, denoted `…’, provided these are not coincident at {z}; in the general formula for the OPE (19) such insertions are left implicit. This leads us directly to an illustration of OPEs as provided in Polchinski’s textbook.

In the figure Polchinski provides (fig. 2.1 in the textbook), we see that we have a number of local operator insertions, {z_{1}} to {z_{4}}, hence what we would be computing is the expectation value of four local operators. Given that the OPE describes the limiting behaviour of {z_{1} \rightarrow z_{2}} as a series, where the pair of operators is replaced by a series of local operators at {z_{2}}, one way to think about this is as analogous to the Taylor series in calculus (i.e., the OPE plays a similar role in quantum field theory). In fact, the analogy with computing a Taylor series is apt, as we will see when we start computing OPEs.

Another thing to note is that the circle in the picture illustrates the radius of convergence, which is given by the distance to the nearest other operator insertion. In CFTs, OPEs have a finite, nonzero radius of convergence.

Now, from our previous discussions, and from the formal definition of OPEs, we can see quite clearly why they are always to be understood as statements which hold as operator insertions inside time-ordered correlation functions. Should one ask, ‘what are the observables in string theory?’, the answer is that we compute a set of correlation functions of local/composite operators at their insertion points. So, should we take for example the Polyakov action, {S_{P}}, and compute the correlation functions for the CFT, one motivation is to show the correlation function to be related to the scattering amplitude in 26-dimensional spacetime (in the case of the bosonic string). So, in perturbative string theory, we look at the critical theory – that is, the critical coefficients and components of the correlation function,

\displaystyle \langle A_{i_{1}}(z_{1}, \bar{z}_{1}) ... A_{i_{n}}(z_{n}, \bar{z}_{n}) \rangle \ \ (20)

Where we are interested in the singular behaviour. Moreover, recall the definition of the normal ordered product (15). Notice that we have very interesting log behaviour. If what we want to know exactly is what will happen with the product of the two operators as {z_{1} \rightarrow z_{2}}, this implies that we have an operator singularity. As we start computing OPEs and moving forward in our study of string CFTs, it will become very clear why this singular behaviour is actually the only thing we care about.

In the next post, we will extend our discussion of OPEs. Following that, we will look to derive the Ward Identities and then turn our attention to the Virasoro algebra among other important topics in Chapter 2 of Polchinski’s textbook.

References

Joseph Polchinski. (2005). ‘String Theory: An Introduction to the Bosonic String’, Vol. 1.

David Tong. (2009). ‘String Theory’ [lecture notes].

Stringy Things

Notes on String Theory: Conformal Field Theory – Massless Scalars in Flat 2-dimensions

In past entries we familiarised ourselves very briefly with conformal transformations and the 2-dimensional conformal algebra. To progress with our study of Chapter 2 in Polchinski, we need to equip ourselves with a number of other essential tools which will assist in building toward computing operator product expansions (OPEs). In this post, we will focus on notational conventions, transforming to complex coordinates, and utilising holomorphic and antiholomorphic functions. Then, in the next post, we will focus on the path integral and operator insertions, before turning attention to the general formula for OPEs.

To start, it will be beneficial if we define a toy theory of free massless scalars in 2-dimensions. For consistency, we will use the same toy theory that Polchinski describes on p.32. (Please also note, this theory is very similar to the theory on the string WS and this is why it will be useful for us, as it will support an introductory study within an applied setting).

In the context of our toy theory, the Polyakov action takes the form,

\displaystyle  S_{P} = \frac{1}{4\pi \alpha^{\prime}} \int d^{2}\sigma [\partial_{1}X^{\mu} \partial_{1}X_{\mu} + \partial_{2}X^{\mu}\partial_{2}X_{\mu}] \ \ (1)

This is the Polyakov action with {\gamma_{ab}} being replaced by a flat Euclidean metric {\delta_{ab}} and with Wick rotation. What is the benefit of the Euclidean metric and what is meant by Wick rotation?

In general, a lot of calculation in string theory is performed on a Euclidean WS, in which case, for flat metrics, standard analytic continuation may be used to relate Euclidean and Minkowski amplitudes. The benefit is that the Euclidean metric enables us to study ordinary geometry and to use conformal field theory on the string. But one will note that in previous constructions of the Polyakov action we used a Minkowski metric, and in past discussions we have also been using light-cone coordinates. So let’s consider transforming from a Minkowski measure to a Euclidean one. To achieve a flat Euclidean metric, the idea is simple: we use a Wick rotation to rewrite the Minkowski metric. Moreover, recall that in Euclidean space, namely the x-y plane, the infinitesimal measure is given by {ds^{2} = dx^{2} + dy^{2}}. Compare this with the Minkowski measure, which we may write in WS coordinates as {ds^{2} = -d\tau^{2} + d\sigma^{2}}. Notice that in the Euclidean picture all quantities share the same sign. Now, by Wick rotation, we make a transformation on the time coordinate in the Minkowski measure such that {\tau \rightarrow -i\tau}. This means {d\tau \rightarrow -id\tau} and from this it follows {ds^{2} = - (-id\tau)^{2} + d\sigma^{2} = d\tau^{2} + d\sigma^{2}}. This is a Euclidean metric.
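For completeness, here is a one-line SymPy check of the Wick-rotation arithmetic just described; the symbols dtau and dsigma simply stand in for the differentials.

```python
# Minimal check: substituting dtau -> -i*dtau in the Minkowski line element
# -dtau**2 + dsigma**2 yields the Euclidean one, dtau**2 + dsigma**2.
import sympy as sp

dtau, dsigma = sp.symbols('dtau dsigma')
ds2_minkowski = -dtau**2 + dsigma**2
print(sp.expand(ds2_minkowski.subs(dtau, -sp.I*dtau)))   # -> dsigma**2 + dtau**2
```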

Hence, by Wick rotation, we are working in imaginary time signature such that the new metric in Euclidean coordinates may be written as,

\displaystyle \delta_{ab} =      \begin{bmatrix}     1 & 0 \\     0 & 1 \\ \end{bmatrix}  \ \ (2)

As suggested, the goal is to end up with a Euclidean theory of massless scalars in flat 2-dimensions. Note, also, that as a result of Wick rotation,

\displaystyle  (\sigma_{0}, \sigma_{1}) \rightarrow (\sigma_{2}, \sigma_{1}) \ \ (3)

Where we define {\tau \equiv \sigma_{0}} and {\sigma \equiv \sigma_{1}}, and where {\sigma_{2} \equiv i\sigma_{0}}.

Moving forward, one should note that after Wick rotation from LC coordinates {(+, -)}, we enter into the use of complex coordinates {(z, \bar{z})}. We first observed these coordinates in the last section on the 2-dimensional conformal algebra. Further clarification may be offered. Most notably, the description of the WS is now performed using complex variables by defining these complex coordinates {(z, \bar{z})} that are, in fact, functions of the variables {(\tau, \sigma)} with which we have already grown accustomed. Hence, {z =  \tau + i\sigma} and {\bar{z} = \tau - i\sigma}. The benefit of setting up complex coordinates is that it enables us to employ holomorphic (left-moving) and antiholomorphic (right-moving) indices, where holomorphic = {z} and antiholomorphic = {\bar{z}}, as also observed in our discussion on the conformal generators.

Now that our field theory has been sketched, and complex coordinates have been formally established, to understand how to transform these coordinates we must understand how to compute the derivatives. The first step is to invert the coordinates and then we will differentiate,

\displaystyle  \tau = \frac{z + \bar{z}}{2}, \ \ \sigma = \frac{z - \bar{z}}{2i}  \ \ (4)

Differentiating with respect to {z} and {\bar{z}} coordinates we obtain the following,

\displaystyle  \frac{\partial \tau}{\partial z} = \frac{\partial \tau}{\partial \bar{z}} = \frac{1}{2} \ \ (5)

And,

\displaystyle  \frac{\partial \sigma}{\partial z} = \frac{1}{2i}, \ \ \ \frac{\partial \sigma}{\partial \bar{z}} = -\frac{1}{2i} \ \ (6)

With these results we can then compute for the holomorphic coordinates,

\displaystyle  \frac{\partial}{\partial z} = \frac{\partial \tau}{\partial z}\frac{\partial}{\partial \tau} + \frac{\partial \sigma}{\partial z}\frac{\partial}{\partial \sigma} = \frac{1}{2}\frac{\partial}{\partial \tau} + \frac{1}{2i}\frac{\partial}{\partial \sigma} = \frac{1}{2}(\frac{\partial}{\partial \tau} - i\frac{\partial}{\partial \sigma}) \ \ (7)

One can also repeat the same steps for the antiholomorphic case {\bar{z}},

\displaystyle  \frac{\partial}{\partial \bar{z}} = \frac{\partial \tau}{\partial \bar{z}}\frac{\partial}{\partial\tau} + \frac{\partial \sigma}{\partial \bar{z}}\frac{\partial}{\partial \sigma} = \frac{1}{2}\frac{\partial}{\partial \tau} - \frac{1}{2i}\frac{\partial}{\partial \sigma} = \frac{1}{2}(\partial_{\tau} + i\partial_{\sigma}) \ \ (8)

Hence the shorthand notation which we will use from this point forward (matching the form given in Polchinski, with the worldsheet coordinates labelled here by {\tau} and {\sigma}),

\displaystyle  \partial \equiv \partial_{z} = \frac{1}{2}(\partial_{\tau} - i \partial_{\sigma}) \ \ (9)

\displaystyle  \bar{\partial} \equiv \partial_{\bar{z}} = \frac{1}{2}(\partial_{\tau} + i\partial_{\sigma}) \ \ (10)

Where {\partial_{z} z = 1} and {\partial_{\bar{z}} z = 0}.
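A quick SymPy check of these derivative conventions, using the definitions {z = \tau + i\sigma} from above (purely as a sanity check of the algebra):

```python
# Verify that del annihilates zbar and gives 1 on z, and conversely for delbar.
import sympy as sp

tau, sigma = sp.symbols('tau sigma', real=True)
z, zbar = tau + sp.I*sigma, tau - sp.I*sigma

d  = lambda f: sp.Rational(1, 2)*(sp.diff(f, tau) - sp.I*sp.diff(f, sigma))  # del
db = lambda f: sp.Rational(1, 2)*(sp.diff(f, tau) + sp.I*sp.diff(f, sigma))  # delbar

print(d(z), d(zbar), db(z), db(zbar))   # -> 1 0 0 1
```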

To continue setting things up, we must now also register that we may set {\sigma = (\sigma^{1},\sigma^{2})} and {\sigma^{z} = \sigma^{1} + i\sigma^{2}} and {\sigma^{\bar{z}} = \sigma^{1} - i\sigma^{2}}. The reason for this will become clear in a moment. For the metric, given the above {\gamma_{ab} \rightarrow \delta_{ab} \rightarrow g_{ab}},

\displaystyle g_{ab} =     \begin{bmatrix}     g_{zz} & g_{z\bar{z}} \\     g_{\bar{z}z} & g_{\bar{z}\bar{z}} \\ \end{bmatrix} =     \begin{bmatrix}     0 & \frac{1}{2} \\     \frac{1}{2} & 0 \\ \end{bmatrix} \ \ (11)

From this we can also read off that {\sqrt{g} = \frac{1}{2}} (the determinant of {g_{ab}} having absolute value {\frac{1}{4}}). When we raise indices a factor of {2} is returned: {g^{z\bar{z}} = g^{\bar{z}z} = 2}. Lastly, for the area element we have {d^{2}\sigma \equiv d\sigma^{1}d\sigma^{2}}, while {d^{2}z \equiv dzd\bar{z} = 2d\sigma^{1}d\sigma^{2}}. So, we see, {d^{2}z \sqrt{g} \equiv d^{2}\sigma}.
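These statements can be checked mechanically. The following SymPy sketch pulls the flat metric back to complex coordinates, written here generically as {z = x + iy} for a pair of flat real coordinates {(x, y)} — the result does not depend on which worldsheet labels are used — and reads off {g_{z\bar{z}} = \frac{1}{2}}, {\mid \det g_{ab} \mid = \frac{1}{4}} and {g^{z\bar{z}} = 2}:

```python
# Pull the flat Euclidean metric back to (z, zbar) coordinates via the Jacobian.
import sympy as sp

Z, Zb = sp.symbols('Z Zb')                    # stand-ins for z and zbar
x = (Z + Zb)/2                                 # real coordinate x
y = (Z - Zb)/(2*sp.I)                          # real coordinate y

J = sp.Matrix([[sp.diff(x, Z), sp.diff(x, Zb)],
               [sp.diff(y, Z), sp.diff(y, Zb)]])

g = sp.simplify(J.T * sp.eye(2) * J)           # flat metric in (z, zbar)

print(g)                      # Matrix([[0, 1/2], [1/2, 0]])
print(sp.Abs(g.det()))        # 1/4, so sqrt(g) = 1/2
print(sp.simplify(g.inv()))   # Matrix([[0, 2], [2, 0]]), i.e. g^{z zbar} = 2
```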

We now need to study how the delta function transforms. Given {\int d^{2}\sigma \delta^{2}(\sigma_{1},\sigma_{2}) = \int d^{2}\sigma \delta(\sigma_{1})\delta(\sigma_{2}) = 1}, we find that in our new coordinates:

\displaystyle  \int d^{2}z \delta^{2}(z,\bar{z}) = 1 \implies \delta^{2}(z,\bar{z}) = \frac{1}{2} \delta^{2}(\sigma_{1},\sigma_{2}) \ \ (12)

We can continue establishing relevant notation by focusing on how we may rewrite the Polyakov action. In the notation that we’ve constructed we find,

\displaystyle  S_{P} = \frac{1}{2\pi \alpha^{\prime}} \int d^{2}z \partial X^{\mu} \bar{\partial}X_{\mu} \ \ (13)

Where {d^{2}z = dzd\bar{z}}. Using identities constructed throughout this post, the tools are available to see how we arrive at this simpler form of the action. The task now is to see what returns when we vary (13).

Proposition: We vary the action (13) and find the EoM to be {\partial\bar{\partial}X^{\mu} = 0}.

Proof: It will turn out that the string coordinate field decomposes as {X(z, \bar{z}) = X(z) + \bar{X}(\bar{z})}. It will also become clear in the following discussion that we want to work with quantities without linear dependence on {\tau}; to that end we will later use the derivatives of the coordinate field, {\partial X(z)} and {\bar{\partial}\bar{X}(\bar{z})}.

Now, when we vary the simplified action (for brevity we vary {X_{\mu}} in only the second factor; the first factor gives an equal contribution, which changes only the overall normalisation and not the resulting equation of motion), we find,

\displaystyle  S[X + \delta X] = \frac{1}{2\pi \alpha^{\prime}} \int d^{2}z \partial X^{\mu}\bar{\partial}(X_{\mu} + \delta X_{\mu})

\displaystyle  = \frac{1}{2\pi \alpha^{\prime}} \int d^{2}z \partial X^{\mu} (\bar{\partial}X_{\mu} + \bar{\partial}\delta X_{\mu})

\displaystyle = \frac{1}{2\pi \alpha^{\prime}} \int d^{2}z \partial X^{\mu}\bar{\partial}X_{\mu} + \frac{1}{2\pi \alpha^{\prime}} \int d^{2}z \partial X^{\mu}(\bar{\partial}\delta X_{\mu}) \ \ (14)

Continuing with the conventional procedure, where we now integrate by parts (and for convenience discard the boundary terms), we find the EoM to be

\displaystyle \delta S = \frac{1}{2\pi \alpha^{\prime}} \int d^{2}z \partial X^{\mu}(\bar{\partial}\delta X_{\mu})

\displaystyle = - \frac{1}{2\pi \alpha^{\prime}} \int d^{2}z \partial\bar{\partial}X^{\mu} (\delta X_{\mu}) = 0

\displaystyle  \implies \partial\bar{\partial}X^{\mu} (z, \bar{z}) = 0 \ \ (15)

\Box

Here we have used the fact that partial derivatives commute, which completes the proof. The classical solution decomposes as {X(z, \bar{z}) = X(z) + \bar{X}(\bar{z})}. And we should also note, for pedagogical purposes, that we may write the EoM as \partial (\bar{\partial} X^{\mu}) = \bar{\partial} (\partial X^{\mu}) = 0 such that

\displaystyle  \partial X^{\mu} = \partial X^{\mu}(z) \ \ \ \text{holomorphic function}

\displaystyle  \bar{\partial}X^{\mu} = \bar{\partial}X^{\mu} (\bar{z}) \ \ \ \text{antiholomorphic function}
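As a closing sanity check (a sketch only, with {f} and {g} left as arbitrary unspecified functions), SymPy confirms that any decomposition {X = f(z) + g(\bar{z})} is annihilated by {\partial\bar{\partial}}, consistent with the equation of motion (15):

```python
# Check that X = f(z) + g(zbar) satisfies del delbar X = 0 for arbitrary f, g.
import sympy as sp

tau, sigma = sp.symbols('tau sigma', real=True)
z, zbar = tau + sp.I*sigma, tau - sp.I*sigma
f, g = sp.Function('f'), sp.Function('g')

X = f(z) + g(zbar)
d  = lambda F: sp.Rational(1, 2)*(sp.diff(F, tau) - sp.I*sp.diff(F, sigma))
db = lambda F: sp.Rational(1, 2)*(sp.diff(F, tau) + sp.I*sp.diff(F, sigma))

print(sp.simplify(db(d(X))))   # -> 0
```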

References

Joseph Polchinski. (2005). ‘String Theory: An Introduction to the Bosonic String’, Vol. 1.

David Tong. (2009). ‘String Theory’ [lecture notes].

Stringy Things

Notes on String Theory: Conformal Group in 2-dimensions

1. Introduction: Conformal Group in 2-dimensions

Following our previous study of the d-dimensional conformal group and the generators of conformal transformations, we now turn our attention to the study of the conformal group in 2-dimensions. Although we have taken some time to consider the d-dimensional conformal algebra, it should already be clear from past discussions that our interest is particularly in 2-dimensions. To begin our study of the 2-dimensional conformal algebra, where {d = 2}, note that we’re now employing a 2-dimensional Euclidean metric such that {g_{\mu \nu} = \delta_{\mu \nu}}. The first task is to construct the generators. Moreover, it can be found when studying the conserved currents on the WS (substituting the Euclidean metric; see the last post) that,

\displaystyle \partial_{\mu}\epsilon_{\nu} + \partial_{\nu}\epsilon_{\mu} = (\partial \cdot \epsilon)\delta_{\mu \nu} \ \ (1)

When we take the coordinates {(x^1, x^2)} and as we calculate for different values of {\mu} and {\nu}, the above equation reduces rather nicely:

For {\mu = \nu = 1}, we arrive at {2\partial_{1}\epsilon_{1} = \partial_{1}\epsilon_{1} + \partial_{2}\epsilon_{2} \implies \partial_{1}\epsilon_{1} = \partial_{2}\epsilon_{2}}.

For {\mu = \nu = 2}, we similarly arrive at {2\partial_{2}\epsilon_{2} = \partial_{1}\epsilon_{1} + \partial_{2}\epsilon_{2} \implies \partial_{2}\epsilon_{2} = \partial_{1}\epsilon_{1}}.

Now, for the mixed case where {\mu = 1} and {\nu = 2} (and, equivalently by symmetry, {\mu = 2} and {\nu = 1}), we arrive at {\partial_{1}\epsilon_{2} + \partial_{2}\epsilon_{1} = 0}. It follows, {\partial_{1}\epsilon_{2} = -\partial_{2}\epsilon_{1}}.

Notice, from these results, we have two distinguishable equations:

\displaystyle \partial_{1}\epsilon_{1} = \partial_{2}\epsilon_{2} \ \ (2)

\displaystyle \partial_{1}\epsilon_{2} = -\partial_{2}\epsilon_{1} \ \ (3)

If it is not obvious to the reader, it can be explicitly stated that these are nothing other than the Cauchy-Riemann equations. What this means, firstly, is that the conformal Killing equations reduce to the Cauchy-Riemann equations. Secondly, in 2-dimensions the infinitesimal conformal transformations that are of primary focus obey these equations.
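As a quick check (again, an addition to these notes, assuming only the Python library SymPy), we can verify equations (2) and (3) explicitly for a sample holomorphic transformation, here taken to be {\epsilon(z) = z^2}:

import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
z = x1 + sp.I * x2

eps = z**2            # sample holomorphic epsilon(z)
eps1 = sp.re(eps)     # epsilon_1 = x1^2 - x2^2
eps2 = sp.im(eps)     # epsilon_2 = 2 x1 x2

# equation (2): d_1 eps_1 = d_2 eps_2
print(sp.simplify(sp.diff(eps1, x1) - sp.diff(eps2, x2)))  # prints 0
# equation (3): d_1 eps_2 = -d_2 eps_1
print(sp.simplify(sp.diff(eps2, x1) + sp.diff(eps1, x2)))  # prints 0

The same check goes through for any function of {z} alone, which is exactly the statement that holomorphic maps give the conformal transformations in 2-dimensions.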

Why is this notable? We know that in the theory of complex variables we're working with analytic functions. As Polchinski explicitly communicates (p.34), the advantage here is that in working with analytic functions we can employ the coordinate convention {(z, \bar{z})}. This means, firstly, that conformal transformations correspond to holomorphic and antiholomorphic coordinate transformations. These coordinate transformations are given by,

\displaystyle z \rightarrow f(z), \ \ \ \bar{z} \rightarrow \bar{f}(\bar{z}) \ \ (4)

Following Polchinski (pp.33-34), we are working with complex coordinates {z = \sigma^1 + i\sigma^2} and {\bar{z} = \sigma^1 - i\sigma^2}. It is also the case that {d\sigma^{1}d\sigma^{2} = \frac{1}{2}d^{2}z}. More will be said about this in the next section. Meanwhile, we may also define, in the Euclidean signature and with complex variables,

\displaystyle \epsilon^{z} = \epsilon^1 + i\epsilon^2, \ \ \bar{\epsilon}^{\bar{z}} = \epsilon^1 - i\epsilon^2 \ \ (5)

In which {\epsilon} and {\bar{\epsilon}} are infinitesimal conformal transformations, holomorphic and antiholomorphic respectively. This implies that {\partial_{\bar{z}}\epsilon = 0} and {\partial_{z}\bar{\epsilon} = 0}.

And so, in terms of infinitesimal conformal transformations, we may write a change of holomorphic and antiholomorphic coordinates in an infinitesimal form,

\displaystyle z \rightarrow z^{\prime} = z + \epsilon(z), \ \ \ \bar{z} \rightarrow \bar{z}^{\prime} = \bar{z} + \bar{\epsilon}(\bar{z}) \ \ (6)

2. Generators of the 2-dimensional Conformal Group

What we want to do is obtain the basis of generators that produce the algebra of infinitesimal conformal transformations. To do so, we expand {\epsilon} and {\bar{\epsilon}} in a Laurent series obtaining the result,

\displaystyle \epsilon(z) = \sum_{n \in \mathbb{Z}} \epsilon_{n} z^{n+1} \ \ (7)

And,

\displaystyle \bar{\epsilon}(\bar{z}) = \sum_{n \in \mathbb{Z}} \bar{\epsilon}_{n} \bar{z}^{n+1} \ \ (8)

With the basis of generators that generate the infinitesimal conformal transformations given by,

\displaystyle l_{n} = -z^{n+1}\partial_{z} \ \ (9)

And,

\displaystyle \bar{l}_{n} = -\bar{z}^{n+1}\partial_{\bar{z}} \ \ (10)

Classically, the above generators satisfy the Witt algebra (quantum mechanically, this algebra acquires a central extension and becomes the Virasoro algebra, which we will meet later). Moreover, these generators form the set {\{l_{n},\bar{l}_{n}\}}, and this set generates the algebra of infinitesimal conformal transformations for {n \in \mathbb{Z}}. The algebraic structure is given by the commutation relations,

\displaystyle [l_m, l_n] = (m - n)l_{m + n} \ \ (11)

\displaystyle [\bar{l_m}, \bar{l_n}] = (m - n)\bar{l}_{m + n} \ \ (12)

\displaystyle [l_m, \bar{l_n}] = 0 \ \ (13)
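These commutators can also be checked symbolically (an added sketch, not part of the original notes, assuming the Python library SymPy): we realise {l_{n} = -z^{n+1}\partial_{z}} as an operator acting on a generic test function and compare both sides of (11) for a few sample values of {m} and {n}.

import sympy as sp

z = sp.symbols('z')
f = sp.Function('f')(z)   # generic test function

def l(n, expr):
    """Action of the generator l_n = -z^(n+1) d/dz on expr."""
    return -z**(n + 1) * sp.diff(expr, z)

def commutator(m, n, expr):
    return l(m, l(n, expr)) - l(n, l(m, expr))

for m, n in [(2, -1), (1, 1), (3, -2), (0, 2)]:
    lhs = commutator(m, n, f)
    rhs = (m - n) * l(m + n, f)
    assert sp.simplify(lhs - rhs) == 0

print('Relation (11) verified for the sample (m, n) pairs.')

The antiholomorphic relation (12) follows from the identical computation in {\bar{z}}, and (13) holds because the two sets of generators act on independent variables.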

Relations (11)-(13) are precisely the Witt algebra (Weigand, p.68). Also important, a particular subset of these generators generates the Möbius group (Weigand, p.69). To see this, consider the special case of {l_{0, \pm 1}} and {\bar{l}_{0, \pm 1}}. Acting with an infinitesimal coordinate transformation, we find in each case:

* {l_{-1} = -\partial_z} generates rigid translations of the form {z^{\prime} = z - \epsilon};

* {l_{0} = -z\partial_{z}} generates dilatations, {z^{\prime} = z - \epsilon z};

* {l_{1} = -z^{2}\partial_{z}} generates special conformal transformations, {z^{\prime} = z - \epsilon z^2}.

Collecting these, the globally defined conformal diffeomorphisms may be described as stated below, which is the Möbius transformation,

\displaystyle z \rightarrow \frac{az + b}{cz + d}

Where {ad - bc = 1}. A list of further constraints can be reviewed in Weigand (p.69).
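One way to make the group structure explicit (an illustrative addition, again assuming only SymPy) is to note that composing two Möbius maps corresponds to multiplying their associated {2 \times 2} matrices of unit determinant:

import sympy as sp

z = sp.symbols('z')

def mobius(M, w):
    """Apply the Mobius map associated with the 2x2 matrix M to w."""
    (a, b), (c, d) = M.tolist()
    return (a * w + b) / (c * w + d)

M1 = sp.Matrix([[1, 2], [0, 1]])                  # a translation, det = 1
M2 = sp.Matrix([[3, 0], [0, sp.Rational(1, 3)]])  # a dilatation, det = 1

composed = mobius(M1, mobius(M2, z))   # act with M2 first, then M1
from_product = mobius(M1 * M2, z)      # single map from the matrix product

print(sp.simplify(composed - from_product))  # prints 0

This is the usual statement that the globally defined conformal maps form (a quotient of) {SL(2, \mathbb{C})}.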

References

Joseph Polchinski. (2005). "String Theory: An Introduction to the Bosonic String", Vol. 1.

Joshua D. Qualls. (2016). "Lectures on Conformal Field Theory" [lecture notes].

Timo Weigand. (2015/16). "Introduction to String Theory" [lecture notes].

Kevin Wray. (2009). "An Introduction to String Theory" [lecture notes].

Stringy Things

Notes on string theory: Generators of conformal transformations

1. Infinitesimal Generators of the Conformal Group

In the last post, we considered a brief introduction to conformal field theory in string theory. We also began to study the d-dimensional conformal group, as described in equations (1) and (2). What we want to do now is study the infinitesimal generators of the d-dimensional conformal group, and in doing so we will refer back to these equations.

In other words, if we assume the background is flat, such that {g_{\mu \nu} = \eta_{\mu \nu}}, the essential point of interest here concerns the infinitesimal transformation of the coordinates. Returning to (2) in the last post, infinitesimal coordinate transformations may be considered generally in the form {x^{\mu} \rightarrow x^{\prime \mu} = x^{\mu} + \epsilon^{\mu}(x) + \mathcal{O}(\epsilon^{2})}. For the scaling factor {\Omega (x)} we have {\Omega (x) = e^{\omega(x)} = 1 + \omega(x) + \mathcal{O}(\omega^{2})}.

Now, the question remains: in the case of an infinitesimal transformation, what happens to the metric? To first order in {\epsilon}, it is shifted by the symmetrised gradient of {\epsilon}. To see why this is the case, we may consider (1) from the linked discussion. Moreover, if, as above, we take an infinitesimal coordinate transformation, then we have

\displaystyle g_{\mu \nu}^{\prime} (x + \epsilon) = g_{\mu \nu} + \partial_{\mu}\epsilon_{\nu} + \partial_{\nu}\epsilon_{\mu} + \mathcal{O}(\epsilon^{2}) \ \ (3)

However, to satisfy the condition of a conformal transformation, (3) must be equal to (2) of the last post, namely {\Omega(x)g_{\mu \nu}}. So we equate the two,

\displaystyle \Omega(x)g_{\mu \nu}(x) = g_{\mu \nu} + \partial_{\mu}\epsilon_{\nu} + \partial_{\nu}\epsilon_{\mu} \ \ (4)

Where, recalling {\Omega(x) = 1 + \omega(x)}, the function {\omega(x)} denotes a very small deviation from the identity. Subtracting {g_{\mu \nu}} from both sides then gives,

\displaystyle \omega(x) g_{\mu \nu} = \partial_{\mu}\epsilon_{\nu} + \partial_{\nu}\epsilon_{\mu} \ \ (5)

For this to make sense, we must find some expression for the scale term {\omega(x)}. One way to proceed is to multiply both sides of (5) by {g^{\mu \nu}}. As we are working in {d} spacetime dimensions, it follows that {g_{\mu \nu}g^{\mu \nu} = d}. Hence,

\displaystyle \omega(x) g_{\mu \nu}g^{\mu \nu} = (\partial_{\mu}\epsilon_{\nu} + \partial_{\nu}\epsilon_{\mu})g^{\mu \nu}

\displaystyle \omega(x) d = g^{\mu \nu}\partial_{\mu}\epsilon_{\nu} + g^{\mu \nu}\partial_{\nu}\epsilon_{\mu} \ \ (6)

The left-hand side of (6) is simple to manage. Focusing on the right-hand side, we raise the index on {\epsilon} in each term; both terms are then simply the divergence {\partial \cdot \epsilon}, which gives the usual factor of {2}. Hence, for the RHS of (6),

\displaystyle = \partial_{\mu}\epsilon^{\mu} + \partial_{\nu}\epsilon^{\nu}

\displaystyle = 2 \partial_{\mu}\epsilon^{\mu} \ \ (7)

Therefore, substituting (7) into the RHS of (6) we get,

\displaystyle \omega(x) d = 2 \partial_{\mu}\epsilon^{\mu} \ \ (8)

Now, if we divide both sides by {d} and simplify, we end up with

\displaystyle \omega(x) = \frac{2}{d} \partial_{\mu}\epsilon^{\mu} = \frac{2}{d} (\partial \cdot \epsilon) \ \ (9)

Substituting (9) back into (5), we may note,

\displaystyle \partial_{\mu}\epsilon_{\nu} + \partial_{\nu}\epsilon_{\mu} = \frac{2}{d}(\partial \cdot \epsilon) g_{\mu \nu} \ \ (10)

Where we can see that the infinitesimal conformal transformation, {\epsilon}, obeys the above equation. What is significant about this equation? It is the conformal Killing equation. And, it turns out, solutions to the above correspond to infinitesimal conformal transformations. Let us now study these solutions.

To simplify things, notice firstly that we can define {\partial^{\mu}\partial_{\mu} = \Box}. Acting with {\partial^{\mu}} on the left and right-hand sides of the conformal Killing equation we obtain the following,

LHS:

\displaystyle \partial^{\mu}\partial_{\mu}\epsilon_{\nu} + \partial^{\mu}\partial_{\nu}\epsilon_{\mu} = \Box\epsilon_{\nu} + \partial_{\nu}(\partial \cdot \epsilon)

RHS:

\displaystyle \partial^{\mu}\left(\frac{2}{d}(\partial \cdot \epsilon)g_{\mu \nu}\right) = \frac{2}{d}\partial_{\nu}(\partial \cdot \epsilon)

Putting everything together, equating both sides, and then rearranging terms we find,

\displaystyle \Box\epsilon_{\nu} + \left(1 - \frac{2}{d}\right)\partial_{\nu}(\partial \cdot \epsilon) = 0 \ \ (11)

It is clear that when {d = 2}, equation (11) reduces to

\displaystyle \Box\epsilon_{\nu} = 0 \ \ (12)

For {d > 2}, we arrive at the following commonly cited solutions that one will find in most texts (a quick symbolic check of all four is sketched just after this list):

1) {\epsilon^{\mu} = a^{\mu}} which represents a translation ({a^{\mu}} is a constant).

2) {\epsilon^{\mu} = \lambda x^{\mu}} which represents a scale transformation (dilatation), with {\lambda} a constant.

3) {\epsilon^{\mu} = \omega^{\mu}_{\ \nu}x^{\nu}} which represents a rotation, where {\omega_{\mu \nu}} is an antisymmetric tensor. Note, these antisymmetric parameters are exactly those of the Lorentz group, so this corresponds to an infinitesimal Poincaré transformation (as does the translation above).

4) {\epsilon^{\mu} = b^{\mu}x^{2} - 2x^{\mu}(b \cdot x)} which represents a special conformal transformation.
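Here is the promised check (an addition to the notes, assuming the Python library SymPy): for each candidate {\epsilon^{\mu}} we verify the conformal Killing equation, written for simplicity with a flat Euclidean metric so that raising and lowering indices is trivial; {d = 3} is chosen only to keep the symbolic computation small.

import sympy as sp

d = 3
x = sp.symbols('x0:%d' % d, real=True)
b = sp.symbols('b0:%d' % d, real=True)
lam = sp.Symbol('lam', real=True)

delta = sp.eye(d)

# an antisymmetric parameter matrix for rotations
w = sp.Matrix(d, d, lambda i, j: sp.Symbol('w%d%d' % (i, j)))
w = (w - w.T) / 2

x2 = sum(xi**2 for xi in x)
bx = sum(bi * xi for bi, xi in zip(b, x))

candidates = {
    'translation': [sp.Symbol('a%d' % mu) for mu in range(d)],
    'dilatation': [lam * x[mu] for mu in range(d)],
    'rotation': [sum(w[mu, nu] * x[nu] for nu in range(d)) for mu in range(d)],
    'SCT': [b[mu] * x2 - 2 * x[mu] * bx for mu in range(d)],
}

for name, eps in candidates.items():
    div = sum(sp.diff(eps[mu], x[mu]) for mu in range(d))
    ok = all(
        sp.simplify(sp.diff(eps[nu], x[mu]) + sp.diff(eps[mu], x[nu])
                    - sp.Rational(2, d) * div * delta[mu, nu]) == 0
        for mu in range(d) for nu in range(d)
    )
    print(name, ok)   # each case prints True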

From these solutions, and with the inclusion of the Poincaré group, we have the collection of transformations known as the conformal group in d-dimensions. This group is isomorphic to {SO(2, d)}.

To complete our discussion, we note that the conformal group has the following representation in terms of differential operators (one of the corresponding commutators is checked symbolically just after this list):

1) {P_{\mu} = -i\partial_{\mu}}, which generates translations and is inherited from the Poincaré group.

2) {D = -ix \cdot \partial}, which generates scale transformations.

3) {J_{\mu \nu} = i(x_{\mu}\partial_{\nu} - x_{\nu}\partial_{\mu})}, which generates rotations.

4) {K_{\mu} = i(x^2\partial_{\mu} - 2x_{\mu}(x \cdot \partial))}, which generates special conformal transformations.
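As a final illustrative sketch (again an addition, assuming SymPy), we can realise {D} and {P_{\mu}} as differential operators on a test function and confirm the standard commutator {[D, P_{\mu}] = iP_{\mu}}, which says that {P_{\mu}} carries scaling dimension one:

import sympy as sp

d = 3
x = sp.symbols('x0:%d' % d, real=True)
f = sp.Function('f')(*x)

def P(mu, expr):
    """Translation generator P_mu = -i d_mu acting on expr."""
    return -sp.I * sp.diff(expr, x[mu])

def D(expr):
    """Dilatation generator D = -i x.d acting on expr."""
    return -sp.I * sum(x[nu] * sp.diff(expr, x[nu]) for nu in range(d))

for mu in range(d):
    lhs = D(P(mu, f)) - P(mu, D(f))   # [D, P_mu] f
    rhs = sp.I * P(mu, f)             # i P_mu f
    assert sp.simplify(lhs - rhs) == 0

print('[D, P_mu] = i P_mu verified on a generic test function.')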

This completes our review of the d-dimensional conformal group and its algebra. In the next post, we will study the conformal group in 2-dimensions.

References

Joseph Polchinski. (2005). "String Theory: An Introduction to the Bosonic String", Vol. 1.

Timo Weigand. (2015/16). "Introduction to String Theory" [lecture notes].

Kevin Wray. (2009). "An Introduction to String Theory" [lecture notes].

Stringy Things

Notes on string theory: Introduction to Conformal Field Theory

1. Introduction

The aim of this post is to introduce the topic of Conformal Field Theories (CFTs) in string theory. In general, CFTs allow us to describe a number of systems in different areas of physics. To list one example, conformal invariance plays an important role in condensed matter physics, particularly in the context of second order phase transitions in which the critical behaviour of systems may be described. But as we are focused on the stringy case, we may motivate the study of CFTs as follows: 2-dimensional CFTs prove very important when it comes to the study of the physical dynamics of the worldsheet.

In past posts we already observed, for instance, how the internal modes along the string relate to conformal transformations. Indeed, upon fixing the worldsheet diffeomorphism plus Weyl symmetries, the result is precisely a CFT. There are many other topics that leverage the conformal symmetry of the worldsheet theory, including how we describe string-on-string interactions and how we compute scattering amplitudes. But perhaps one of the ultimate motivational factors is that, as an essential tool in perturbative string theory, CFTs enable the study of the quantum field theory of the worldsheet. There is also the added benefit that many CFTs are completely solvable.

2. Conformal Group in d-dimensions

Before we proceed with a study of conformal field theories (beginning with Chapter 2 in Polchinski), it is useful to first think generally about the conformal group and its algebra.

Formally, a CFT is a quantum field theory that is invariant under the conformal group. To give some geometric intuition, the conformal group may be described as follows: it is the set of transformations that preserve local angles but not necessarily distances. This includes, in particular, invariance under rescaling; and in 2-dimensions a conformal mapping is quite simply a biholomorphic mapping.

We may give further intuition about the conformal group by revisiting a more familiar symmetry group. Recall in previous chapters a discussion about the Poincaré group. One will remember that transformations under the D-dimensional Poincaré group combine translations and Lorentz transformations. These may be thought of as symmetries of flat spacetime, such that the flat metric is left invariant.

The conformal group includes the Poincaré group, with the addition of extra spacetime symmetries. It has already been alluded to, for example, that one type of conformal transformation is a scale transformation, in which we may act by zooming in and out of some region of spacetime. This extra spacetime symmetry is an act of rescaling.

More precisely, the conformal group may be thought of as the subgroup of the group of general coordinate transformations (or diffeomorphisms). Consider the following. If one has a metric {g_{ab}(x)} (which is a 2-tensor) in d-dimensional spacetime, it follows that under the change of coordinates {x \rightarrow x^{\prime}}, we have a transformation of the general form

\displaystyle g_{\mu \nu}(x) \rightarrow g^{\prime}_{\mu \nu}(x^{\prime}) = \frac{\partial x^{a}}{\partial x^{\prime \mu}}\frac{\partial x^{b}}{\partial x^{\prime \nu}} \ g_{ab}(x) \ \ (1)

Now, let us consider some function {\Omega(x)} of the spacetime coordinates. If a conformal transformation is a change of coordinates such that the metric changes by an overall factor, then we may consider how the metric transforms as

\displaystyle g_{\mu \nu}(x) \rightarrow g^{\prime}_{\mu \nu}(x^{\prime}) = \Omega (x)g_{\mu \nu}(x) \ \ (2)

For some scaling factor {\Omega(x)}. This is a conformal transformation of the metric, and it is why angles are preserved while lengths are not. Because this particular subgroup of coordinate transformations preserves angles while distorting lengths, conformally invariant theories possess no intrinsic scale of length, mass or energy; there is no reference scale in the theory. This is also why, in our case, CFTs prove interesting: they lend themselves quite naturally to the study of massless excitations.
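To make (2) a little more concrete (this example is an addition to the notes, assuming the Python library SymPy), consider the sample 2-dimensional map {(x, y) \rightarrow (x^2 - y^2, 2xy)}, i.e. {z \rightarrow z^2}. Pulling the flat metric back through the Jacobian gives a matrix proportional to the identity, which is precisely the statement that the map is conformal, here with scale factor {4(x^2 + y^2)}:

import sympy as sp

x, y = sp.symbols('x y', real=True)

# the sample coordinate transformation (z -> z^2 in real coordinates)
u = x**2 - y**2
v = 2 * x * y

J = sp.Matrix([[sp.diff(u, x), sp.diff(u, y)],
               [sp.diff(v, x), sp.diff(v, y)]])

# pulled-back flat metric: proportional to the identity for a conformal map
pulled_back = (J.T * J).applyfunc(sp.simplify)
print(pulled_back)   # Matrix([[4*x**2 + 4*y**2, 0], [0, 4*x**2 + 4*y**2]])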

Now, thinking again of the conformal transformation described in (2), another important and directly related point concerns the background metric. It is common in the literature to take the background to be flat. It also turns out, and this will become more apparent later on, that the background metric can be either fixed or dynamical (Tong, p.61). Later, when we work in the Polyakov formalism, the metric is dynamical; in this case the conformal transformation is a diffeomorphism, and in fact a residual gauge symmetry which, we will learn, can be undone by a Weyl transformation. But before that, in simpler examples, the background metric will be fixed, and the transformation is then a genuine physical symmetry: a global symmetry with corresponding conserved currents. The charges associated with these currents are the Virasoro generators, which is something we will study later on.

References

Katrin Becker, Melanie Becker, John H. Schwarz. (2006). "String Theory and M-Theory: A Modern Introduction".

Joseph Polchinski. (2005). "String Theory: An Introduction to the Bosonic String", Vol. 1.

David Tong. (2009). "String Theory" [lecture notes].

Kevin Wray. (2009). "An Introduction to String Theory" [lecture notes].
