
x^x^x^2017=2017 – A fun problem with a unique and sad history


I was sent this problem a few weeks ago by a physicist with whom I work privately on a weekly basis (alongside my school studies). I found it a lot of fun to think about, and was eager to write up the solution.

Before getting into the solution: from what I understand, this problem is part of a collection of difficult problems known as “coffins”. The history of the collection emerges from anti-Semitic discrimination, in which the Mathematics Department of Moscow State University, the elite mathematics school in Russia, devised the “coffins” as a way of “actively trying to keep Jewish students (and other “undesirables”) from enrolling”.

In short, Jewish students, as well as others considered “undesirable”, were given this different set of killer problems on the entry oral exams. The problems were designed to be very difficult, yet to have deceptively simple solutions that were nearly impossible to discern.

According to this summary of the history, “These problems were carefully designed to have elementary solutions (so that the Department could avoid scandals) that were nearly impossible to find. Any student who failed to answer could be easily rejected, so this system was an effective method of controlling admissions. These kinds of math problems were informally referred to as ‘coffins’. ‘Coffins’ is the literal translation from Russian; in English these problems are sometimes called ‘killer’ problems.”

I recall reading about a similar practice by schools elsewhere in Europe. In terms of the “coffins” in particular, a more detailed account can be read here.

***

With the unique history noted, what about the solution?

One might instinctively think of taking logs until an identity pops out, similar, perhaps, to when setting up to differentiate a composite exponential function. This was my most immediate thought. But, as far as I am aware, it doesn't work, and things quickly become very convoluted.

So, what next? Well, one might gain some inspiration by thinking of tetration and “power towers” (which are cool and would be fun to write about another time):

$$ a^{a^{a^{a^{a}}}} $$

But the emphasis, I think, is to simplify one's approach a little, because the solution itself is actually deceptively simple.

For example, what if we just think of $x^{2017}$ and equate that to 2017? We would get $x^{2017} = 2017$.

Building from this, a pattern starts to develop, with a very cool and fun order of substitution: beginning with the highest power and then, in a manner of speaking, working our way down, cancelling as we go.

The complete solution I've prepared in LaTeX.
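In outline, the idea runs like this (a sketch rather than the full write-up): take $x = 2017^{1/2017}$, so that $x^{2017} = 2017$. Substituting from the innermost exponent outward, each level of the tower collapses in turn:

$$ x^{x^{x^{2017}}} = x^{x^{2017}} = x^{2017} = 2017 $$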

I think it is beautiful how the equation simplifies to a cascade of higher powers that cancel!

 

 


The Road to Reality

I have been reading, working through and thinking a lot about Roger Penrose’s magnum opus, The Road to Reality: A Complete Guide to the Laws of the Universe.

It is a book that I cannot speak highly enough about, possibly one of the best books that I have read so far in my lifetime.

My first engagement with the book was a couple of months ago, when I jumped immediately into specific sections of discussion spread throughout its 34 chapters. These sections were of immediate interest to me, as I already had an appreciation for Professor Penrose and already had some sense of certain parts of the book's contents. For example, I was keen to learn more about and understand Penrose's Twistor Theory. As Penrose acknowledges in The Road to Reality, it may not have completely worked out; but Twistor Theory is being used today in a number of interesting and exciting ways. One of my own points of intrigue here concerns the emergence of the amplituhedron, a fascinating geometrical object that excites me greatly as a young theoretical particle physics student.

For this reason, one section of the book that I had to immediately search out concerns Penrose’s discussion on the geometry of twistors and the twistor description of massless fields, as this has very much been in my thoughts. Additionally, I was incredibly eager to read his descriptions of Grassmannian space (among other similar things).

Penrose's discussion regarding wave function collapse was another section I had earmarked. On that note, it is interesting how he seems to fall on the side of objective collapse theory – and also the many worlds interpretation – as opposed to the more textbook Copenhagen interpretation. The more I study quantum mechanics in depth, and the more I learn broadly as a theoretical physics student, the more I too sway toward the many-worlds interpretation, on the basis of the total evidence I have absorbed thus far. I also find the Copenhagen interpretation less than satisfactory. Where I will eventually end up in such debates is to be determined, but I was interested in reading Penrose's take on the matter. (I am also very intrigued by the GRW interpretation, which seems tantalising in that, in seeking to avoid the quantum measurement problem, it almost actively works toward a bridge between the classical and quantum worlds. Additionally, my early interests in wave-particle duality led me to develop an interest in the Pilot Wave/de Broglie-Bohm interpretation.)

In any case, Penrose seems to view the wave function as a physical wave, which runs contrary to the established view in many university courses (that I am aware of). I'm not entirely sure about the contents of Penrose's arguments on this level, particularly his theory of dual fields, but it is something that I will certainly engage with more deeply a little later.

It is also interesting how Penrose seems to suggest that a successful theory of quantum gravity could very well be a deterministic yet non-computable theory, another section of discussion that I was eager to jump into immediately.

In the future, once I have considered the totality of The Road to Reality and thought more about the entire body of work on display, I would very much like to write a deeply engaged review and technical reading of the book as a whole (time permitting). After getting a taste of very specific engagements, I am already well on my way to digesting all of its pages, from page 1 to 1045.

But in the present moment, I simply wanted to write an introductory note about the book. Some have described it as one of the most important books of our time, and I would very much have to agree. It is a truly foundational work, the sort of book that we rarely see these days amid the standard of reduced, quick-to-consume literature that moves away from considerate, original and comprehensive thinking.

What I will say, too, is that regardless of whether one agrees or disagrees with certain interpretations presented by Penrose – for example, debates about his interpretation of the wave function – this is a book that every physics student should possess. It is an absolute must-read. I think it is also a book for the general reader with some appreciation for mathematics and physics. I say this while also emphasising that The Road to Reality is more than a book that engages fundamentally with the study of the physical universe and the foundations of our best mathematical theories to date. It is a credit to Penrose's general brilliance and the comprehensive nature of his thinking that the book is also magnificently philosophical, in the best sense of philosophy, as he connects fundamental insights offered by key mathematical and physical theories with the study of the fundamental nature of knowledge, reality, and existence.


How I taught myself calculus – Some intuitive reasoning

When I first taught myself calculus, prior to more formal learning, I already had an idea of the concept of differentiation as a tool to help us measure the rate of change of a curve. I also already had some idea of higher differentials, a vague sense of integration, and some basic suspicions with regard to the meaning of the fundamental theorem of calculus, as I had picked up on these concepts from various books on physics and higher mathematics. Similarly, my early interests in physics led to an awareness that there must be some way of measuring things like acceleration when it is not constant and thus, graphically, not linear.

So there was always some idea of calculus and its connections, but how to prove them and come to truly understand them?

When trying to understand some of its core concepts, I didn't have rigorous formal proofs or derivations at hand. I did have a bit of guidance, but mostly I wanted to think through why calculus works and why some of its deeper connections and applications make sense. I enjoy thinking from first principles, and I believe that mathematics is much more than 'plug-and-chug' formulas. To really appreciate and understand mathematical concepts and applications, first-principles exercises are important. And so, given that a lot of my early points of entry were based on intuitive reasoning, the following video series builds on knowledge of things like linear graphs and how the area under a line represents the distance traveled. The series reflects some of my early thoughts on calculus, or at least some of the intuitive reasoning I used. It was fun to do, even just for nostalgic purposes, but it may also prove useful for others learning the basics of differentiation and integration.

In future videos I will discuss more rigorous and formal proofs – there are a few that I know, and a couple of them are quite beautiful. In the future, I would also like to make a similar introductory series on multivariable calculus and maybe also another separate series on linear algebra.


Several months ago I made a video on a particular terminal access code decrypt problem that I had come across in No Man's Sky (see above). The given sequence was 1, 2, 6, 24, 120. In order to decrypt the terminal and retrieve the valuable information, it was my job in the game to find the unknown 6th term. Can you work it out?

What made this particular sequence interesting, and worth commenting on, is how from a particular vantage point it can appear logically inconsistent. The apparent breakdown in logic is such that one might be led to believe that, in order to find the unknown 6th term, they would have to ignore the 1st term of the sequence. But, as I explain, the sequence makes perfect sense: the deeper realisation here concerns how and why $0! = 1$.
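One way to see it (a sketch, without giving away the whole explanation in the video): read the $n$-th term as $n!$, so that each term is the previous one multiplied by $n$,

$$ a_n = n \cdot a_{n-1}, \qquad 1,\; 2,\; 6,\; 24,\; 120,\; \ldots $$

The rule does not break down at the 1st term, because it extends back one more step to $a_0 = 0! = 1$; the unknown 6th term is then just $6 \cdot 120$.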


Euler’s Identity Proof

This quickly became one of my favourite derivations. And with the launch of my blog, I couldn’t wait to write about it.

First, an introduction. This is Euler's formula:

$$ e^{ix} = \cos x + i \sin x $$

When we set $x = \pi$, this formula evaluates to what is called Euler's identity:

$$ e^{i\pi} + 1 = 0 $$
Now, before any further explanation and before I actually present the proof (skip to the bottom, if you wish), I should like to pause and take a few moments to wax eloquent. Consider these words as part of the dramatic build-up. Because what we have before us is, in my opinion, one of the most breathtaking examples of mathematical beauty.

If on first look Euler's identity means nothing to you, that's ok. In a few moments I'll hopefully be able to assist in understanding why the theorem is so remarkable and deep. Meanwhile, we can preface the actual derivation by highlighting that the formula itself is to math and science what Mozart or Beethoven are to music. The physicist Richard Feynman, one of my personal idols, once called the equation “our jewel” and “the most remarkable formula in mathematics”. Mathematics professor Keith Devlin has been quoted as describing Euler's identity as being “like a Shakespearean sonnet that captures the very essence of love, or a painting that brings out the beauty of the human form that is far more than just skin deep, Euler's equation reaches down into the very depths of existence”.

This might seem like hyperbole, but Devlin's description really does capture the essence of what is so significant about the theorem, possibly the most beautiful to have been discovered by human beings. It is so profound and important that, at least for me, it stands as one of the genuinely deep historical human artifacts. I would place it alongside Maxwell's equations and others.

What I would personally say is that this particular theorem and its derivation (below) are especially historical and immediately intuitive, offering a kind of depth to humanity and the project of human civilization. In that it links five fundamental mathematical constants, capturing the relationship between calculus (series expressions), trigonometry (sines and cosines), and the algebra of complex numbers (the exponentiation of a complex number), I would describe it as historically significant: it represents a moment in which human beings might glimpse some deeper truth of nature.

Indeed, it is so profound that there are even some people who, perhaps misguidedly, argue that the theorem proves the existence of god! Not that I am religious; nor am I a participant in religious cognition. It seems that human beings can find evidence for god or some other deity in almost anything, including a piece of toast. That said, it is not unreasonable to suggest that the mathematical conclusion that we’re about to explore suggests some order to mathematics and also to nature, especially if you think about the derivation in all its nuance. This is a purely objective suggestion.

In our limited human minds, it may only be a scratch at the surface. But this theorem would seem a remarkable fact to be discovered, much like any other law of nature. It certainly would seem to offer yet another example in the history of science and mathematics that highlights the objective mathematical character of reality. And if you don’t believe this, I highly suggest you study it for yourself.

Euler’s identity proof (Taylor series)

There are a number of ways to derive Euler's identity. Some derivations are more elegant than others. The one I present is well known. And while not necessarily the nicest or most elegant or most rigorous, I think it is the proof that is most deep, historical and dramatic. It also seems the most accessible, which is a good thing, because I think more people need to appreciate the significance of the following result.

For the sake of saving space, I am not going to be overly rigorous in this blog. I am not going to offer a complete and rigorous proof, and so one will have to trust some of my basic assumptions. I am also not going to explain some of the basic tools and concepts used. When I have some spare time, I would like to make a video on this derivation and lay it out in detail. Meanwhile, if at any point you don't believe me, or wish for a fuller treatment, there are many articles, papers and videos freely available online. Sal Khan already offers a decent seven-part series that serves as an accessible introduction.

To start, we should look to the Maclaurin series. The Maclaurin series is a special case of the Taylor series, which itself is a specific example of a power series. In simpler terms, one could think of the Taylor series as an extension of the logic of approximating a function with polynomials near a point, built from its derivatives (taking $n$ derivatives gives the $n$-th order approximation); the Maclaurin series is the special case centred at zero.

This is the Maclaurin series, which I've written in LaTeX:
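$$ f(x) = f(0) + f'(0)\,x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + \cdots = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\,x^n $$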

Assuming one has a basic understanding of the above, we can begin by writing out the series for sin(x), cos(x) and $e^x$. Here are the basic expansions:
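$$ \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots $$

$$ \cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots $$

$$ e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots $$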

 

You will have to take my word on the expansions. If you would like to understand how we arrive at these expansions, it is fairly easy to research on the web.

In looking at the expansions, notice the series for sin(x) and cos(x). The sin(x) series contains the terms $x^n/n!$ for the odd values of $n$, while the cos(x) series contains $x^n/n!$ for the even values of $n$. And in the expansion of $e^x$ we see both the odd and even values of $n$. So what is going on?

Let's analyse by adding the series for sin(x) and cos(x) together:
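$$ \sin x + \cos x = 1 + x - \frac{x^2}{2!} - \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} - \frac{x^6}{6!} - \frac{x^7}{7!} + \cdots $$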

What you may have noticed is how sin(x) + cos(x) is similar to the expansion for e^x. In fact, it is clear that they have all the same terms. The only difference is the signs.

In sin(x) + cos(x) we see the pattern: positive positive, negative negative, positive positive, and so on. In the exponential expansion, each term is positive.

To make it more clear, see the following:
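$$ e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \frac{x^6}{6!} + \cdots $$

$$ \sin x + \cos x = 1 + x - \frac{x^2}{2!} - \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} - \frac{x^6}{6!} - \cdots $$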

It’s really very peculiar that this pattern has emerged. Sin(x) and cos(x) have no direct connection to e^x. The former come from trigonometry. The latter from exponentiation. When we add the polynomial representations of the two fundamental trigonometric functions together, we arrive almost exactly at the polynomial representation of e^x.

Needless to say there is an indication here that something significant is at work, that we’re beginning to touch on something deep. And it’s around this point, when I first learned this derivation, that I began to get excited.

So why the curiously similar pattern? Let’s take another step forward.

To do that, we’re now going to consider the expansion of the function e^ix, where i is the imaginary unit. All we need to do here is substitute into our expansion for e^x, so that every time x appears in the expansion we substitute ix. The series expansion looks like this:
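$$ e^{ix} = 1 + ix + \frac{(ix)^2}{2!} + \frac{(ix)^3}{3!} + \frac{(ix)^4}{4!} + \frac{(ix)^5}{5!} + \frac{(ix)^6}{6!} + \frac{(ix)^7}{7!} + \cdots $$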

And now we can simplify, using the values of the imaginary unit $i$ when raised to successive powers ($i^2 = -1$, $i^3 = -i$, $i^4 = 1$, and so on). It turns out there is yet another pattern in the signs. Notice how, wherever the imaginary unit survives, the placement of positive and negative terms follows the series for sin(x). Looking at the simplified expansion of $e^{ix}$ below, also notice the overall pattern of the signs across the whole series.
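$$ e^{ix} = 1 + ix - \frac{x^2}{2!} - \frac{ix^3}{3!} + \frac{x^4}{4!} + \frac{ix^5}{5!} - \frac{x^6}{6!} - \frac{ix^7}{7!} + \cdots $$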

But we’re still not done. Because we can re-write e^ix by separating out the imaginary terms and the real terms.
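Doing so gives:

$$ e^{ix} = \left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots\right) + i\left(x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots\right) $$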

Notice that for the real terms, this is actually the Maclaurin representation of cos(x). And notice, too, that for the imaginary terms, this is the Maclaurin representation for sin(x).

Very cool! But we’re still not done. Because here comes the moment – the awesome and awe-inspiring conclusion.

What we end up with here are the series expansions for cos(x) and sin(x), which we discussed at the outset. And since the real terms we separated out and the imaginary terms we separated out each run on for infinitely many terms, just like the series for cos(x) and sin(x), we can say that the real terms converge to cos(x) and, similarly, the imaginary terms converge to sin(x).

Therefore, we can re-write what we have as follows:
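$$ e^{ix} = \cos x + i \sin x $$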

And there we have it! Euler’s formula. An incredibly useful formula that, among other things, helps relate real numbers to imaginary numbers. It also has many useful applications. But with all that aside, this really is an amazing result. Not only have we found a relationship between e and the two fundamental trigonometric functions, but we’ve also found a relationship with the imaginary unit i. In short, we found a relationship with some of our most important mathematical constants.

But if that is not enough, there’s one more thing we can do. And it involves incorporating pi. In other words, we can also now raise e to the power of i times pi. The conclusion is magnificent.
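Setting $x = \pi$, with $\cos \pi = -1$ and $\sin \pi = 0$:

$$ e^{i\pi} = \cos \pi + i \sin \pi = -1, \qquad \text{that is,} \qquad e^{i\pi} + 1 = 0 $$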

It’s a breathtaking result. What we see here, among other things, are the most fundamental mathematical constants – each from completely different areas – all linked together! And while more rigorous proofs exist – in other words, while we can certainly prove the above result – understanding it and explaining it is a completely different thing. This is, really, one of the most astonishing mathematical results.

In the future, when I have some spare time, I’ll work through more formal and rigorous proofs. But for now, this is plenty to savor.

 

 


With the launch of my new mathematics blog, I thought I would start with something of a nostalgia post: a note on the quadratic formula.

The proof of the quadratic formula (by completing the square) was one of the first that I learned. When I originally worked through it and finally arrived at a derivation of the formula, it was for me one of those early mathematical moments that continued the growth of an already deep-seated interest. It’s very much similar in memory to various other instances of mathematical experience, such as when I look back to the time when I first taught myself calculus. Sentimentality is not the right word here; it’s more a remembrance of an early moment of mathematical passion and discovery.

It is by no means on the list of my favourite proofs – all of which I will write about in the future – but it’s fitting to have this derivation (at the bottom) as part of the early development of this blog.

Furthermore, I think that one of the wonderful things about mathematics (even very basic arithmetic) is how the derivation of something as simple and basic as the quadratic formula is actually quite beautiful. Some of the deeper connections we might make are also, philosophically, very inspiring. And even further, the anthropological dimension of its history and discovery is incredibly interesting.

Often it seems that as one advances through their early mathematical career, lessons and thought experiments on first principles, and proofs of whatever formula, are increasingly absent. So too is the 'why' of maths. In the case of formulae, first principles are often left implicit: here's a formula, and here's how and when to use it. The 'why' of mathematics seems to be left out, especially early on in one's career.

A lot of science is emerging that backs the idea that the best way to learn is not by attending lectures – though they are useful – but by exercising one's critical thinking skills. Exploring first principles, working through the logic of a particular derivation, and reflecting on proofs helps build a foundational sense of how the properties behave in practice, and strengthens the intuition for why things work.

History

Before I actually get into the derivation, it is interesting to note that the development of this basic formula has an interesting history. (I summarized these notes some time ago and cannot locate the original source, otherwise I would link to the historical record.) In short: math historians often note that, although the first attempts to find a more general formula to solve quadratic equations can be traced back to the geometry (and trigonometry) of Pythagoras and Euclid, the history of thinking on the problem dates as far back as approximately 2000 BC.

Some denote the thinking at this time as the original problem, wherein Egyptian, Babylonian and Chinese engineers encountered a question of some urgency: namely, how the sides of certain shapes must be scaled to give a required total area. In other words, there needed to be a way of working out the lengths of the sides of walls from the area they were meant to enclose.

Anthropologically, one has to remember that this problem emerged shortly after the first signs of civilization began to formally develop in Mesopotamia, and thus with it the Bronze Age. With this there was an increase in agricultural production and all the rest, taking off from the Neolithic Revolution many years before. Storage of excess materials, grain and resources was an ongoing problem in this early and important period of development.

But these early engineers were very intelligent for their time. They knew how to find the area of a square from the length of a side, and they knew how to utilize squared spaces. But the sides and areas of more complex shapes posed a significant problem. Then something important happened. In Egypt, around 1500 BC, the concept of completing the square was formulated to help solve very basic problems concerning area. It also appears later in Chinese records.

Then, around 700 AD, Bhaskara, a famous Indian mathematician with whom many may already be familiar, was the first to recognise that any positive number has two square roots. This was followed by another derivation of the quadratic formula by Muhammad ibn Musa Al-Khwarizmi, the famous Islamic mathematician. The historical account is that this particular derivation was brought to Europe some time later by the Jewish mathematician and astronomer Abraham bar Hiyya, and was then picked up in 1545 by Girolamo Cardano, a Renaissance scientist. Here Al-Khwarizmi's solution was integrated with Euclidean geometry, which helped pave the way for the modern formulation.

Indeed, it was François Viète in 16th-century France who would introduce what we would now consider more recognizable notation. Then, the big work: the famous philosopher and mathematician René Descartes penned La Géométrie, within which modern mathematics was born. From this, the quadratic formula as we know it today would emerge and be adopted.

Deriving the Formula

With some of the history noted, here’s my effort at a derivation of the formula (by completing the square).

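In outline (a sketch of the route rather than every step written out, and assuming $a \neq 0$):

$$ ax^2 + bx + c = 0 \;\;\Longrightarrow\;\; x^2 + \frac{b}{a}x = -\frac{c}{a} \;\;\Longrightarrow\;\; \left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2} \;\;\Longrightarrow\;\; x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} $$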


Binomial Expansion – A Neat Trick

There's a neat trick when it comes to binomial expansion. Perhaps 'trick' is the wrong word; it's more like an algorithmic approach to binomial expansion that doesn't require Pascal's triangle. It's also mostly applicable to general expansions of the form $(1 + x)^n$, for example.

I was reminded of it the other day and thought it would make for a nice blog post.

In short, you generate each coefficient from the previous term: multiply the previous coefficient by the index in that term, then divide by the position of the new term. If that doesn't make much sense, see below. In general, I find it quicker than the binomial theorem with its combinatorics and factorial notation; one can expand any binomial, even one of very high index, relatively quickly.

First, here’s binomial theorem (I don’t offer a proof in this post, though I could in the future upon request):
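$$ (a+b)^n = \sum_{k=0}^{n} \binom{n}{k} a^{n-k} b^{k}, \qquad \text{so that} \qquad (1+x)^n = 1 + nx + \frac{n(n-1)}{2!}x^2 + \frac{n(n-1)(n-2)}{3!}x^3 + \cdots $$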

 

In understanding binomial theorem, we can now look to the following method. It’s nothing revolutionary. But it is still very cool.

binomial expansion without Pascal's triangle
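Spelling the rule out a little (this is the version of it that I use, for $(1+x)^n$): the coefficient of $x^{k+1}$ is the coefficient of $x^k$ multiplied by $n - k$ and divided by $k + 1$, i.e.

$$ \binom{n}{k+1} = \binom{n}{k} \cdot \frac{n-k}{k+1} $$

For example, for $(1+x)^5$ the coefficients run $1,\; 1 \cdot \tfrac{5}{1} = 5,\; 5 \cdot \tfrac{4}{2} = 10,\; 10 \cdot \tfrac{3}{3} = 10,\; 10 \cdot \tfrac{2}{4} = 5,\; 5 \cdot \tfrac{1}{5} = 1$.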

The caveat with this method, it seems, is that it is useful only under the following condition: $|x| < \tfrac{1}{2}$.

In any case, it is interesting to think about it in relation to binomial theorem. The other day I was working on some problems for fun, as I like to do in my spare time. There were one or two book problems where binomial theorem proved more efficient, because the method above required more algebra (at least for my solution).

For instance: when $(1-2x)^p$ is expanded, the coefficient of $x^2$ is 40; given $p > 0$, find the value of the constant $p$. I found it quicker to solve this using the binomial theorem.
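For what it's worth, the binomial-theorem route for that example runs:

$$ (1-2x)^p = 1 + p(-2x) + \frac{p(p-1)}{2!}(-2x)^2 + \cdots $$

so the coefficient of $x^2$ is $\frac{p(p-1)}{2} \cdot 4 = 2p(p-1)$. Setting $2p(p-1) = 40$ gives $p^2 - p - 20 = 0$, i.e. $(p-5)(p+4) = 0$, and since $p > 0$ we have $p = 5$.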

 
