[latexpage]
There’s a wonderful video (see above) that has been circulating of Au nanoparticles merging under a transmission electron microscope (TEM). It’s not a new sight – TEMs, scanning tunneling microscopes (STMs), and other instruments have been producing incredible images for some time. One of the most famous – certainly in terms of popular culture – is the image produced by IBM around 1990, showing individual atoms arranged in such a way as to spell the company’s name. Scanning electron micrographs have also captured stunning images of various microbes and things like the human cochlea. But it is always nice to see such images emerge, from time to time, in popular media. And, really, it doesn’t matter how many times such images do emerge, the sight of whatever specimen is being studied never feels any less remarkable. Electron microscopy offers a glimpse into a previously hidden world, and whether we’re talking physics or microbiology, the details of nature that we can now capture are nothing short of exciting.

IBM atoms

Image: ‘IBM scientists discovered how to move and position individual atoms on a metal surface using a scanning tunneling microscope’ / www-03.ibm.com

In the case of the video above, I’ve seen discussion at a number of venues, including where it was originally posted, and there seems to be some confusion.

What we see is how, under the right conditions, the two particles, each made of gold atoms, can fuse or merge to form a larger cluster. So what enables this behaviour? In short, you are observing two gold nanoparticles as they move at high temperature on top of a surface of FeO (iron(II) oxide). It is the combination of the high temperature and the extra energy introduced by the electron beam of the TEM that triggers the event. Extra energy is introduced to the specimen, and the two particles become excited in such a way that they begin moving. This is key. Thanks to the extra energy, the atoms occupy a higher energy state and can rearrange their configuration. That is why, toward the end of the video, as the attractive forces between the particles draw them together, the particles rotate into the correct relative orientation of their lattices. This reconfiguration, if you will, allows the smaller particle to amalgamate with the larger one. In other words, once appropriately oriented, an even more rapid process begins in which the smaller particle merges with the larger particle, and a very short time later we see the single larger particle take on a rather pleasant symmetrical shape (a crystalline lattice) as it settles into a lower-energy (stable) state.

It’s a great presentation of some of the basic physics of chemistry. Notice, for example, the lines on each particle, which represent their lattice structure at the atomic level.

Some very general and introductory explanation can be found in this video by Sixty Symbols, for the curious. A similar phenomenon can also be observed in a different video, this one produced by FELMI ZFE (see the top of the page). In this case, you’re again observing Au nanoparticles. At 600°C, you see them diffuse on an amorphous carbon support and exhibit Ostwald ripening. It’s a remarkable sight. Take note, again, of some of the finer details, such as the texture on the particles – their atomic structure.

Physics Diary

Galileo’s Dialogues Concerning Two New Sciences

One afternoon, during a particularly difficult day, I found my way to Galileo’s Dialogues Concerning Two New Sciences. It was a PDF version of the 1914 edition translated by Crew and de Salvio that I had randomly stumbled upon. (It has been made freely available by archive.org).

I’ve learned that studying is one of the only things that brings me comfort and pleasure, and I found great joy and satisfaction in reading this edition of Two New Sciences. As I proceeded through the work without definite aim, so many moments emerged worthy of memory. One of my many favourite parts of the dialogues was Galileo’s account of attempting to measure the speed of light using flickering lanterns. But the dialogues are filled with many incredible moments. Take, for instance, some of the geometrical demonstrations, such as the theorem that “the volumes of right cylinders having equal curved surfaces are inversely proportional to their altitudes”. Or the theorem presented by Galileo on the area of a circle as “a mean proportional between any two regular and similar polygons of which one circumscribes it and the other is isoperimetric with it”. [As an aside, isoperimetric inequalities and ratios are very interesting. So, too, is the isoperimetric problem, which I’ve just begun digging into].
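As a quick check of the first of these in modern notation (my own sketch, not Galileo’s geometric argument): for a right cylinder of radius $r$ and altitude $h$, the curved surface is $S = 2\pi r h$ and the volume is $V = \pi r^2 h$. Holding $S$ fixed gives

\[
r = \frac{S}{2\pi h}, \qquad V = \pi r^{2} h = \pi h \left(\frac{S}{2\pi h}\right)^{2} = \frac{S^{2}}{4\pi h},
\]

so for equal curved surfaces the volume is indeed inversely proportional to the altitude.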

I am beginning to feel that bringing everything back to geometry is an important recourse we too often take for granted. The dialogues are fascinating in that they weave together and connect so many concepts and theories, like any great book – from geometry, ballistics, and acoustics to astronomy, the dialogues flow in a way that seems so rare today. Galileo’s presence, or voice, also emerges through the pages, and I find the work offers a rare opportunity to spend time with one of the great masters. Perhaps it is the clarity of the edition, but it is easy to follow Galileo from thought to thought, as though sitting beside him pondering some of the pressing physical questions of the 17th century. I like it, too, because of the momentary engagements with his compatriots, even the philosopher Simplicio, a fictitious straw man created to serve as the perfect mediator who keeps the discussion between Sagredo and Galileo (Salviati) unfolding. One of Simplicio’s great passages is as follows:

What a sea we are gradually slipping into without knowing it! With vacua and infinities and
indivisibles and instantaneous motions, shall we ever be able, even by means of a
thousand discussions, to reach dry land? – Simplicio

It is a marvellous moment in the context of the first day of the dialogues, in which Galileo ponders the role of infinite numbers and issues pertaining to the Aristotelian school of mechanics, among other things. It makes me think of some of the theoretical issues currently facing us in contemporary physics: as though, in some way, we’re continuously having to search for and reach dry land, and then, once we find it, the tide comes in a little more and pushes us a little further.

As a whole, it is obviously one of the great works ever produced by a human being, and certainly a work that anyone interested in physics, or studying to become a professional physicist, ought to read. Galileo is one of those masterful scientists and philosophers that we hear about as kids, along with Newton, Franklin, and others. But encouragement to actually read his and others’ work seems rare, and that is unfortunate.

Mathematics, Physics Diary, Stringy Things

The Road to Reality

I have been reading, working through and thinking a lot about Roger Penrose’s magnum opus, The Road to Reality: A Complete Guide to the Laws of the Universe.

It is a book that I cannot speak more highly about, possibly one of the best books that I have read so far in my lifetime.

My first engagement with the book was a couple of months ago, when I jumped immediately into specific sections of discussion spread throughout its 34 chapters. These sections were of immediate interest to me, as I already had an appreciation for Professor Penrose and some sense of certain parts of the book’s contents. For example, I was keen to learn more about and understand Penrose’s Twistor Theory. As Penrose acknowledges in The Road to Reality, it may not have completely worked out; but Twistor Theory is being used today in a number of interesting and exciting ways. One of my own points of intrigue here concerns the emergence of the amplituhedron, a fascinating geometrical object that incites in me an incredible level of excitement as a young theoretical particle physics student.

For this reason, one section of the book that I had to immediately search out concerns Penrose’s discussion on the geometry of twistors and the twistor description of massless fields, as this has very much been in my thoughts. Additionally, I was incredibly eager to read his descriptions of Grassmannian space (among other similar things).

Penrose’s discussion regarding wave function collapse was another section I had earmarked. On that note, it is interesting how he seems to fall on the side of objective collapse theory, as opposed to the more textbook Copenhagen interpretation. The more I study quantum mechanics in depth and the more I learn very broadly as a theoretical physics student, on the basis of the total evidence I have absorbed thus far, I sway toward the many-worlds interpretation. I also find the Copenhagen interpretation less than satisfactory. Where I will eventually end up in such debates is to be determined, but I was interested in reading Penrose’s take on the matter. (I am also very intrigued by the GRW interpretation, which seems tantalizing in that, in seeking to avoid the quantum measurement problem, it almost actively works toward a bridge between the classical and quantum worlds. Additionally, my early interests in wave-particle duality also led me to develop an interest in the pilot wave/de Broglie-Bohm interpretation).

In any case, Penrose seems to view the wave function as a physical wave, which runs contrary to the established view in many university courses (that I am aware of). I’m not entirely sure about the contents of Penrose’s arguments on this level, particularly his theory of dual fields, but it is something that I will certainly engage with more deeply a little later.

It is also interesting how Penrose seems to suggest that a successful theory of quantum gravity could very well be a deterministic yet non-computable theory – another section of discussion that I was eager to jump into immediately.

In the future, once I have considered the totality of The Road to Reality and thought more about the entire body of work on display, I would very much like to write a deeply engaged review and technical reading of the book as a whole (time permitting). After getting a taste of very specific engagements, I am already well on my way to digesting all of its pages, from page 1 to 1045.

But in the present moment, I simply want to write a note about the book in an introductory way. Some have described it as one of the most important books of the 20th century, and I would very much have to agree. It is a truly foundational work, the sort of book that we rarely see these days, given the standard of reduced, quick-to-consume literature that seems to move away from considerate, original and comprehensive thinking.

What I will say, too, is that regardless of whether one agrees or disagrees with certain interpretations presented by Penrose – for example, debates about his interpretation of the wave function – this is a book that every physics student should possess. It is an absolute must-read. I think it is also a book for the general reader with some appreciation for mathematics and physics. I say this while also emphasising that The Road to Reality is more than a book that engages fundamentally with a study of the physical universe and the foundations of our best mathematical theories to date. It is a credit to Penrose’s brilliance and the comprehensive nature of his thinking that the book is also magnificently philosophical, in the best sense of philosophy, as he connects fundamental insights offered by key mathematical and physical theories with the study of the fundamental nature of knowledge, reality, and existence.

Physics Diary

Electronegativity: Why does sodium chloride (NaCl) dissolve in water but not silicon dioxide?

[latexpage]
R.C. Smith

This was one of my favourite types of questions when first learning the physics of chemistry. There are several reasons why sodium chloride (NaCl) dissolves in water but silicon dioxide does not.

i) Sodium chloride

Let’s start by first thinking about sodium chloride (NaCl), more commonly known as salt, or table salt.

NaCl is an ionic compound. In fact, we can describe it in terms of its giant ionic structure. The structure is so big that there is no fixed number of ions in it: a single crystal of NaCl may consist of billions or even trillions of sodium and chloride ions packed together. For this reason, some describe it as an effectively endless, repeating lattice of ions, such that the number of ions simply depends on the size of the crystal.

Furthermore, the giant ionic structure of NaCl is quite different from more conventionally structured molecules, which contain an exact recipe of atoms. NaCl is considered a classic case of ionic bonding, in which atoms transfer valence electrons. This can be visualized using a Lewis diagram:

Sodium chloride is a strongly bonded ionic compound. To invoke physics, we can say that this is because there is a strong electrostatic force between the oppositely charged ions (cations and anions). This is generally true when metals (in this case, sodium) react with non-metals (in this case, chlorine).

We can explain these electrostatic forces, as well as the strength of sodium chloride’s ionic bonding, by citing Coulomb’s law. This law of physics describes the strength of the force between two static, electrically charged particles:

\[
F = k_e \frac{q_1 q_2}{r^2},
\]

where $k_e$ is Coulomb’s constant, $q_1$ and $q_2$ are the two charges, and $r$ is the distance between them.
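As a rough numerical illustration (my own sketch, not part of the original post; the separation is an approximate textbook figure for the Na–Cl distance in the crystal), plugging the ionic charges into Coulomb’s law gives a feel for the size of the attraction:

# Rough estimate of the Coulomb force between a Na+ and a Cl- ion
# at a typical separation in the NaCl lattice (~2.8 angstroms).
# Illustrative sketch only; all values are approximate.

k_e = 8.99e9        # Coulomb's constant, N m^2 / C^2
e = 1.602e-19       # elementary charge, C
r = 2.8e-10         # approximate Na-Cl separation in the crystal, m

q1, q2 = +e, -e     # charges of the Na+ and Cl- ions

F = k_e * q1 * q2 / r**2   # Coulomb's law: F = k_e q1 q2 / r^2

print(f"Coulomb force: {F:.2e} N (negative sign = attractive)")
# Prints roughly -2.9e-09 N, which is an enormous force on the atomic scale.

On the atomic scale this is a very large force, which is one way of seeing why the lattice holds together so strongly.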

Despite its strong ionic bonding, NaCl dissolves in H2O. The dissociation reaction looks like this:

NaCl(s) + H2O → Na+(aq) + Cl−(aq) + H2O

Notice that there is dissociation. The reason the sodium and chloride ions dissociate has to do with a number of factors:

Firstly, water is a polar molecule. In fact, it is very polar. This means that a water molecule has an asymmetrical arrangement of partial positive and partial negative charges across its polar bonds (below is a diagram that I have sketched for illustrative purposes).

As we know, water is made up of two hydrogen atoms and one oxygen atom. The bonds in H2O are covalent, which means that the electrons are shared. The shared electrons spend more time closer to the oxygen atom, giving it a partial negative charge, while the hydrogen atoms tend to carry a partial positive charge.

NaCl, on the other hand, is made up of positive sodium ions and negative chloride ions. Hence, the polar ends of the water molecule attract the oppositely charged parts of NaCl. More concisely, the positively charged (hydrogen) end of the water molecule attracts the negative chloride ions, and the negatively charged (oxygen) end attracts the positive sodium ions.

The reason salt dissolves in water is therefore that the positively charged sodium ions are attracted to the negative polar region of the water molecule, while the negatively charged chloride ions are attracted to the positive polar region. These attractive forces with the water molecules overwhelm the forces between the positive sodium ions and the negative chloride ions; dissociation occurs and the ionic compound NaCl goes into solution.
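To put some rough numbers on this “overwhelming” (a back-of-the-envelope sketch of my own, using approximate textbook values rather than anything from the discussion above): the lattice enthalpy of NaCl is roughly +787 kJ/mol, while the hydration enthalpies of Na+ and Cl− are roughly −406 and −378 kJ/mol. The two contributions nearly cancel, which is why dissolving salt is only slightly endothermic and is driven largely by entropy:

# Back-of-the-envelope energy balance for dissolving NaCl in water.
# All values are approximate textbook figures (kJ/mol); positive = energy in,
# negative = energy out. Illustrative sketch only.

lattice_enthalpy = +787.0   # energy needed to pull the Na+ / Cl- lattice apart
hydration_Na = -406.0       # energy released when water molecules surround Na+
hydration_Cl = -378.0       # energy released when water molecules surround Cl-

enthalpy_of_solution = lattice_enthalpy + hydration_Na + hydration_Cl
print(f"Approximate enthalpy of solution: {enthalpy_of_solution:+.0f} kJ/mol")
# Prints roughly +3 kJ/mol: the ion-water attractions almost exactly pay for
# breaking up the lattice, and entropy does the rest.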

ii) Silicon dioxide

But what about silicon dioxide (SiO2)?

Silicon dioxide has a giant covalent structure. This covalent structure, or macromolecule, is composed of oxygen and silicon atoms.

The compound has a ratio of two oxygen atoms for every silicon atom. More concisely, each silicon atom covalently bonds to four oxygen atoms, while each oxygen atom covalently bonds to two silicon atoms. In general, the covalent bonds form when silicon shares its four valence electrons, $ns^2np^2$, resulting in the formation of four covalent bonds (Clugston and Flemming, 2015).

Silicon dioxide, or silica, is very hard, owing to its diamond-like structure. This has to do with the strength of the covalent bonds, with an oxygen atom between each pair of silicon atoms. This strength depends largely on the electronegativity of the atoms, insofar as electronegativity describes how strongly each atom attracts the electrons shared in the covalent bonding between the silicon and oxygen atoms.
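As a small illustrative aside of my own (the Pauling electronegativity values are standard tabulated figures, and the cut-offs are only a rough rule of thumb), we can compare electronegativity differences for the two compounds in this post:

# Pauling electronegativity values (standard tabulated figures).
electronegativity = {"Na": 0.93, "Cl": 3.16, "Si": 1.90, "O": 3.44}

def bond_character(a, b):
    """Crude rule-of-thumb classification from the electronegativity difference."""
    diff = abs(electronegativity[a] - electronegativity[b])
    if diff < 0.4:
        kind = "essentially non-polar covalent"
    elif diff < 1.8:
        kind = "polar covalent"
    else:
        kind = "predominantly ionic"
    return diff, kind

for pair in [("Na", "Cl"), ("Si", "O")]:
    diff, kind = bond_character(*pair)
    print(f"{pair[0]}-{pair[1]}: difference = {diff:.2f} -> {kind}")
# Na-Cl: difference = 2.23 -> predominantly ionic
# Si-O:  difference = 1.54 -> polar covalent

The Na–Cl difference sits in the ionic range, while Si–O comes out as strongly polar covalent, which matches the two structures described in this post.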

Furthermore, SiO2 is not a molecule. It is a network covalent atomic solid. This giant lattice of covalently bonded atoms can be illustrated to look something like a 3D covalent network:

Because SiO2 is a network covalent atomic solid (under normal conditions), this giant covalent structure has very strong covalent bonds, spread throughout the structure.

The oxygen atoms are more electronegative than the silicon atoms: they exert a stronger pull on the shared electrons and so acquire a partial negative charge. The electrons are also tightly held, which means that SiO2 is not conductive (unless molten). SiO2 also has a high lattice energy.

All of this plays into why silicon dioxide is not soluble in water. In relation to the bonds in particular, SiO2, or silica sand, is insoluble because the attractive forces of the water molecules are not strong enough to break the covalent bonds between the silicon and oxygen atoms. In more precise terms, there is little attraction between the polar water molecules and the silicon or oxygen atoms due to the non-polarity of SiO2: despite the individual silicon-oxygen bonds being very polar, the geometry of the structure – four silicon-oxygen bonds arranged symmetrically around each silicon – means the bond dipoles cancel, resulting in overall non-polarity.

In conclusion: sodium chloride (NaCl) dissolves in water because the attractive forces with the polar water molecules overwhelm the forces between the positive sodium ions and the negative chloride ions, resulting in dissociation; silicon dioxide (SiO2) does not dissolve because it is a giant covalent structure in which the bond dipoles cancel, resulting in non-polarity.

References

Atkins, P., and De Paula, J. (2013). Elements of Physical Chemistry. Oxford University Press. Oxford, UK.

Chemguide: http://www.chemguide.co.uk/atoms/structures/giantcov.html

Clayden, J., Greeves, N., and Warren, S. (2012). Organic Chemistry. Oxford University Press. Oxford, UK.

Clugston, M., and Flemming, R. (2015). Advanced Chemistry. Oxford University Press. Oxford, UK.

Hyperphysics: http://hyperphysics.phy-astr.gsu.edu/hbase/molecule/NaCl.html#c1

Lister, T., and Renshaw, J. (2015). AQA Chemistry. Oxford University Press. Oxford, UK.

Weller, M., Overton, T., Rourke, J., and Armstrong, F. (2014). Inorganic Chemistry. Oxford University Press. Oxford, UK.

 

states of matter
Physics Diary

Law of conservation of energy and common states of matter

There are four common states of matter. Thinking about them raises some interesting points of insight regarding systems and the law of conservation of energy.

R.C. Smith

Consider a very simple – perhaps overly simple – example involving water, H2O, changing from solid to liquid and then to gas. Coinciding with these phase changes are three temperature ranges: from -40°C to 0°C, 0°C to 100°C, and finally 100°C to 140°C.

If one were to picture this in terms of the shape of a graph, we would see very clearly how the changes of state of water, from a solid when frozen to a gas when heated to boiling, correlate with increases in temperature. With an increase in the temperature of the water, the density and arrangement of the H2O molecules change.
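To make the shape of that graph concrete, here is a quick sketch of my own, using rounded textbook values for the specific and latent heats of water, adding up the heat absorbed by 1 kg of H2O taken from -40°C ice to 140°C steam:

# Approximate heat required to take 1 kg of H2O from -40 C (ice) to 140 C (steam).
# Specific and latent heats are rounded textbook values; illustrative sketch only.

m = 1.0                                            # mass in kg

c_ice, c_water, c_steam = 2100.0, 4186.0, 2000.0   # specific heats, J / (kg K)
L_fusion, L_vapour = 334e3, 2260e3                 # latent heats, J / kg

segments = [
    ("warm ice   (-40 C ->   0 C)", m * c_ice   * 40),
    ("melt ice   (  0 C)         ", m * L_fusion),
    ("warm water (  0 C -> 100 C)", m * c_water * 100),
    ("boil water (100 C)         ", m * L_vapour),
    ("warm steam (100 C -> 140 C)", m * c_steam * 40),
]

total = 0.0
for label, q in segments:
    total += q
    print(f"{label}: {q/1000:8.0f} kJ")
print(f"Total: {total/1000:.0f} kJ")
# The two segments with no temperature change (melting and boiling) are the
# flat parts of the heating curve, where heat goes into changing the state.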

In total, there are four common states of matter according to the kinetic theory of matter: solids, liquids, gases, and plasma. There is also a fifth state, which I will write about in the future, known as the Bose-Einstein condensate. But in the particular example of this article, we only consider three of the four phases (in order): solid, liquid and gas. The final phase, plasma, is not applicable here, although it is possible for water to enter a plasma state. In that case, the water (H2O) is split into hydrogen and oxygen, and the hydrogen and oxygen atoms experience ionisation (they turn into ions).

 

i) Phase one (-40°C to 0°C): Solid

In relation to the temperature range described, at the start of phase one, before much heat has been absorbed, the water sits at -40°C and is therefore a solid. This is because, when water is subjected to temperatures below 0°C, the freezing point, it freezes, turning into a solid. In this state, the particles are arranged in a regular pattern and positioned closely together.

states of matter

Additionally, in a solid state, the water molecules have a relatively small amount of energy, which means that they are firmly constrained and held together in fixed positions. Understanding that the particles of matter experience forces of attraction, these intermolecular forces are strong enough to hold the particles together in fixed positions. In other words, in this solid state the water molecules do not possess enough energy to break from their bonds. Thus, what we see is a solid (in this case in the form of ice) with a fixed volume.

But this notion of fixed position, as described above, can be deceiving. While the water molecules have a small amount of energy and are firmly constrained in this solid state, the kinetic theory of matter tells us that each particle is always in constant motion. In this case, the water particles are restricted to a state of vibrational motion.

As an aside, one of the deeper intuitions here is that, due to the law of conservation of energy, the total energy of a closed system is constant. One implication of this law is that energy is never created or destroyed; it is simply transferred from one form to another, and this is a very important notion in physics (as in chemistry and elsewhere). It helps explain, or at least offers a deeper intuition into, why the water particles in a solid state do not absolutely cease motion. They still contain energy – more precisely, kinetic energy – but due to the physics of water in a solid state, the only motion allowed is vibrational. This vibrational motion is not enough to disrupt the structure of the solid.

But as the temperature of the water increases from -40°C to 0°C, it approaches the melting point, at which the solid will experience significant change. With this change also follows a change in the arrangement of the particles.

 

ii) Phase two (0°C to 100°C): Liquid

In the second phase, the water changes from solid to liquid, which correlates with an increase in the amount of heat absorbed. Additionally, we see the temperature of the water increase from 0°C to 100°C (the approximate boiling point), as it moves from melting point to boiling point.

It is important to notice the correlation between the gradual increase in the temperature of the water and the gradual change of state. What we are witnessing is how, when the solid is heated, the particles gain energy. In gaining energy, the rate of vibrational motion also increases. With this increase in vibrational motion, the structure of the solid is gradually weakened; thus, one observes the solid slowly changing into a liquid. For most substances, melting is accompanied by a decrease in particle attraction, an increase in volume and a decrease in density as the particles move farther apart (water is a well-known exception here: ice is actually less dense than liquid water, which is why it floats). Hence what one regularly observes when ice melts – there is, for lack of a better word, a dispersion, as in the case of an ice cube left on a desk at room temperature: as the ice turns to liquid, the water spreads across the desk because the liquid can flow.

This process continues as the water is gradually heated. In this case, where the temperature of the water increases from 0°C to 100°C, as the temperature climbs toward the boiling point, the heat transfers more and more energy until the particles break free of the fixed positions the intermolecular forces once held them in. Finally, the water completely enters the liquid state, wherein the particles move further apart from one another with more energy than in the solid.

Although the distance between the particles increases, this is relative phrasing: they still remain close. The difference is that they are now less restricted. Instead of touching and being fixed in position, the particles can pull apart and move freely around each other.

Although the strength of the intermolecular forces decreases, which correlates with the molecules no longer being arranged in a regular pattern, forces of attraction remain and the molecules are now arranged randomly. Further, the intermolecular forces are such that the molecules remain close together, but they are no longer fixed and are less tightly packed.

states of matter

Notice, in reviewing the above image, the increase of space between the particles (now arranged randomly) in comparison with when the water was a solid. For most substances, this increase in the space between the particles correlates with a decrease in density from solid to liquid (water, again, being the famous exception). To put it another way, the liquid is more diffuse.

 

iii) Phase three (100°C to 140°C): Gas

In the third phase, the water, now a liquid, reaches its boiling point. Boiling begins as the liquid starts to turn into a gas. At this point in the overall process, in which the temperature of the water has increased from -40°C through 100°C toward 140°C, we observe a significant increase in the amount of heat absorbed.

Interestingly, it is worth noting that the water remains at the boiling point while it absorbs the latent heat of vaporisation; only once the liquid has changed into gas does the temperature rise again.

In changing from liquid to gas, the water loses density. In other words, gases are much less dense than liquids, and so when the water changes from liquid to gas the particles spread out even further (than when the water changed from solid to liquid).

When the liquid water is heated, the increase in temperature toward the boiling point correlates with a continued increase in energy (again, a reference to the law of conservation of energy: the heat is transferred into the system, not created). Hence, the water particles now vibrate and move more vigorously. The liquid expands insofar as the particles separate further from each other. They gain so much energy that they can break free from their attractive forces, and diffusion begins to occur more rapidly.

At this point, the particles can move much more quickly, as the forces of attraction between them become negligible. That is to say, the gas particles finally have sufficient energy to break free from the intermolecular forces that once held them together. With no overwhelming particle attraction, the gas can spread out and fill its container.

states of matter

As illustrated in the image above, notice how the particles are more widely spaced than in a liquid or a solid. The particle arrangement also remains random, and because the gas particles can now move freely there is no order in the system. One could also say that, contrary to a solid, a gas has no fixed shape.

Although in this short article a very simple example was chosen in relation to H2O, it nevertheless offers incredible insight into common states of matter and how they relate to the law of conservation of energy. One might not immediately think of this incredible microworld that exists when simply looking at an ice cube melting on a table, or water boiling in a pot, but the nature of reality is incredibly rich in detail once we begin to look more closely.

Philosophy and General Reading, Physics Diary

Brownian Motion

R.C. Smith

In the past I have regularly written about and reflected on numerous examples with regards to the practice of science. This is attached not only to my core interests as a physics student aspiring to become a good physicist and a good scientist, which inspire me to think deeply about science; I also like to study the history of science as a medium to reflect more broadly – and perhaps philosophically – on the development of scientific knowledge, the history of science in relation to this development, and the fundamentals of its practice.

It often strikes me that, whether in science class or within the general domain of culture, there isn’t enough popular or widespread emphasis on the why of modern science. The same, I think, can be said of mathematics in particular. At the start of my mathematical career I was interested in the why of mathematics, but this isn’t usually the focus of our school lessons. Oftentimes, it is only after we study mathematics to a high level that we turn to thinking about the why of our mathematical concepts and systems. I think the same can generally be said of science.

But it is the why of science that reveals some of the deepest sources of scientific passion and inspiration. To neglect the why is to not fully embrace the depth of meaning that modern science offers human beings. It is the why of a scientific theory, or of basic scientific knowledge, that can enliven what today we might merely take for granted as general principles.

More than that, I often find joy in following the logic behind the development of a concept or the evolution of a theory on the basis of first principles – what led to the invention of the first microscope or to the discovery of penicillin? There is a lot of rich and interesting content here.

A nice example that I have pulled from my notebook concerns Brownian motion. The history behind Brownian motion is quite interesting.

In short and overly simple terms, many will already know that Brownian motion helped confirm that matter is made up of lots of tiny particles. In other words, it helped confirm the existence of atoms – the atom being the smallest particle of a chemical element that can exist. It is named after the botanist Robert Brown, who in 1827 reached a most curious conclusion:

Translational motion.gif
By Greg L at the English language Wikipedia, CC BY-SA 3.0, Link

Brown was studying pollen grains at the time. Pollen is of course a very fine powder that many will already be familiar with: if you rub your finger against the petal of a certain flower, you will be able to see the dust – the pollen particles – on your skin. One of Brown’s experiments entailed the study of pollen grains suspended in water. Placing the grains under his microscope, he noticed that the pollen particles were moving almost at random. One might describe this movement as “jittery” or “zig-zag”. When Brown perceived the movement, he concluded that the grains were somehow “alive”.

However, what was really happening was that the grains were colliding with water molecules. And these water molecules were too small to see under Brown’s microscope, which led Brown to think that the pollen grains were “alive”.

It was only later, when the effects of the collisions could be seen – with better microscopes – that Brown’s observations could be deepened with scientific theory and then verified empirically, contributing to our understanding of particles.

From Ancient Rome to Einstein and Perrin

What is interesting, and why I like the example of Brownian motion (there are many great examples), is that the history of the idea goes all the way back to Ancient Rome.

Lucretius1.png
By Unknown, http://commons.wikimedia.org/wiki/File:Lucretius1.jpg, CC0, Link

Titus Lucretius Carus was a Roman poet and philosopher. In what is described on Wikipedia and elsewhere as a “scientific poem”, Lucretius, with tremendous phenomenological attention, offered a detailed account of the Brownian motion of dust particles, which he presented as evidence for the existence of atoms. It comes from verses 113-140 of Book II of “On the Nature of Things” (c. 60 BC), and I offer the quote as cited on Wikipedia:

“Observe what happens when sunbeams are admitted into a building and shed light on its shadowy places. You will see a multitude of tiny particles mingling in a multitude of ways… their dancing is an actual indication of underlying movements of matter that are hidden from our sight… It originates with the atoms which move of themselves [i.e., spontaneously]. Then those small compound bodies that are least removed from the impetus of the atoms are set in motion by the impact of their invisible blows and in turn cannon against slightly larger bodies. So the movement mounts up from the atoms and gradually emerges to the level of our senses, so that those bodies are in motion that we see in sunbeams, moved by blows that remain invisible.”

It is, in simple terms, a perfect representation of an experiential observation: a common phenomenon in which concentrated beams of sunlight break through a glass window, and what we see, caught in these beams, are lots of tiny dust particles floating in the air. I believe this to be one of my own ‘earliest scientific experiences’ – observing dust particles as they passed through beams of sunlight, curious about their frantic and irregular patterns and wondering to myself, “what is this microworld?”

It is immediate, experiential observation and knowledge – the object of the particle of dust and its behaviour in a gas. The dust particles move randomly. Why? Well, bracketing air currents, which are one cause, the jittery motion of the dust particles is also caused by the resultant force of the air molecules – which are too small to see – striking the dust particles from all sides. In other words, Brownian dynamics.

But like with most ancient scientific observation and knowledge – largely limited to experiential epistemologies – we can move beyond Lucretius’ account. And in this history we see another example of the power of modern science.

Brown paved the way. But then, in 1905, Brownian motion offered an important clue to Albert Einstein. It was Einstein who gave an explanation of Brownian motion in terms of a “random force”. Employing Newtonian mechanics (i.e., F=ma), the concept of a random force helped explain the random, zig-zag, jittery motion of tiny particles, such as is observed when pollen particles are suspended in water. Another very basic but common and easy-to-do experiment involves capturing smoke in a transparent box, illuminating the smoke with a light, and then observing the smoke particles through a microscope.

In any case, it was Einstein who, in one of his most prolific years, explained why, experimentally, there must be a random force (the collisions between the surrounding molecules and the particle, coming from all sides, have a fluctuating resultant force, causing in this case the pollen particles to move in a jittery way).
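To see how many tiny random kicks add up to a jittery path, here is a minimal random-walk sketch of my own (an illustration of the idea, not Einstein’s actual analysis): at each small time step the particle receives a random displacement, standing in for the fluctuating resultant force of molecular bombardment.

# Minimal 2D random-walk sketch of Brownian motion: at each step the particle
# is displaced by a small random amount, standing in for the net effect of
# many molecular collisions. Illustrative only.

import random

def brownian_path(n_steps=1000, step_size=1.0, seed=0):
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        x += rng.gauss(0.0, step_size)   # random kick in x
        y += rng.gauss(0.0, step_size)   # random kick in y
        path.append((x, y))
    return path

path = brownian_path()
x_end, y_end = path[-1]
print(f"displacement after {len(path) - 1} steps: ({x_end:.1f}, {y_end:.1f})")
# The mean squared displacement grows linearly with the number of steps,
# which is the statistical signature Einstein's analysis made precise.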

Einstein also worked out that we can measure the diffusion constant. Diffusion can take on slightly different meanings in different disciplines. At its most basic, in physics, diffusion refers to the process that results from the random motion of molecules. It describes, as a result of the random kinetic motion of molecules or atoms, the net movement from a region of high concentration to a region of low concentration. There is also a frictional force on the suspended particle. In measuring the diffusion constant and the frictional force, Einstein showed that we can find the kinetic energy of the particle – the amount of agitation of the particle – which can then be related to the absolute temperature of the fluid. Einstein’s theory thus resulted in an important contribution combining Newtonian mechanics and thermodynamics.
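For a feel of the numbers (again a rough sketch of my own, using the Stokes–Einstein form of Einstein’s result, $D = k_B T / 6\pi\eta r$, with approximate values for a small sphere in water at room temperature):

# Rough Stokes-Einstein estimate of the diffusion constant for a ~1 micrometre
# particle in water at room temperature, and its typical wander in 1 second.
# Approximate values throughout; illustrative sketch only.

import math

k_B = 1.381e-23     # Boltzmann constant, J/K
T = 293.0           # room temperature, K
eta = 1.0e-3        # viscosity of water, Pa s (approximate)
r = 0.5e-6          # particle radius, m (~1 micrometre diameter)

D = k_B * T / (6 * math.pi * eta * r)   # Stokes-Einstein relation
rms_1d = math.sqrt(2 * D * 1.0)         # RMS 1D displacement after 1 s

print(f"D ~ {D:.1e} m^2/s, RMS displacement after 1 s ~ {rms_1d * 1e6:.2f} micrometres")
# Roughly D ~ 4e-13 m^2/s and about a micrometre of wander per second:
# small, but visible under a microscope, as Perrin went on to confirm.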

[Note: You could say that the direction of the force of atomic bombardment is constantly changing, and at different times the particle is hit more on one side than another, leading to the seemingly random nature of the motion.]

Later, in 1908, Einstein’s explanation of Brownian motion was verified experimentally by Jean Perrin, for which Perrin won the Nobel Prize in Physics in 1926. Perrin’s confirmation of Einstein’s calculations provided significant empirical groundwork.

One could write pages on the finer details and expand on the historical chronicle, as well as deepen the explanation of the concepts, but even at this level of the history a remarkable picture of scientific pursuit and knowledge emerges. At the end of the sequence of discoveries and refutations, deepened theories and experimental verification, we arrive at an even more sharpened and expanded knowledge: convincing scientific evidence that atoms and molecules exist.

Physics Diary

A Moment with Newton

R.C. Smith

Over the summer break I set myself the task of reading through a fantastic recently published edition of the Principia. It has so far been immensely enjoyable. And, in the future, I do not doubt that I will dedicate an entire series of blog posts or essays to my working through Newton’s Principia.

principia - new translation

In the meantime, with each new page I’ve been reminded of my personal appreciation for Newton. Growing up, Sir Isaac Newton was one of my idols. I remember my first introduction to the name Newton when I was no more than six or seven years old. It was in science class and the initial introduction was basic. It was left simply that Isaac Newton was an important historical figure who contributed much to science. In that vein, the presentation was no different than when I was first introduced to Benjamin Franklin, Thomas Jefferson, or Galileo Galilei. Looking back, I wish there had been more emphasis in the curriculum on exploring these figures and the openness of scientific inquiry shared among them (and other notable historical thinkers). There are very basic lessons, I think, that can be learned about the force or principles of the modern scientific endeavor by studying some of its most significant figures. One learns that it is not just the individual himself, but an entire history of human thought and enquiry leading up to that eureka moment. But, importantly, at a young age there is much inspiration to be gleaned from the story of Newton, Galileo, Maxwell, Faraday (one of my favourite biographies), Friedmann, Wigner, Einstein, and on and on. They all evidence an energy and passion for discovery, and arguably also an incredible creativity of thought. Important values in early education, I would be inclined to argue.

I think it is plausible that most scientists – whether you’re a young student like myself or a seasoned physicist or biologist or whatever – experience one or two “wow!” moments early on that help foster an interest in the pursuit of science. Mine directly relates to Newton.

Aside from the fact that Newton revolutionized physics by unifying all of mechanics in three laws of motion, my eureka moment (if I can call it that) relates to his discovery – or invention – of calculus. The formulation of the law of universal gravitation, the eventual development of Newtonian field theory – these are all amazing feats in the history of human thought. But it was Newton’s discovery of calculus – needed to solve the equations his mechanics produced – that really cemented my love for both mathematics and physics. And I suppose, in a very direct way, this existential moment of all-embracing and inspiring “wow!” that struck me when I was younger also owes a debt to Gottfried Leibniz.

Why? Well, admittedly, my introduction to calculus was first through Newton; but it was the eventual realization, as I sought to piece together the history of the development of calculus and its first principles, as well as that of classical physics, where two profound facts hit me in a way that I’ll never forget. The development of calculus, along with the concept of vectors, was in a deeply important way absolutely vital to the mechanics of the time; and when one studies the history of mathematical ideas in and around this period, there is a very coherent, and certainly observable, logical consistency in the build-up to the final result. Indeed, and to preface the following with one more remark: it was intentional that Newton’s inventing of calculus was described earlier as a “discovery”. In short, given the demands of the new mathematics that had emerged, it is understood that without calculus and vectors, the very idea of instantaneous velocity, which we rather take for granted today, would have been very difficult to formulate. We had arrived at a point, in the history of mathematics and more broadly in the history of human thought, where what was required was a mathematics of change – a mathematics that accounts for change. This was demanded by Newtonian mechanics. But the “wow!” moment, if you will, is in how not only did Newton discover calculus and its relation to the scientific study of nature; independently of Newton, Gottfried Leibniz also developed calculus! Together, they are both responsible for one of those special and key historical moments of realization, where, in the study of the history of science and of ideas, one is confronted with the special relation between the systematic human pursuit of maths and science and the study of Nature.

That Newton, given the demands of his mechanics, and Leibniz, at a similar point in human history, both discovered calculus suggests that calculus was very much the next logical step.

Now, depending on where one sits with regards to the debates around nominalism and platonism, and the epistemological arguments about knowledge of abstract objects, this realization may be met with great intrigue or a simple shrug. In the case of the latter, if you don’t think numbers exist – that they do not form an integral or constituent part of objective reality – you might simply state that calculus needed to be invented according to the logic of the system within which modern thinking operates, and thus that all mathematics is human invention. But, if you’re like me, and you see mathematics as a constituent part of reality, reaching all the way back to the Pythagorean argument, then the development of calculus is nothing short of awe-inspiring, as it was not so much a needed invention as the next step in the study of the nature of reality. Indeed, it becomes all the more striking if one considers Archimedes’ method of exhaustion for calculating the area under a parabola (anticipating modern integration) as far back as 287–212 BC.

There are many similar events throughout the history of physics and mathematics, where the same solution, conclusion or development was arrived at by independent parties. Likewise, in physics, one could also add the number of times theory has precisely predicted future measurements (beyond the experimental capabilities of the time), or the times when two independent theories arrive at the same objective outcome. Physics is incredibly rich in inspiration in this regard. Whenever I come across new examples, it always deepens for me the idea of the objective.

As Albert Einstein once remarked, ‘the idea that truth is independent to human beings is something I cannot prove but something I think is basic’. ‘The problem’, stated Einstein, ‘is the logic of continuity’. Max Tegmark, the celebrated cosmologist, would even go so far as to say that mathematics is reality. And this seems to be the emerging consensus, but I’ll save the arguments toward those ends for another time.

In closing, it was in 1665 that Newton began to develop his ideas on calculus. ‘Fluxions’, as he called them, related very directly to Newton’s early study of the laws of motion. Though I know the textbook version of Newton’s Laws of Motion and Newtonian mechanics like the back of my hand, I am only now starting to read through the Principia. My drive toward first principles has me eager to read Opticks and Methodis Fluxionum (the calculus), the latter published posthumously.

As arguably the world’s greatest physicist, to whom the likes of more recent heroes such as Maxwell or Einstein owe a great debt, Newton was known as a master scientist. What’s most fascinating about the man, since we’re on the subject, is that he also had an extremely energized private interest in religious and mystical pursuits. Apparently, he produced a thousand-page manuscript on his own theological studies, as well as a significant collection of thoughts on things like alchemy. But in public, from what I understand, he would not partake in such wild speculation, which leaves me to wonder whether there was ever a tension between his standards of scientific practice and his personal orthodoxy. It reminds me a little of perhaps one of the more famous stories about Einstein.

I would say, personally, that Einstein is the most creative, critically inquisitive and challenging scientist to have existed. His entire career was based on challenging the status quo. But even Einstein suffered from a personal moment of fallibility with respect to the human vulnerability to orthodoxy. As Max Tegmark (2014, p. 43) recounts:

Einstein himself realized that a static universe uniformly filled with matter [Newton’s laws] didn’t obey his new gravity equations. So what did he do? Surely, he’d learned the key lesson from Newton to boldly extrapolate, figuring out what sort of universe did obey his equations, and then asking whether there were observations that could test whether we inhabit such a universe. I find it ironic that even Einstein, one of the most creative scientists ever, whose trademark was questioning unquestioned assumptions and authorities, failed to question the most important authority of all: himself, and his prejudice that we live in an eternal unchanging universe.

To his credit, when further evidence emerged, Einstein admitted that adding this extra term in his equation to account for a static, eternal universe was his greatest blunder. It shows his remarkable character as an iconic scientist to admit to such a moment of prejudice, and critically analyze challenges toward that prejudice in an open and rational way. I like to think that, with all we now know, in our current moment of history, Newton’s response to challenges of his personal beliefs would have been the same.
