Note: This is the first article in a three-part series on particle and fundamental physics. Here are the second and third articles.
I’ll be writing quite a bit about particle physics in this series of articles, so let’s get some basic ideas settled. Particle physics studies all “particles” that constitute nature – we’ll later see what exactly we mean when we say particles – for now we consider them to be tiny, really tiny building blocks with specific properties that make up our universe. Particles can be either “elementary” or “non-elementary”. Elementary particles are those that can’t be further broken down into smaller chunks. Non-elementary particles can be broken down, even if only in theory. (Readers who are familiar with chemistry might note that “elements” and “compounds” follow similar logic – elements can’t be further broken down, at least chemically. Non-chemists, fear not. We’ll make sense of stuff as we progress.)
Particle physics is also called high energy physics because the particles studied are often taken to very high energies to reveal their finer details. It is a recurring theme that the more energy you apply to a system, the clearer and finer the details you can make out. That is why, in this field, we see ridiculously large amounts of energy packed into individual particles, just to shed more light on them.
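To see why more energy means sharper detail, recall that quantum mechanics assigns a wavelength to every particle, and that wavelength shrinks as the energy grows – a probe can only resolve structures roughly as small as its own wavelength. Here is a rough back-of-the-envelope sketch (the relation λ ≈ hc/E used below holds for very energetic particles; the numbers are only illustrative):

```python
# Rough sketch: the "resolving power" of a probe particle improves as its energy grows.
# Uses the approximate relation: wavelength ~ (h * c) / E, valid for very energetic particles.

HC_MEV_FM = 1239.8  # h*c in MeV·femtometres (1 fm = 1e-15 m, roughly the size of a proton)

def probe_wavelength_fm(energy_mev: float) -> float:
    """Approximate wavelength (in femtometres) of a probe particle with the given energy (MeV)."""
    return HC_MEV_FM / energy_mev

for energy_mev in (1.0, 1_000.0, 1_000_000.0):  # 1 MeV, 1 GeV, 1 TeV
    print(f"{energy_mev:>9.0f} MeV  ->  ~{probe_wavelength_fm(energy_mev):.4g} fm")

# Roughly:
#         1 MeV  ->  ~1240 fm     (far larger than a proton; can't see inside it)
#      1000 MeV  ->  ~1.24 fm     (about the size of a proton)
#   1000000 MeV  ->  ~0.00124 fm  (probes structure deep inside protons and neutrons)
```

The exact numbers matter less than the trend: to peer deeper into matter, you need ever more energetic probes.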
This article delves into the history of particle physics, and why we consider it an important field of study. So let's begin, without further ado, from where it all began: atoms.
Atoms were an ancient philosophical consideration. The word 'atom' literally means 'indivisible' in ancient Greek. Several ancient cultures recognized, in some form or another, that all we see around us must consist of small, identical "chunks". These "chunks" were vaguely defined and based on philosophical grounds rather than any scientific process.
The identical nature was important because it implied some form of uniformity at the teensy scale – if you compared the world to a building, the bricks it consisted of were small and identical. However, ancient cultures either didn't develop these theories much – they had no way to test the ideas, or simply weren't bothered to – or differed widely in their views. This meant that ancient atomism, as it was called, was not really a science in the modern sense of the term.
Some of the more interesting claims of this rudimentary atomic theory sound ridiculous now – for example, bitter taste was supposedly due to "sharp" atoms and sweetness due to "round" atoms. In any case, there were just as many philosophies in the past that didn't involve atoms at all.
The philosophers had no reason to prefer atomic philosophies over non-atomic ones, because no one had evidence either way. All said and done, it was a matter of preference and belief rather than a theory acceptable by modern scientific standards.
Aristotle was a much-respected Western philosopher. His influence was so great that most medieval and early modern philosophers and scientists were strong believers in his ideas, even though many of those ideas were wrong. New ideas contradicting him took a lot of effort to take root.
For example, Galileo had to argue hard against the ideas of Aristotle before his own suggestions were taken seriously. Closer to the topic at hand, Aristotle was a supporter of the non-atomic side of the matter. He believed that our universe was continuous, like a soup, rather than chunky, like rice. Hence, until at least the eighteenth century, not much progress was made on the atomic front, as scientists kept trying to reconcile their theories with Aristotle's non-atomic view. Nevertheless, many partial atomic theories, more scientific than the ancient ones, were developed.
A lot of this work, by Descartes, Bacon and Gassendi, amongst others, was qualitative rather than quantitative, unlike most modern science. The first significant steps on the quantitative side were taken by Johann Chrysostom Magnenus, who devised an interesting experiment: he burned incense in a church and checked how long the smell took to spread, in order to estimate the approximate number of atoms in a little bit of incense. The number he arrived at, about 10¹⁸, was reasonably close to modern calculations. Further progress was made by Roger Boscovich, who developed mechanical ideas of atoms in the context of collisions.
The most prominent and recognizable early theory of atoms was laid down by John Dalton, a chemist. He noted how mass stays the same throughout a chemical reaction. He also observed many chemical reactions and noted how substances always combined in fixed ratios to form compounds. To explain all this, he proposed that all substances consist of atoms, and that differences between their atoms make the substances different. This is similar to saying the Taj Mahal is different from the Red Fort because one is made of marble while the other is made of red sandstone.
This was not all. Dalton also said that chemical reactions involve redistributions of the constituent atoms, and that the atoms of one substance are all identical. While this sounds very qualitative, it did explain two very important chemical laws – the law of conservation of mass (mass is neither created nor destroyed in a reaction) and the law of definite proportions (substances combine in fixed ratios). Again, there were some deficiencies in this theory. It didn't explain why the atoms of one substance differ from those of another, nor whether the atom had an internal structure, something that was a matter of scientific curiosity. However, it was a landmark moment, as it led to the slow acceptance of atomic ideas into modern scientific thought. These ideas were taken further by the work of several chemists and physicists, such as Boltzmann and Avogadro.
“Protons give an atom its identity, electrons its personality.” ― Bill Bryson

A range of experiments in the late 19th and early 20th centuries made it obvious that the atom was not as indivisible as its name suggested. The same experiments also made the internal structure of atoms clearer.
The earliest was the discovery of so-called "cathode rays". These negatively charged rays were generated by applying high electric voltages across gases in sealed tubes. Analysis suggested that the rays must consist of particles, later termed electrons – the first subatomic particle, discovered in 1897 by J.J. Thomson. A wide array of further experiments divulged a lot of information about electrons, such as their charge, mass, and so on.
Similar experiments revealed a kind of positively charged ray, later identified (in 1914) by Ernest Rutherford as consisting of protons. The very same Rutherford also attained fame for his model of the atom (though it was later refined by Bohr and others). With the discovery of the electrically neutral neutron in 1932 by Chadwick, it seemed that most of the atom was understood.
To summarize: most of the mass of the atom was in the centre, called the nucleus, which consisted of neutral neutrons and positively charged protons. The electrons, equal in number to the protons, were distributed around the nucleus and formed the outer edge of the atom. In a neutral atom, the negative charge of the electrons and the positive charge of the protons were equal, so the net charge was zero. The details of this distribution became part of the study of quantum chemistry; understanding how the electrons were distributed and how they interacted helped explain all chemical reactions and chemical stability.
Looks rosy, doesn't it? All we have are three fundamental, supposedly elementary particles – the electron, the proton, and the neutron – and their interactions explain pretty much all of modern physics and chemistry.
However, the real picture wasn’t quite so simple. Everything is not as it seems.
Around the time quantum mechanics was being developed (the 1920s), scientists began to realize that reality is not as clear-cut as we see it. One of the many pieces of evidence convincing them of this was wave-particle duality. Avoiding a lot of complicated terminology, it means that matter can behave as a wave or as a particle depending on the situation. For example, light, usually considered a wave, can also act like a particle at times. Similarly, the electron, usually considered a particle, can behave like a wave in some circumstances.
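To make this a little more concrete, here is a small, purely illustrative calculation of the de Broglie wavelength λ = h/p that quantum mechanics associates with a slow electron. At the energies electrons typically have inside atoms, this wavelength comes out to roughly the size of an atom itself, which is exactly why the electron's wave-like side cannot be ignored there:

```python
import math

# Illustrative sketch: de Broglie wavelength of a slow (non-relativistic) electron,
#   lambda = h / p,  with momentum p = sqrt(2 * m * E).

H = 6.626e-34           # Planck's constant, J·s
M_ELECTRON = 9.109e-31  # electron mass, kg
EV = 1.602e-19          # one electronvolt, in joules

def electron_wavelength_m(kinetic_energy_ev: float) -> float:
    """De Broglie wavelength (in metres) of an electron with the given kinetic energy (in eV)."""
    momentum = math.sqrt(2 * M_ELECTRON * kinetic_energy_ev * EV)
    return H / momentum

print(electron_wavelength_m(10.0))  # ~3.9e-10 m, i.e. about 0.4 nanometres -- atomic scale
```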
While this had an impact on many fields, the most obvious consequence for particle physicists was that there were more new particles. In addition to the earlier three – the electron, proton, and neutron – the photon, the particle of light, joined the list. The work of Hertz, Planck and Einstein in the early 20th century helped establish the particle nature of light.
Further complications arose because light was basically an electromagnetic phenomenon. The basic idea was that the electromagnetic forces seen between charged bodies were connected to light. With light now consisting of particles, an implication was that this force was somehow "carried" by photons. This meant that other fundamental forces, such as gravity, should probably have their own carriers too.
One such very interesting fundamental force was the creatively named "strong force". First, some background. The nucleus, as we have seen, is crowded with positively charged protons. In electromagnetism, like charges repel and opposite charges attract. Given the tiny size of the nucleus, the protons should be repelling each other with enormous force. Yet the nucleus stays together. How? The "strong force" was the solution scientists suggested. Now, just as the photon carries the electromagnetic force, there had to be a carrier particle for the strong force. Yukawa analyzed this theoretically in the 1930s, well before the particle's discovery, and it was named the meson. Studies in the 1940s later confirmed the existence of the Yukawa meson.
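Yukawa's reasoning can be reproduced in a single line of arithmetic. A force carried by a massive particle has a limited range, roughly R ≈ ħ/(mc); turn that around, and a force confined to a nucleus a couple of femtometres across must be carried by a particle a few hundred times heavier than the electron. The sketch below is just this back-of-the-envelope estimate, with the assumed range of about 1.4 fm as an illustrative input rather than a precisely measured value:

```python
# Back-of-the-envelope version of Yukawa's argument:
# a force of range R is carried by a particle of mass roughly  m*c^2 ~ (hbar * c) / R.

HBAR_C_MEV_FM = 197.3  # hbar * c in MeV·femtometres
RANGE_FM = 1.4         # assumed range of the force between nucleons (illustrative value)

carrier_mass_mev = HBAR_C_MEV_FM / RANGE_FM
print(f"Estimated carrier mass: ~{carrier_mass_mev:.0f} MeV/c^2")
# ~141 MeV/c^2 -- close to the measured mass of the pion (~140 MeV/c^2),
# and a few hundred times the electron's 0.511 MeV/c^2, just as Yukawa expected.
```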
Another force was the "weak force". It is involved in the decay of radioactive materials and, just like the other forces we've seen so far, must have carrier particles too. Again, these were considered theoretically well before their actual discovery in 1983. In fact, there were three such particles, not one: the W+, W– and Z particles, collectively called the intermediate vector bosons.
Neutrinos, another category of elusive particles, were discovered in the study of the same process of radioactive decay. They were first suggested by Pauli in the early 1930s as a purely theoretical idea to explain some results of radioactive decay. The idea was so compelling, however, that scientists took the search for their existence seriously. Even so, it was not until the 1950s that actual evidence was obtained, in a series of experiments by Cowan and Reines. Once found, neutrinos joined electrons and muons in a category of particles called leptons – particles with relatively little mass.
The same time period (1930–50) saw the discovery of antiparticles – particles corresponding to known particles but with certain characteristics, such as charge, inverted. A sort of evil twin of the "normal" particles, if you could put it that way. The first antiparticle found was the positron, the antiparticle of the electron, discovered in 1932. Several others (the antiproton and antineutron) followed suit, and even antimatter, formed entirely of antiparticles, became a possibility. Antiparticles and particles annihilate each other in a destructive process, but in doing so they also release enormous amounts of energy, so they pose huge risks and offer huge rewards in equal measure.
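"Enormous" is easy to quantify here with Einstein's E = mc²: in annihilation, the entire mass of both the particle and its antiparticle is converted into energy. As a rough, purely illustrative calculation, one gram of antimatter meeting one gram of ordinary matter would release on the order of 10¹⁴ joules – vastly more than burning the same mass of any chemical fuel:

```python
# E = m * c^2 applied to matter-antimatter annihilation (illustrative numbers only).

C = 2.998e8  # speed of light, m/s

def annihilation_energy_joules(antimatter_kg: float) -> float:
    """Energy released when `antimatter_kg` of antimatter annihilates with an equal mass of matter."""
    total_mass_converted = 2 * antimatter_kg  # both the matter and the antimatter disappear
    return total_mass_converted * C**2

print(annihilation_energy_joules(0.001))  # ~1.8e14 J for a single gram of antimatter
```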
The 1950s then brought an explosion of heavier particles, and antiparticles too: kaons, pions, lambda particles, eta particles and many more. Most of these fell into the categories of baryons (heavy particles) and mesons (particles in the middle range of mass). Their discovery owed much to newly built particle accelerators, which took particles to very high velocities before making them collide.
The resulting collisions broke the particles apart and gave rise to many new ones, unexpected and undocumented, which were analyzed by the tracks they left. This not only cast doubt on whether protons and neutrons were elementary (could they be broken down further?), but also produced many particles that scientists struggled to explain or classify. What was an elementary particle? How do we classify all these particles? Scientists had no easy answers then. In fact, the situation was so chaotic that, while a person discovering a new particle a few decades earlier would have been considered for a Nobel Prize, doing the same now should probably be deterred with a fine. Particle physics was a messy attic full of unorganized items and desperately needed sorting.
While the situation was scary at this point, there was light at the end of the tunnel. All the particles and antiparticles found so far could be roughly classified, based on mass and a few other characteristics, into three categories: baryons, mesons, and leptons, in decreasing order of mass. The lepton situation – electrons, muons, neutrinos and so forth – was reasonably tidy and did not require much classification.
For the baryons and mesons, a classification system fell into place between 1961 and 1964 thanks to the efforts of Murray Gell-Mann (and, separately, Ne'eman). Called the Eightfold Way, it was a complex yet surprisingly beautiful system that organized the baryons and mesons into interesting geometric arrangements that reflected their properties.
While this was a useful approach, it still didn't answer the question of which particles were "elementary" and which were not. Clearly the leptons were, but what about the hadrons, that is, the baryons and mesons collectively? There were just too many of them for one to be certain which, if any, were elementary. Thankfully, a solution arrived in 1964, again from Gell-Mann, who proposed that all baryons and mesons are made up of elementary particles called "quarks" held together by other particles called "gluons".
A special property called "quark confinement" means that quarks have never been observed existing freely in nature. However, indirect studies in the late 1960s provided strong evidence for their existence, and quarks became well established. The theory of quantum chromodynamics (QCD) deals with quarks, gluons and all their interactions.
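One simple way to see how the quark picture tidies things up is to add up electric charges. In the quark model, the up quark carries charge +2/3 and the down quark −1/3 (in units of the proton's charge); a proton is two ups and a down, a neutron one up and two downs. The snippet below just does that bookkeeping – a toy illustration of the counting, not of the full theory:

```python
from fractions import Fraction

# Electric charges of the two lightest quarks, in units of the proton's charge.
QUARK_CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def total_charge(quark_content: str) -> Fraction:
    """Add up the charges of the quarks making up a hadron, e.g. 'uud' for a proton."""
    return sum(QUARK_CHARGE[q] for q in quark_content)

print("proton  (uud):", total_charge("uud"))   # 1 -> charge +1, as observed
print("neutron (udd):", total_charge("udd"))   # 0 -> electrically neutral, as observed
```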
If you found this section a bit jargon-heavy, you're not the only one! Scientists found these ideas for classifying particles very confusing in the beginning. I'll explain more in the next article, where I explore the Standard Model.
With the work of Gell-Mann, the beginnings of quantum chromodynamics and the discovery of the intermediate vector bosons in 1983, most of particle physics was in an orderly state by the 1980s. Scientists went so far as to devise the Standard Model – a theory that explains most of the Universe in terms of 61 elementary particles and antiparticles, along with their properties and interactions. The Standard Model is a rich topic which I will discuss in detail in my next article; here, I will only briefly mention its importance. It has stood the test of time and even predicted the existence of the Higgs boson, observed in 2012. It is largely consistent, spans three of the four fundamental forces – gravity being the exception – and even unifies the electromagnetic and weak forces. It is the most complete explanation of the universe particle physicists have to date.
There are some issues that are yet unresolved – the inclusion of gravity and the explanation of dark matter are two prominent ones. However, the future looks hopeful for us particle physicists. There are many fields rich with problems and mysteries for us to tackle. In the process, we try to unravel the enigma of the universe and make sense of it. We hope to get closer to the truth and one day be able to explain the universe with one consistent theory.
So, what next? My upcoming article will explain the basics of the Standard Model and the classification of particles in greater detail. In the process, I will also provide a more concrete definition of what a particle is. Readers will have noticed that I've kept the definition loose so far, to avoid imposing an artificial sense of strictness before we actually know the matter at hand.
If history has any lesson for us particle physicists, it is that what appears to be the truth at one point can be completely turned upside down by new discoveries. Driven by curiosity, our job is not to cling to old theories but to accept new facts and develop better explanations. After all, to use an analogy, we are spectators watching a board game whose rules we do not know. With every observation, we try to improve our understanding of the rules of the game. And every time an unexpected move occurs, we edit our rulebook, adding new rules and taking out old ones. This is what any particle physicist, and any scientist in general, does.