Quantum mechanics (QM; also known as quantum physics, quantum theory, the wave mechanical model, or matrix mechanics), including quantum field theory, is a fundamental theory in physics which describes nature at the smallest – including atomic and subatomic – scales.[2]
Classical physics, the description of physics existing before the formulation of the theory of relativity and of quantum mechanics, describes nature at ordinary (macroscopic) scale. Most theories in classical physics can be derived from quantum mechanics as an approximation valid at large (macroscopic) scale.[3] Quantum mechanics differs from classical physics in that energy, momentum, angular momentum, and other quantities of a bound system are restricted to discrete values (quantization), objects have characteristics of both particles and waves (wave-particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (the uncertainty principle).[note 1]
Quantum mechanics gradually arose from theories to explain observations which could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, and from the correspondence between energy and frequency in Albert Einstein's 1905 paper which explained the photoelectric effect. Early quantum theory was profoundly re-conceived in the mid-1920s by Erwin Schrödinger, Werner Heisenberg, Max Born and others. The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical function, the wave function, provides information about the probability amplitude of energy, momentum, and other physical properties of a particle.
History
Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations.[5] In 1803, English polymath Thomas Young described the famous double-slit experiment.[6] This experiment played a major role in the general acceptance of the wave theory of light.
In 1838, Michael Faraday discovered cathode rays. These studies were followed by the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck.[7] Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets) precisely matched the observed patterns of black-body radiation.
In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation,[8] known as Wien's law in his honor. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law, which led to the development of quantum mechanics.
Following Max Planck's solution in 1900 to the black-body radiation problem (reported 1859), Albert Einstein offered a quantum-based theory to explain the photoelectric effect (1905, reported 1887). Around 1900–1910, the atomic theory and the corpuscular theory of light[9] first came to be widely accepted as scientific fact; these latter theories can be viewed as quantum theories of matter and electromagnetic radiation, respectively. Although the atom was accepted as existing, for a decade after 1905, the photon theory was rejected by most. Even up to the time of Einstein's Nobel Prize, Niels Bohr did not believe in the photon.[10]
Among the first to study quantum phenomena in nature were Arthur Compton, C. V. Raman, and Pieter Zeeman, each of whom has a quantum effect named after him. Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. At the same time, Ernest Rutherford experimentally discovered the nuclear model of the atom, for which Niels Bohr developed his theory of the atomic structure, which was later confirmed by the experiments of Henry Moseley. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits, a concept also introduced by Arnold Sommerfeld.[11] This phase is known as old quantum theory.
According to Planck, each energy element (E) is proportional to its frequency (ν):
E = hν,
where h is Planck's constant.
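As a quick numerical check of the Planck relation (an illustrative sketch; the frequency chosen below, roughly that of green light, is only an example):

```python
# Planck relation: E = h * nu.
h = 6.62607015e-34  # Planck's constant in J*s (exact by SI definition)

def photon_energy(nu_hz):
    """Energy in joules of a single quantum of radiation at frequency nu_hz."""
    return h * nu_hz

# Green light at about 5.6e14 Hz: each quantum carries roughly 3.7e-19 J.
E_green = photon_energy(5.6e14)
```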
Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself.[12] In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery.[13] However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. He won the 1921 Nobel Prize in Physics for this work.
Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle, or quanta (later called the photon), with a discrete quantum of energy that was dependent on its frequency.[14] In his paper “On the Quantum Theory of Radiation,” Einstein expanded on the interaction between energy and matter to explain the absorption and emission of energy by atoms. Although overshadowed at the time by his general theory of relativity, this paper articulated the mechanism underlying the stimulated emission of radiation,[15] which became the basis of the laser.
The foundations of quantum mechanics were established during the first half of the 20th century by Max Planck, Niels Bohr, Werner Heisenberg, Louis de Broglie, Arthur Compton, Albert Einstein, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Enrico Fermi, Wolfgang Pauli, Max von Laue, Freeman Dyson, David Hilbert, Wilhelm Wien, Satyendra Nath Bose, Arnold Sommerfeld, and others. The Copenhagen interpretation of Niels Bohr became widely accepted.
In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons (1926). In 1926 Erwin Schrödinger suggested a partial differential equation for the wave functions of particles like electrons. And when effectively restricted to a finite region, this equation allowed only certain modes, corresponding to discrete quantum states – whose properties turned out to be exactly the same as implied by matrix mechanics.[16] From Einstein's simple postulation was born a flurry of debating, theorizing, and testing. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.[citation needed]
It was found that subatomic particles and electromagnetic waves are neither simply particle nor wave but have certain properties of each. This originated the concept of wave–particle duality.[citation needed]
By 1930, quantum mechanics had been further unified and formalized by the work of David Hilbert, Paul Dirac and John von Neumann[17] with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. Its speculative modern developments include string theory and quantum gravity theories. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies.[citation needed]
While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductivity[18] and superfluidity.[19]
The word quantum derives from the Latin, meaning "how great" or "how much".[20] In quantum mechanics, it refers to a discrete unit assigned to certain physical quantities such as the energy of an atom at rest (see Figure 1). The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics dealing with atomic and subatomic systems which is today called quantum mechanics. It underlies the mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational physics, computational chemistry, quantum chemistry, particle physics, nuclear chemistry, and nuclear physics.[21][better source needed] Some fundamental aspects of the theory are still actively studied.[22]
Quantum mechanics is essential to understanding the behavior of systems at atomic length scales and smaller. If the physical nature of an atom were solely described by classical mechanics, electrons would not orbit the nucleus, since orbiting electrons emit radiation (due to circular motion) and would quickly collide with the nucleus due to this loss of energy. This framework was unable to explain the stability of atoms. Instead, electrons remain in an uncertain, non-deterministic, smeared, probabilistic wave–particle orbital about the nucleus, defying the traditional assumptions of classical mechanics and electromagnetism.[23]
Quantum mechanics was initially developed to provide a better explanation and description of the atom, especially the differences in the spectra of light emitted by different isotopes of the same chemical element, as well as subatomic particles. In short, the quantum-mechanical atomic model has succeeded spectacularly in the realm where classical mechanics and electromagnetism falter.
Broadly speaking, quantum mechanics incorporates four classes of phenomena for which classical physics cannot account:
- quantization of certain physical properties
- quantum entanglement
- principle of uncertainty
- wave–particle duality
Mathematical formulations
In the mathematically rigorous formulation of quantum mechanics developed by Paul Dirac,[24] David Hilbert,[25] John von Neumann,[26] and Hermann Weyl,[27] the possible states of a quantum mechanical system are symbolized[28] as unit vectors (called state vectors). Formally, these vectors are elements of a complex separable Hilbert space – variously called the state space or the associated Hilbert space of the system – that is well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just the product of two complex planes. Each observable is represented by a maximally Hermitian (precisely: by a self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum is discrete, the observable can attain only those discrete eigenvalues.
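These definitions can be made concrete in a few lines. The sketch below (an illustration, not part of the original text) uses NumPy and takes the spin of a spin-1/2 particle as the system, so the state space is just C²:

```python
import numpy as np

# A state is a unit vector in the Hilbert space C^2 (spin-1/2).
psi = np.array([1.0 + 1.0j, 2.0 - 1.0j])
psi = psi / np.linalg.norm(psi)  # normalize: <psi|psi> = 1

# An observable is a self-adjoint operator, e.g. the Pauli x matrix.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
assert np.allclose(sigma_x, sigma_x.conj().T)  # self-adjoint

# Its eigenvalues are the values the observable can attain; for a spin
# component (in units of hbar/2) they are the discrete values -1 and +1.
eigenvalues, eigenstates = np.linalg.eigh(sigma_x)
```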
In the formalism of quantum mechanics, the state of a system at a given time is described by a complex wave function, also referred to as state vector in a complex vector space.[29] This abstract mathematical object allows for the calculation of probabilities of outcomes of concrete experiments. For example, it allows one to compute the probability of finding an electron in a particular region around the nucleus at a particular time. Contrary to classical mechanics, one can never make simultaneous predictions of conjugate variables, such as position and momentum, to arbitrary precision. For instance, electrons may be considered (to a certain probability) to be located somewhere within a given region of space, but with their exact positions unknown. Contours of constant probability density, often referred to as "clouds", may be drawn around the nucleus of an atom to conceptualize where the electron might be located with the most probability. Heisenberg's uncertainty principle quantifies the inability to precisely locate the particle given its conjugate momentum.[30]
According to one interpretation, as the result of a measurement, the wave function containing the probability information for a system collapses from a given initial state to a particular eigenstate. The possible results of a measurement are the eigenvalues of the operator representing the observable – which explains the choice of Hermitian operators, for which all the eigenvalues are real. The probability distribution of an observable in a given state can be found by computing the spectral decomposition of the corresponding operator. Heisenberg's uncertainty principle is represented by the statement that the operators corresponding to certain observables do not commute.
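Continuing with a spin-1/2 system (an illustrative sketch; the particular state is an arbitrary choice): the measurement probabilities come from the spectral decomposition, and the uncertainty principle shows up as a nonzero commutator.

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# Spectral decomposition of sigma_x: real eigenvalues, orthonormal eigenvectors.
vals, vecs = np.linalg.eigh(sigma_x)

# Born rule: in state psi, outcome vals[k] occurs with probability |<e_k|psi>|^2.
psi = np.array([1.0, 0.0], dtype=complex)  # "spin up" along z
probs = np.abs(vecs.conj().T @ psi) ** 2   # here the two outcomes are equally likely

# Non-commuting observables cannot share an eigenbasis: [sigma_x, sigma_z] != 0.
commutator = sigma_x @ sigma_z - sigma_z @ sigma_x
```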
The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wave function collapse" (see, for example, the relative state interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled, so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics.[31]
Generally, quantum mechanics does not assign definite values. Instead, it makes a prediction using a probability distribution; that is, it describes the probability of obtaining the possible outcomes from measuring an observable. The electron's location, for example, is described by a probability cloud: an approximate picture (better than the Bohr model) in which the probability of finding the electron at a given point is the squared modulus of the complex amplitude of the wave function there.[32][33] Naturally, these probabilities will depend on the quantum state at the "instant" of the measurement. Hence, uncertainty is involved in the value. There are, however, certain states that are associated with a definite value of a particular observable. These are known as eigenstates of the observable ("eigen" can be translated from German as meaning "inherent" or "characteristic").[34]
In the everyday world, it is natural and intuitive to think of everything (every observable) as being in an eigenstate. Everything appears to have a definite position, a definite momentum, a definite energy, and a definite time of occurrence. However, quantum mechanics does not pinpoint the exact values of a particle's position and momentum (since they are conjugate pairs) or its energy and time (since they too are conjugate pairs). Rather, it provides only a range of probabilities for the values a measurement of those quantities might yield. Therefore, it is helpful to use different words to describe states having uncertain values and states having definite values (eigenstates).
Usually, a system will not be in an eigenstate of the observable we are interested in. However, if one measures the observable, the wave function will instantaneously become an eigenstate (or "generalized" eigenstate) of that observable. This process is known as wave function collapse, a controversial and much-debated process[35] that involves expanding the system under study to include the measurement device. If one knows the corresponding wave function at the instant before the measurement, one will be able to compute the probability of the wave function collapsing into each of the possible eigenstates.
For example, a free particle will usually have a wave function that is a wave packet centered around some mean position x0 (an eigenstate of neither position nor momentum). When one measures the position of the particle, it is impossible to predict the result with certainty.[31] It is probable, but not certain, that it will be near x0, where the amplitude of the wave function is large. After the measurement is performed, having obtained some result x, the wave function collapses into a position eigenstate centered at x.[36]
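This measurement scenario can be simulated on a grid (a sketch with arbitrarily chosen packet parameters): the Born rule fixes the distribution of results, and repeated measurements on identically prepared packets cluster near x0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized Gaussian wave packet centered at x0 (arbitrary units).
x0, width = 0.0, 1.0
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(-(x - x0) ** 2 / (4 * width ** 2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # normalize

# Born rule: probability of finding the particle in each grid cell.
p = np.abs(psi) ** 2 * dx

# Repeated position measurements on identically prepared systems:
samples = rng.choice(x, size=10_000, p=p / p.sum())
# Individual results are random, but they cluster around x0 with a
# spread set by the packet width.
```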
The time evolution of a quantum state is described by the Schrödinger equation, in which the Hamiltonian (the operator corresponding to the total energy of the system) generates the time evolution. The time evolution of wave functions is deterministic in the sense that – given a wave function at an initial time – it makes a definite prediction of what the wave function will be at any later time.[37]
During a measurement, on the other hand, the change of the initial wave function into another, later wave function is not deterministic; it is unpredictable (i.e., random).[38][39]
Wave functions change as time progresses. The Schrödinger equation describes how wave functions change in time, playing a role similar to Newton's second law in classical mechanics. The Schrödinger equation, applied to the aforementioned example of the free particle, predicts that the center of a wave packet will move through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more uncertain with time. This also has the effect of turning a position eigenstate (which can be thought of as an infinitely sharp wave packet) into a broadened wave packet that no longer represents a (definite, certain) position eigenstate.[40]
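The spreading described above can be verified numerically. The sketch below evolves a free Gaussian packet exactly in the momentum representation (natural units ħ = m = 1; the grid and packet width are arbitrary choices):

```python
import numpy as np

# Free-particle evolution in k-space (hbar = m = 1): each plane wave picks up
# the phase exp(-i k^2 t / 2), so the packet disperses and widens.
x = np.linspace(-40.0, 40.0, 4096)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)

sigma0 = 1.0
psi0 = np.exp(-x ** 2 / (4 * sigma0 ** 2)).astype(complex)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

def evolve(psi, t):
    """Exact free-particle evolution by time t."""
    return np.fft.ifft(np.exp(-1j * k ** 2 * t / 2) * np.fft.fft(psi))

def width(psi):
    """Standard deviation of the position probability density."""
    p = np.abs(psi) ** 2 * dx
    mean = np.sum(x * p)
    return np.sqrt(np.sum((x - mean) ** 2 * p))

psi_t = evolve(psi0, t=4.0)
# width(psi_t) exceeds width(psi0): position uncertainty grows with time.
```

For a Gaussian packet the analytic width is σ(t) = σ₀√(1 + (t/2σ₀²)²), which the numerical result can be checked against.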
Some wave functions produce probability distributions that are constant, or independent of time – in a stationary state of definite energy, for example, the time-dependent phase factor cancels in the absolute square of the wave function (this is related to the energy–time uncertainty principle). Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics, it is described by a static, spherically symmetric wave function surrounding the nucleus (Fig. 1) (however, only the lowest angular momentum states, labeled s, are spherically symmetric).[41]
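The "static" character of a stationary state follows from the fact that a state of definite energy evolves only by a global phase, ψ(t) = e^(−iEt/ħ)ψ(0). A minimal check (ħ = 1, with an arbitrary state and energy):

```python
import numpy as np

# A state of definite energy E evolves only by a global phase (hbar = 1),
# so the probability density |psi|^2 is independent of time.
psi0 = np.array([0.6, 0.8j])  # any normalized state vector
E, t = 2.5, 7.0
psi_t = np.exp(-1j * E * t) * psi0
# np.abs(psi_t)**2 equals np.abs(psi0)**2 componentwise.
```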
The Schrödinger equation acts on the entire probability amplitude, not merely its absolute value. Whereas the absolute value of the probability amplitude encodes information about probabilities, its phase encodes information about the interference between quantum states. This gives rise to the "wave-like" behavior of quantum states. As it turns out, analytic solutions of the Schrödinger equation are available for only a very small number of relatively simple model Hamiltonians, of which the quantum harmonic oscillator, the particle in a box, the dihydrogen cation, and the hydrogen atom are the most important representatives. Even the helium atom – which contains just one more electron than does the hydrogen atom – has defied all attempts at a fully analytic treatment.
There exist several techniques for generating approximate solutions, however. In the important method known as perturbation theory, one uses the analytic result for a simple quantum mechanical model to generate a result for a more complicated model that is related to the simpler model by (for one example) the addition of a weak potential energy. Another method is the "semi-classical equation of motion" approach, which applies to systems for which quantum mechanics produces only weak (small) deviations from classical behavior. These deviations can then be computed based on the classical motion. This approach is particularly important in the field of quantum chaos.
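A minimal sketch of the perturbative idea, using an arbitrary two-level toy Hamiltonian H = H₀ + λV: the first-order energy shift λ⟨n|V|n⟩ agrees with exact diagonalization up to terms of order λ².

```python
import numpy as np

# Unperturbed Hamiltonian with known eigenvalues 0 and 1.
H0 = np.diag([0.0, 1.0])
# A weak perturbation (arbitrary Hermitian matrix) with strength lam.
V = np.array([[0.3, 0.2], [0.2, -0.1]])
lam = 1e-3

# First-order perturbation theory for the ground state:
# E_0 ~ E_0^(0) + lam * <0|V|0>.
E0_pert = 0.0 + lam * V[0, 0]

# Exact ground-state energy of the full Hamiltonian.
E0_exact = np.linalg.eigvalsh(H0 + lam * V)[0]
# The two differ only at order lam^2.
```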
Mathematically equivalent formulations of quantum mechanics
There are numerous mathematically equivalent formulations of quantum mechanics. One of the oldest and most commonly used formulations is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger).[42]
Especially since Werner Heisenberg was awarded the Nobel Prize in Physics in 1932 for the creation of quantum mechanics, the role of Max Born in the development of QM was overlooked until the 1954 Nobel award. The role is noted in a 2005 biography of Born, which recounts his role in the matrix formulation of quantum mechanics, and the use of probability amplitudes. Heisenberg himself acknowledges having learned matrices from Born, as published in a 1940 festschrift honoring Max Planck.[43] In the matrix formulation, the instantaneous state of a quantum system encodes the probabilities of its measurable properties, or "observables". Examples of observables include energy, position, momentum, and angular momentum. Observables can be either continuous (e.g., the position of a particle) or discrete (e.g., the energy of an electron bound to a hydrogen atom).[44] An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics.
Interactions with other scientific theories
The rules of quantum mechanics are fundamental. They assert that the state space of a system is a Hilbert space (crucially, that the space has an inner product) and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system. An important guide for making these choices is the correspondence principle, which states that the predictions of quantum mechanics reduce to those of classical mechanics when a system moves to higher energies or, equivalently, larger quantum numbers: whereas a single particle exhibits a degree of randomness, in systems incorporating millions of particles averaging takes over and, in the high-energy limit, the statistical probability of random behaviour approaches zero. In other words, classical mechanics is simply a quantum mechanics of large systems. This "high energy" limit is known as the classical or correspondence limit. One can even start from an established classical model of a particular system, then attempt to guess the underlying quantum model that would give rise to the classical model in the correspondence limit.
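The correspondence limit can be illustrated with the quantum harmonic oscillator, whose levels are E_n = ħω(n + 1/2): the spacing between adjacent levels, relative to the energy itself, vanishes as the quantum number grows (units ħ = ω = 1 below):

```python
# Harmonic-oscillator energy levels E_n = n + 1/2 in units hbar = omega = 1.
def energy(n):
    return n + 0.5

def relative_spacing(n):
    """Gap to the next level, relative to the level's own energy."""
    return (energy(n + 1) - energy(n)) / energy(n)

# At small n the spectrum is visibly discrete; at large n the relative
# spacing is tiny and the spectrum looks effectively continuous.
low, high = relative_spacing(1), relative_spacing(10 ** 6)
```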
Unsolved problem in physics:
In the correspondence limit of quantum mechanics: Is there a preferred interpretation of quantum mechanics? How does the quantum description of reality, which includes elements such as the "superposition of states" and "wave function collapse", give rise to the reality we perceive?
When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.
Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been employed since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical Coulomb potential. This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles.
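The semi-classical hydrogen model mentioned above reproduces the familiar energy levels E_n = −Ry/n² with the Rydberg energy Ry ≈ 13.6 eV; a small sketch (the Rydberg value is rounded):

```python
# Hydrogen bound-state energies from the elementary Coulomb-potential model:
# E_n = -Ry / n^2, with the Rydberg energy Ry ~ 13.6 eV.
RYDBERG_EV = 13.6057

def hydrogen_level(n):
    """Energy in eV of the hydrogen level with principal quantum number n."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    return -RYDBERG_EV / n ** 2

# The n = 2 -> n = 1 transition releases about 10.2 eV (the Lyman-alpha line).
lyman_alpha_ev = hydrogen_level(2) - hydrogen_level(1)
```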
Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg. These three men shared the Nobel Prize in Physics in 1979 for this work.[45]
It has proven difficult to construct quantum models of gravity, the remaining fundamental force. Semi-classical approximations are workable, and have led to predictions such as Hawking radiation. However, the formulation of a complete theory of quantum gravity is hindered by apparent incompatibilities between general relativity (the most accurate theory of gravity currently known) and some of the fundamental assumptions of quantum theory. The resolution of these incompatibilities is an area of active research, and theories such as string theory are among the possible candidates for a future theory of quantum gravity.
Classical mechanics has also been extended into the complex domain, with complex classical mechanics exhibiting behaviors similar to quantum mechanics.[46]
Quantum mechanics and classical physics
Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy.[47] According to the correspondence principle between classical and quantum mechanics, all objects obey the laws of quantum mechanics, and classical mechanics is just an approximation for large systems of objects (or a statistical quantum mechanics of a large collection of particles).[48] The laws of classical mechanics thus follow from the laws of quantum mechanics as a statistical average at the limit of large systems or large quantum numbers.[49] However, chaotic systems do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.
Quantum coherence is an essential difference between classical and quantum theories as illustrated by the Einstein–Podolsky–Rosen (EPR) paradox – an attack on a certain philosophical interpretation of quantum mechanics by an appeal to local realism.[50] Quantum interference involves adding together probability amplitudes, whereas classical "waves" combine by adding intensities. For microscopic bodies, the extension of the system is much smaller than the coherence length, which gives rise to long-range entanglement and other nonlocal phenomena characteristic of quantum systems.[51] Quantum coherence is not typically evident at macroscopic scales, though an exception to this rule may occur at extremely low temperatures (i.e. approaching absolute zero) at which quantum behavior may manifest itself macroscopically.[52] This is in accordance with the following observations:
- Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics.[53]
- While the seemingly "exotic" behavior of matter posited by quantum mechanics and relativity theory becomes more apparent when dealing with particles of extremely small size or velocities approaching the speed of light, the laws of classical, often considered "Newtonian", physics remain accurate in predicting the behavior of the vast majority of "large" objects (on the order of the size of large molecules or bigger) at velocities much smaller than the velocity of light.[54]
Copenhagen interpretation of quantum versus classical kinematics
A big difference between classical and quantum mechanics is that they use very different kinematic descriptions.[55]
In Niels Bohr's mature view, quantum mechanical phenomena are required to be experiments, with complete descriptions of all the devices for the system, preparative, intermediary, and finally measuring. The descriptions are in macroscopic terms, expressed in ordinary language, supplemented with the concepts of classical mechanics.[56][57][58][59] The initial condition and the final condition of the system are respectively described by values in a configuration space, for example a position space, or some equivalent space such as a momentum space. Quantum mechanics does not admit a completely precise description, in terms of both position and momentum, of an initial condition or "state" (in the classical sense of the word) that would support a precisely deterministic and causal prediction of a final condition.[60][61] In this sense, advocated by Bohr in his mature writings, a quantum phenomenon is a process, a passage from initial to final condition, not an instantaneous "state" in the classical sense of that word.[62][63] Thus there are two kinds of processes in quantum mechanics: stationary and transitional. For a stationary process, the initial and final condition are the same. For a transition, they are different. Obviously by definition, if only the initial condition is given, the process is not determined.[60] Given its initial condition, prediction of its final condition is possible, causally but only probabilistically, because the Schrödinger equation is deterministic for wave function evolution, but the wave function describes the system only probabilistically.[64][65]
For many experiments, it is possible to think of the initial and final conditions of the system as being a particle. In some cases it appears that there are potentially several spatially distinct pathways or trajectories by which a particle might pass from initial to final condition. It is an important feature of the quantum kinematic description that it does not permit a unique definite statement of which of those pathways is actually followed. Only the initial and final conditions are definite, and, as stated in the foregoing paragraph, they are defined only as precisely as allowed by the configuration space description or its equivalent. In every case for which a quantum kinematic description is needed, there is always a compelling reason for this restriction of kinematic precision. An example of such a reason is that for a particle to be experimentally found in a definite position, it must be held motionless; for it to be experimentally found to have a definite momentum, it must have free motion; these two are logically incompatible.[66][67]
Classical kinematics does not primarily demand experimental description of its phenomena. It allows completely precise description of an instantaneous state by a value in phase space, the Cartesian product of configuration and momentum spaces. This description simply assumes or imagines a state as a physically existing entity without concern about its experimental measurability. Such a description of an initial condition, together with Newton's laws of motion, allows a precise deterministic and causal prediction of a final condition, with a definite trajectory of passage. Hamiltonian dynamics can be used for this. Classical kinematics also allows the description of a process analogous to the initial and final condition description used by quantum mechanics. Lagrangian mechanics applies to this.[68] For processes that need account to be taken of actions of a small number of Planck constants, classical kinematics is not adequate; quantum mechanics is needed.
General relativity and quantum mechanics[edit]
Even with the defining postulates of both Einstein's theory of general relativity and quantum theory being indisputably supported by rigorous and repeated empirical evidence, and while they do not directly contradict each other theoretically (at least with regard to their primary claims), they have proven extremely difficult to incorporate into one consistent, cohesive model.[69]
Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and in the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between the two theories has been a major goal of 20th- and 21st-century physics. Many prominent physicists, including Stephen Hawking, have labored for many years in the attempt to discover a theory underlying everything. This TOE would not only combine the different models of subatomic physics, but also derive the four fundamental forces of nature – the strong force, electromagnetism, the weak force, and gravity – from a single force or phenomenon. While Stephen Hawking was initially a believer in the Theory of Everything, after considering Gödel's incompleteness theorem, he concluded that one is not obtainable, and stated so publicly in his lecture "Gödel and the End of Physics" (2002).[70]
Attempts at a unified field theory[edit]
The quest to unify the fundamental forces through quantum mechanics is still ongoing. Quantum electrodynamics (or "quantum electromagnetism"), which is currently (in the perturbative regime at least) the most accurately tested physical theory in competition with general relativity,[71][72] has been successfully merged with the weak nuclear force into the electroweak force, and work is currently being done to merge the electroweak and strong force into the electrostrong force. Current predictions state that at around 10¹⁴ GeV the three aforementioned forces are fused into a single unified field.[73] Beyond this "grand unification", it is speculated that it may be possible to merge gravity with the other three gauge symmetries, expected to occur at roughly 10¹⁹ GeV. However, while special relativity is parsimoniously incorporated into quantum electrodynamics, general relativity, currently the best theory describing the gravitational force, has not been fully incorporated into quantum theory. One of those searching for a coherent TOE is Edward Witten, a theoretical physicist who formulated M-theory, an attempt at describing the supersymmetry-based string theories. M-theory posits that our apparent 4-dimensional spacetime is in fact an 11-dimensional spacetime containing 10 spatial dimensions and 1 time dimension, although 7 of the spatial dimensions are, at lower energies, completely "compactified" (or infinitely curved) and not readily amenable to measurement or probing.
Another popular theory is loop quantum gravity (LQG), first proposed by Carlo Rovelli, which describes the quantum properties of gravity. It is also a theory of quantum space and quantum time, because in general relativity the geometry of spacetime is a manifestation of gravity. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. The main output of the theory is a physical picture of space in which space is granular. The granularity is a direct consequence of the quantization. It is of the same nature as the granularity of the photons in the quantum theory of electromagnetism, or the discrete energy levels of atoms. But here it is space itself which is discrete. More precisely, space can be viewed as an extremely fine fabric or network "woven" of finite loops. These networks of loops are called spin networks. The evolution of a spin network over time is called a spin foam. The predicted size of this structure is the Planck length, which is approximately 1.616×10⁻³⁵ m. According to the theory, there is no meaning to a length shorter than this (cf. Planck scale energy). Therefore, LQG predicts that not just matter, but space itself, has an atomic structure.
Philosophical implications[edit]
Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. Even fundamental issues, such as Max Born's basic rules concerning probability amplitudes and probability distributions, took decades to be appreciated by society and many leading scientists. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics."[74] According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics."[75]
The Copenhagen interpretation – due largely to Niels Bohr and Werner Heisenberg – remains most widely accepted amongst physicists, some 75 years after its enunciation. According to this interpretation, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but instead must be considered a final renunciation of the classical idea of "causality." It is also believed therein that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the conjugate nature of evidence obtained under different experimental situations.
Albert Einstein, himself one of the founders of quantum theory, did not accept some of the more philosophical or metaphysical interpretations of quantum mechanics, such as rejection of determinism and of causality. He is famously quoted as saying, in response to this aspect, "God does not play with dice".[76] He rejected the concept that the state of a physical system depends on the experimental arrangement for its measurement. He held that a state of nature occurs in its own right, regardless of whether or how it might be observed. In that view, he is supported by the currently accepted definition of a quantum state, which remains invariant under arbitrary choice of configuration space for its representation, that is to say, manner of observation. He also held that underlying quantum mechanics there should be a theory that thoroughly and directly expresses the rule against action at a distance; in other words, he insisted on the principle of locality. He considered, but rejected on theoretical grounds, a particular proposal for hidden variables to obviate the indeterminism or acausality of quantum mechanical measurement. He considered that quantum mechanics was a currently valid but not a permanently definitive theory for quantum phenomena. He thought its future replacement would require profound conceptual advances, and would not come quickly or easily. The Bohr-Einstein debates provide a vibrant critique of the Copenhagen Interpretation from an epistemological point of view. In arguing for his views, he produced a series of objections, the most famous of which has become known as the Einstein–Podolsky–Rosen paradox.
John Bell showed that this EPR paradox led to experimentally testable differences between quantum mechanics and theories that rely on added hidden variables. Experiments have been performed confirming the accuracy of quantum mechanics, thereby demonstrating that quantum mechanics cannot be improved upon by the addition of local hidden variables.[77] Alain Aspect's initial experiments in 1982, and many subsequent experiments since, have definitively verified quantum entanglement. By the early 1980s, experiments had shown that such inequalities were indeed violated in practice – so that there were in fact correlations of the kind suggested by quantum mechanics. At first these just seemed like isolated esoteric effects, but by the mid-1990s they were being codified in the field of quantum information theory, and led to constructions with names like quantum cryptography and quantum teleportation.[78]
Entanglement, as demonstrated in Bell-type experiments, does not, however, violate causality, since no transfer of information happens. Quantum entanglement forms the basis of quantum cryptography, which is proposed for use in high-security commercial applications in banking and government.
The Everett many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.[79] This is not accomplished by introducing some "new axiom" to quantum mechanics, but on the contrary, by removing the axiom of the collapse of the wave packet. All of the possible consistent states of the measured system and the measuring apparatus (including the observer) are present in a real physical – not just formally mathematical, as in other interpretations – quantum superposition. Such a superposition of consistent state combinations of different systems is called an entangled state. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we can only observe the universe (i.e., the consistent state contribution to the aforementioned superposition) that we, as observers, inhabit. Everett's interpretation is perfectly consistent with John Bell's experiments and makes them intuitively understandable. However, according to the theory of quantum decoherence, these "parallel universes" will never be accessible to us. The inaccessibility can be understood as follows: once a measurement is done, the measured system becomes entangled with both the physicist who measured it and a huge number of other particles, some of which are photons flying away at the speed of light towards the other end of the universe. In order to prove that the wave function did not collapse, one would have to bring all these particles back and measure them again, together with the system that was originally measured. Not only is this completely impractical, but even if one could theoretically do this, it would have to destroy any evidence that the original measurement took place (including the physicist's memory). 
In light of these Bell tests, Cramer (1986) formulated his transactional interpretation[80] which is unique in providing a physical explanation for the Born rule.[81] Relational quantum mechanics appeared in the late 1990s as the modern derivative of the Copenhagen Interpretation.
Applications[edit]
Quantum mechanics has had enormous[82] success in explaining many of the features of our universe. Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Quantum mechanics has strongly influenced string theories, candidates for a Theory of Everything (see reductionism).
Quantum mechanics is also critically important for understanding how individual atoms are joined by covalent bond to form molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. Quantum mechanics can also provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others and the magnitudes of the energies involved.[83] Furthermore, most of the calculations performed in modern computational chemistry rely on quantum mechanics.
In many aspects modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy.[84] Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA.[85]
Electronics[edit]
Many modern electronic devices are designed using quantum mechanics. Examples include the laser, the transistor (and thus the microchip), the electron microscope, and magnetic resonance imaging (MRI). The study of semiconductors led to the invention of the diode and the transistor, which are indispensable parts of modern electronics systems and of computer and telecommunication devices. Other applications are the laser diode and the light-emitting diode, which are high-efficiency sources of light.
Many electronic devices operate by the effect of quantum tunneling. It even exists in the simple light switch: the switch would not work if electrons could not quantum-tunnel through the layer of oxidation on the metal contact surfaces. Flash memory chips found in USB drives use quantum tunneling to erase their memory cells. Some negative differential resistance devices, such as the resonant tunneling diode, also utilize the quantum tunneling effect. Unlike classical diodes, its current is carried by resonant tunneling through two or more potential barriers. Its negative resistance behavior can only be understood with quantum mechanics: as the confined state moves close to the Fermi level, the tunnel current increases; as it moves away, the current decreases. Quantum mechanics is necessary for understanding and designing such electronic devices.
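The dependence of tunneling on barrier thickness described above can be illustrated with the standard rectangular-barrier transmission formula; this is a sketch with assumed, illustrative values, not a model of any specific device:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # 1 eV in joules

def transmission(E_eV, V0_eV, width_nm):
    """Transmission probability for an electron tunneling through a
    rectangular barrier of height V0 and the given width, for E < V0."""
    E, V0 = E_eV * EV, V0_eV * EV
    a = width_nm * 1e-9
    kappa = math.sqrt(2 * M_E * (V0 - E)) / HBAR  # decay constant inside barrier
    return 1.0 / (1.0 + (V0**2 * math.sinh(kappa * a)**2) / (4 * E * (V0 - E)))

# A 1 eV electron meets a 2 eV barrier: classically T = 0, quantum mechanically T > 0,
# and T falls off steeply as the barrier gets thicker.
for w in (0.2, 0.5, 1.0):  # barrier widths in nm (assumed values)
    print(f"width {w} nm -> T = {transmission(1.0, 2.0, w):.3e}")
```

The steep decay with width is why tunneling currents are so sensitive to oxide-layer thickness in the devices mentioned above.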
Cryptography[edit]
Researchers are currently seeking robust methods of directly manipulating quantum states. Efforts are being made to more fully develop quantum cryptography, which will theoretically allow guaranteed secure transmission of information.
An inherent advantage yielded by quantum cryptography when compared to classical cryptography is the detection of passive eavesdropping. This is a natural result of the behavior of quantum bits; due to the observer effect, if a bit in a superposition state were to be observed, the superposition state would collapse into an eigenstate. Because the intended recipient was expecting to receive the bit in a superposition state, the intended recipient would know there was an attack, because the bit's state would no longer be in a superposition.[86]
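The eavesdropping-detection argument above can be sketched as a toy intercept-and-resend simulation in the spirit of the BB84 protocol; this is an illustrative classical model of the statistics, not a real protocol implementation, and the function name is made up:

```python
import random

def bb84_error_rate(n_photons, eavesdrop, seed=1):
    """Toy intercept-resend simulation: returns the error rate on the
    sifted key, i.e. on photons where sender and receiver bases agree."""
    rng = random.Random(seed)
    errors = kept = 0
    for _ in range(n_photons):
        bit = rng.randint(0, 1)
        basis_a = rng.randint(0, 1)          # sender encodes in a random basis
        photon_bit, photon_basis = bit, basis_a
        if eavesdrop:                        # eavesdropper measures and re-sends
            basis_e = rng.randint(0, 1)
            if basis_e != photon_basis:
                photon_bit = rng.randint(0, 1)  # wrong basis: random outcome
            photon_basis = basis_e
        basis_b = rng.randint(0, 1)          # receiver measures in a random basis
        result = photon_bit if basis_b == photon_basis else rng.randint(0, 1)
        if basis_b == basis_a:               # sifting: keep matching-basis photons
            kept += 1
            errors += (result != bit)
    return errors / kept

print(f"no eavesdropper : {bb84_error_rate(20000, False):.3f}")  # ~0.000
print(f"with eavesdropper: {bb84_error_rate(20000, True):.3f}")  # ~0.250
```

An intercept-resend attack disturbs roughly a quarter of the sifted bits, which is how the legitimate parties detect the eavesdropper.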
Quantum computing[edit]
Another goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Instead of using classical bits, quantum computers use qubits, which can exist in superpositions of states. Quantum programmers are able to manipulate the superposition of qubits in order to solve problems that classical computing cannot handle effectively, such as searching unsorted databases or integer factorization. IBM claims that the advent of quantum computing may advance the fields of medicine, logistics, financial services, artificial intelligence and cloud security.[87]
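A qubit in superposition can be modelled with a minimal state-vector sketch (illustrative only, not tied to any particular quantum SDK): a qubit is a normalized 2-component complex vector, and a gate is a unitary 2×2 matrix.

```python
import numpy as np

# |0> basis state and the Hadamard gate, which creates an equal superposition.
ket0 = np.array([1, 0], dtype=complex)
H_GATE = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H_GATE @ ket0            # equal superposition (|0> + |1>)/sqrt(2)
probs = np.abs(state) ** 2       # Born rule: measurement probabilities
print(probs)                     # probabilities 0.5 and 0.5

state = H_GATE @ state           # a second Hadamard restores |0>
print(np.abs(state) ** 2)        # probabilities 1 and 0, up to floating error
```

The second application of the gate shows interference: amplitudes, not probabilities, are what evolve, so the two paths to |1⟩ cancel exactly.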
Another active research topic is quantum teleportation, which deals with techniques to transmit quantum information over arbitrary distances.
Macroscale quantum effects[edit]
While quantum mechanics primarily applies to the smaller atomic regimes of matter and energy, some systems exhibit quantum mechanical effects on a large scale. Superfluidity, the frictionless flow of a liquid at temperatures near absolute zero, is one well-known example. So is the closely related phenomenon of superconductivity, the frictionless flow of an electron gas in a conducting material (an electric current) at sufficiently low temperatures. The fractional quantum Hall effect is a topological ordered state which corresponds to patterns of long-range quantum entanglement.[88] States with different topological orders (or different patterns of long range entanglements) cannot change into each other without a phase transition.
Quantum theory[edit]
Quantum theory also provides accurate descriptions for many previously unexplained phenomena, such as black-body radiation and the stability of the orbitals of electrons in atoms. It has also given insight into the workings of many different biological systems, including smell receptors and protein structures.[89] Recent work on photosynthesis has provided evidence that quantum correlations play an essential role in this fundamental process of plants and many other organisms.[90] Even so, classical physics can often provide good approximations to results otherwise obtained by quantum physics, typically in circumstances with large numbers of particles or large quantum numbers. Since classical formulas are much simpler and easier to compute than quantum formulas, classical approximations are used and preferred when the system is large enough to render the effects of quantum mechanics insignificant.
Examples[edit]
Free particle[edit]
For example, consider a free particle. In quantum mechanics, a free particle is described by a wave function. The particle properties become apparent when we measure its position and velocity; the wave properties become apparent when we measure wave phenomena such as interference. The wave–particle duality feature is incorporated in the relations of coordinates and operators in the formulation of quantum mechanics. Since the particle is free (not subject to any interactions), its quantum state can be represented as a wave of arbitrary shape extending over space as a wave function. The position and momentum of the particle are observables. The uncertainty principle states that both the position and the momentum cannot simultaneously be measured with complete precision. However, one can measure the position (alone) of a moving free particle, creating an eigenstate of position with a wave function that is very large (a Dirac delta) at a particular position x, and zero everywhere else. If one performs a position measurement on such a wave function, the result x will be obtained with 100% probability (i.e., with full certainty, or complete precision). This is called an eigenstate of position – or, stated in mathematical terms, a generalized position eigenstate (eigendistribution). If the particle is in an eigenstate of position, then its momentum is completely unknown. On the other hand, if the particle is in an eigenstate of momentum, then its position is completely unknown.[91] In an eigenstate of momentum having a plane wave form, it can be shown that the wavelength is equal to h/p, where h is Planck's constant and p is the momentum of the eigenstate.[92]
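The de Broglie relation λ = h/p from the momentum eigenstate above can be checked numerically; the electron speed is an illustrative choice:

```python
# de Broglie wavelength lambda = h / p for a momentum eigenstate.
H_PLANCK = 6.62607015e-34   # Planck constant, J*s
M_E = 9.1093837015e-31      # electron mass, kg

v = 1.0e6                   # electron speed in m/s (illustrative value)
p = M_E * v                 # non-relativistic momentum
wavelength = H_PLANCK / p
print(f"lambda = {wavelength:.3e} m")  # ~7.3e-10 m, comparable to atomic spacings
```

This sub-nanometre wavelength is why electron diffraction resolves atomic structure.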
Particle in a box[edit]
The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region. For the one-dimensional case in the $x$ direction, the time-independent Schrödinger equation may be written[93]

$$ -\frac{\hbar^2}{2m} \frac{d^2 \psi}{dx^2} = E \psi. $$
With the differential operator defined by

$$ \hat{p}_x = -i\hbar \frac{d}{dx}, $$

the previous equation is evocative of the classic kinetic energy analogue,

$$ \frac{1}{2m} \hat{p}_x^2 \, \psi = E \psi, $$

with state $\psi$ in this case having energy $E$ coincident with the kinetic energy of the particle.
The general solutions of the Schrödinger equation for the particle in a box are

$$ \psi(x) = A e^{ikx} + B e^{-ikx}, \qquad E = \frac{\hbar^2 k^2}{2m}, $$

or, from Euler's formula,

$$ \psi(x) = C \sin(kx) + D \cos(kx). $$
The infinite potential walls of the box determine the values of $C$, $D$, and $k$ at $x = 0$ and $x = L$, where $\psi$ must be zero. Thus, at $x = 0$,

$$ \psi(0) = 0 = C \sin(0) + D \cos(0) = D, $$

and $D = 0$. At $x = L$,

$$ \psi(L) = 0 = C \sin(kL), $$

in which $C$ cannot be zero, as this would conflict with the Born interpretation. Therefore, since $\sin(kL) = 0$, $kL$ must be an integer multiple of $\pi$,

$$ k = \frac{n\pi}{L}, \qquad n = 1, 2, 3, \ldots $$
The quantization of energy levels follows from this constraint on $k$, since

$$ E = \frac{\hbar^2 \pi^2 n^2}{2mL^2} = \frac{n^2 h^2}{8mL^2}. $$

The ground state energy of the particle is

$$ E_1 = \frac{h^2}{8mL^2} $$

for $n = 1$. The energy of the particle in the $n$th state is

$$ E_n = n^2 E_1, \qquad n = 1, 2, 3, \ldots $$
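Plugging numbers into $E_n = n^2 h^2 / (8mL^2)$ gives a feel for the scale; the 1 nm box width below is an assumed, illustrative value:

```python
# Particle-in-a-box energies E_n = n^2 h^2 / (8 m L^2)
# for an electron in an assumed 1 nm wide box.
H = 6.62607015e-34          # Planck constant, J*s
M_E = 9.1093837015e-31      # electron mass, kg
EV = 1.602176634e-19        # 1 eV in joules

def box_energy(n, L):
    """Energy of the nth level of a 1-D infinite square well of width L."""
    return n**2 * H**2 / (8 * M_E * L**2)

L_box = 1e-9  # box width in metres (illustrative)
for n in (1, 2, 3):
    print(f"E_{n} = {box_energy(n, L_box) / EV:.3f} eV")
# Levels scale as n^2: E_2 = 4 E_1, E_3 = 9 E_1.
```

The ground state comes out at a fraction of an electronvolt, the energy scale of chemistry and of visible light.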
Particle in a box with boundary condition $V(x) = 0$ for $-L/2 \le x \le L/2$

In this condition the general solution will be the same; there will be little change to the final result, since the boundary conditions are changed only slightly:

At $x = 0$ the wave function is not actually zero for all values of $n$. From the variation of the wave function one sees that for odd $n$ the wave function follows a cosine curve with $x = 0$ as the origin, and for even $n$ the wave function follows a sine curve with $x = 0$ as the origin. From this observation we can conclude that the wave function is alternately cosine and sine. So in this case the resultant wave equation is

$$ \psi_n(x) = \sqrt{\frac{2}{L}}\, \begin{cases} \cos\left(\dfrac{n\pi x}{L}\right), & n \text{ odd}, \\[6pt] \sin\left(\dfrac{n\pi x}{L}\right), & n \text{ even}. \end{cases} $$
Finite potential well[edit]
A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth.
The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well.
Rectangular potential barrier[edit]
This is a model for the quantum tunneling effect which plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy. Quantum tunneling is central to physical phenomena involved in superlattices.
Harmonic oscillator[edit]
As in the classical case, the potential for the quantum harmonic oscillator is given by

$$ V(x) = \frac{1}{2} m \omega^2 x^2. $$
This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by

$$ \psi_n(x) = \frac{1}{\sqrt{2^n \, n!}} \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\frac{m\omega x^2}{2\hbar}} H_n\!\left(\sqrt{\frac{m\omega}{\hbar}}\, x\right), \qquad n = 0, 1, 2, \ldots, $$

where $H_n$ are the Hermite polynomials,

$$ H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n} e^{-x^2}, $$

and the corresponding energy levels are

$$ E_n = \hbar \omega \left(n + \tfrac{1}{2}\right). $$
This is another example illustrating the quantization of energy for bound states.
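The equally spaced levels $E_n = \hbar\omega(n + \tfrac{1}{2})$ can be tabulated numerically; the angular frequency below is an assumed, molecular-vibration-scale value:

```python
# Harmonic-oscillator levels E_n = hbar * omega * (n + 1/2).
HBAR = 1.054571817e-34      # reduced Planck constant, J*s
EV = 1.602176634e-19        # 1 eV in joules

def oscillator_energy(n, omega):
    """Energy of the nth bound state of the quantum harmonic oscillator."""
    return HBAR * omega * (n + 0.5)

omega = 1.0e14  # angular frequency in rad/s (illustrative)
levels = [oscillator_energy(n, omega) / EV for n in range(4)]
gaps = [b - a for a, b in zip(levels, levels[1:])]
print("levels (eV):", [f"{e:.4f}" for e in levels])
print("gap (eV):", f"{gaps[0]:.4f}")  # adjacent levels are equally spaced by hbar*omega
```

The constant spacing, with a nonzero ground-state ("zero-point") energy of $\tfrac{1}{2}\hbar\omega$, distinguishes the oscillator spectrum from the $n^2$ scaling of the box.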
Step potential[edit]
The potential in this case is given by:

$$ V(x) = \begin{cases} 0, & x < 0, \\ V_0, & x \ge 0. \end{cases} $$
The solutions are superpositions of left- and right-moving waves:

$$ \psi_1(x) = \frac{1}{\sqrt{k_1}} \left( A e^{i k_1 x} + A' e^{-i k_1 x} \right), \qquad x < 0, $$

and

$$ \psi_2(x) = \frac{1}{\sqrt{k_2}} \left( B e^{i k_2 x} + B' e^{-i k_2 x} \right), \qquad x > 0, $$

with coefficients A and B determined from the boundary conditions and by imposing a continuous derivative on the solution, and where the wave vectors are related to the energy via

$$ k_1 = \frac{\sqrt{2mE}}{\hbar} $$

and

$$ k_2 = \frac{\sqrt{2m(E - V_0)}}{\hbar}. $$
Each term of the solution can be interpreted as an incident, reflected, or transmitted component of the wave, allowing the calculation of transmission and reflection coefficients. Notably, in contrast to classical mechanics, incident particles with energies greater than the potential step are partially reflected.
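For an incident wave with $E > V_0$, matching the wave function and its derivative at the step gives the textbook coefficients $R = \left(\frac{k_1 - k_2}{k_1 + k_2}\right)^2$ and $T = \frac{4 k_1 k_2}{(k_1 + k_2)^2}$; a quick numerical check with illustrative values:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # 1 eV in joules

def step_coefficients(E_eV, V0_eV):
    """Reflection and transmission coefficients for a particle of
    energy E > V0 incident on a potential step of height V0."""
    k1 = math.sqrt(2 * M_E * E_eV * EV) / HBAR
    k2 = math.sqrt(2 * M_E * (E_eV - V0_eV) * EV) / HBAR
    R = ((k1 - k2) / (k1 + k2)) ** 2
    T = 4 * k1 * k2 / (k1 + k2) ** 2
    return R, T

# A 2 eV electron hitting a 1 eV step: classically R = 0, but quantum
# mechanically a few percent of incident particles are reflected.
R, T = step_coefficients(2.0, 1.0)
print(f"R = {R:.3f}, T = {T:.3f}, R + T = {R + T:.3f}")
```

Probability conservation requires $R + T = 1$, which the check confirms numerically.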
The Genesis and Present State of Development of the Quantum Theory
If I take it correctly that the duty imposed upon me today is to give a public lecture on my writings, then I believe that this task, the importance of which I am well aware through the gratitude felt towards the noble-minded founder of our Foundation, cannot be more suitably fulfilled than by my trying to give you the story of the origin of the quantum theory in broad outlines and to couple with this, a picture in a small frame, of the development of this theory up to now, and its present-day significance for physics.
When I look back to the time, already twenty years ago, when the concept and magnitude of the physical quantum of action began, for the first time, to unfold from the mass of experimental facts, and again, to the long and ever tortuous path which led, finally, to its disclosure, the whole development seems to me to provide a fresh illustration of the long-since proved saying of Goethe’s that man errs as long as he strives. And the whole strenuous intellectual work of an industrious research worker would appear, after all, in vain and hopeless, if he were not occasionally through some striking facts to find that he had, at the end of all his criss-cross journeys, at last accomplished at least one step which was conclusively nearer the truth. An indispensable hypothesis, even though still far from being a guarantee of success, is however the pursuit of a specific aim, whose lighted beacon, even by initial failures, is not betrayed.
For many years, such an aim for me was to find the solution to the problem of the distribution of energy in the normal spectrum of radiating heat. Since Gustav Kirchhoff had shown that the state of the heat radiation which takes place in a cavity bounded by any emitting and absorbing substances of uniform temperature is entirely independent of the nature of the substances, a universal function was demonstrated which was dependent only upon temperature and wavelength, but in no way upon the properties of any substance. And the discovery of this remarkable function promised deeper insight into the connection between energy and temperature which is, in fact, the major problem in thermodynamics and thus in the whole of molecular physics. To attain this there was no other way but to seek out from all the different substances existing in Nature one of known emissive and absorptive power, and to calculate the properties of the heat radiation in stationary energy exchange with it. According to Kirchhoff's Law, this would have to prove independent of the nature of the body.
Heinrich Hertz’s linear oscillator, whose laws of emission, for a given frequency, Hertz had just previously completely developed, seemed to me to be a particularly suitable device for this purpose. If a number of such Hertzian oscillators are set up within a cavity surrounded by a sphere of reflecting walls, then by analogy with audio oscillators and resonators, energy will be exchanged between them by the output and absorption of electromagnetic waves, and finally stationary radiation corresponding to Kirchhoff’s Law, the so-called black-body radiation, should be set up within the cavity. I was filled at that time with what would be thought today naively charming and agreeable expectations, that the laws of classical electrodynamics would, if approached in a sufficiently general manner with the avoidance of special hypotheses, be sufficient to enable us to grasp the most significant part of the process to be expected, and thus to achieve the desired aim. I, therefore, developed first the laws of emission and absorption of a linear resonator on the most general basis, in fact I proceeded on such a detour which could well have been avoided had I made use of the existing electron theory of H.A. Lorentz, already basically complete. But since I did not quite trust the electron hypothesis, I preferred to observe that energy which flowed in and out through an enclosing spherical surface around the resonator at a suitable distance from it. By this method, only processes in a pure vacuum came into account, but a knowledge of these was sufficient to draw the necessary conclusions however, about the energy changes in the resonator.
The fruit of this long series of investigations, of which some, by comparison with existing observations, mainly the vapour measurements by V. Bjerknes, were susceptible to checking, and were thereby confirmed, was the establishment of the general connection between the energy of a resonator of specific natural period of vibration and the energy radiation of the corresponding spectral region in the surrounding field under conditions of stationary energy exchange. The noteworthy result was found that this connection was in no way dependent upon the nature of the resonator, particularly its attenuation constants – a circumstance which I welcomed happily since the whole problem thus became simpler, for instead of the energy of radiation, the energy of the resonator could be taken and, thereby, a complex system, composed of many degrees of freedom, could be replaced by a simple system of one degree of freedom.
Nevertheless, the result meant no more than a preparatory step towards the initial onslaught on the particular problem which now towered with all its fearsome height even steeper before me. The first attempt upon it went wrong, for my original secret hope that the radiation emitted from the resonator can be in some characteristic way or other distinguished from the absorbed radiation and thereby allow a differential equation to be set up, from the integration of which one could gain some special condition for the properties of stationary radiation, proved false. The resonator reacted only to those rays which it also emitted, and was not in the slightest bit sensitive to the adjacent spectral regions.
Furthermore, my hypothesis that the resonator could exercise a unilateral, i.e. irreversible, effect upon the energy in the surrounding radiation field, was strongly contested by Ludwig Boltzmann, who, with his riper experience in these problems, proved that according to the laws of classical dynamics each of the processes observed by me can proceed in exactly the opposite direction, in such a way, that a spherical wave emitted from the resonator, returns and contracts in steadily diminishing concentric spherical surfaces inwards to the resonator, and is again absorbed by it, thereby allowing the formerly absorbed energy to be re-transmitted into space in the direction from which it came. And when I excluded this kind of singular process, such as an inwardly directed wave, by means of the introduction of a limiting definition, the hypothesis of natural radiation, all these analyses still showed ever more clearly that an important connecting element or term, essential for the complete grasp of the core of the problem, must be missing.
So there was nothing left for me but to tackle the problem from the opposite side, that of thermodynamics, in which field I felt, moreover, more confident. In fact my earlier studies of the Second Law of Heat Theory stood me in good stead, so that from the start I tried to get a connection, not between the temperature but rather the entropy of the resonator and its energy, and in fact, not its entropy exactly but the second derivative with respect to the energy since this has a direct physical meaning for the irreversibility of the energy exchange between resonator and radiation. Since I was, however, at that time still too far oriented towards the phenomenological aspect to come to closer quarters with the connection between entropy and probability, I saw myself, at first, relying solely upon the existing results of experience. In the foreground of interest at that time, in 1899, was the energy distribution law established by W. Wien shortly before, whose experimental proof was taken up, on the one hand, by F. Paschen at the Technische Hochschule in Hannover, and, on the other hand, by O. Lummer and E. Pringsheim at the State Institution in Charlottenburg. This law brought out the dependence of the radiation intensity on the temperature, representing it by an exponential function. If one calculates the connection between the entropy and the energy of a resonator, determined by the above law, the remarkable result is obtained that the reciprocal value of the above-mentioned differential coefficient, which I will call R, is proportional to the energy. This extremely simple relationship can be considered as the completely adequate expression of Wien’s energy distribution law; for with the dependence upon the energy, the dependence upon the wavelength is always directly given through the general, well-established displacement law by Wien.
Since the whole problem concerned a universal law of Nature, and since at that time, as still today, I held the unshakeable opinion that the simpler the presentation of a particular law of Nature, the more general it is – though at the same time, which formula to take as the simpler, is a problem which cannot always be confidently and finally decided – I believed for a long time that the law that the quantity R is proportional to the energy, should be looked upon as the basis for the whole energy distribution law. This concept could not be maintained for long in the face of fresh measurements. Whilst for small values of the energy and for short waves, Wien’s law was satisfactorily confirmed, noteworthy deviations for larger wavelengths were found, first by O. Lummer and E. Pringsheim, and finally by H. Rubens and F. Kurlbaum, whose measurements on the infrared residual rays of fluorite and rock salt revealed a totally different, though still extremely simple relationship, characterized by the fact that the quantity R is not proportional to the energy, but to the square of the energy, and in fact this holds with increasing accuracy for greater energies and wavelengths.
So, through direct experiment, two simple limits were determined for the function R: for small energies, proportionality with the energy; for greater energies, proportionality with the square of the energy. There was no better alternative than to make, for the general case, the quantity R equal to the sum of two terms, one of the first power, and one of the second power of the energy, so that for small energies the first is predominant, whilst for the greater energies the second is dominant. Thus the new radiation formula was found, which, in the face of its experimental proof, has stood firm to a reasonable extent until now. Even today, admittedly, we cannot talk of final exact confirmation. In fact, a fresh attempt at proof is urgently required.
However, even if the radiation formula should prove itself to be absolutely accurate, it would still only have, within the significance of a happily chosen interpolation formula, a strictly limited value. For this reason, I busied myself, from then on, that is, from the day of its establishment, with the task of elucidating a true physical character for the formula, and this problem led me automatically to a consideration of the connection between entropy and probability, that is, Boltzmann’s trend of ideas; until after some weeks of the most strenuous work of my life, light came into the darkness, and a new undreamed-of perspective opened up before me.
I must make a small intercalation at this point. According to Boltzmann, entropy is a measure for physical probability, and the nature and essence of the Second Law of Heat Theory is that in Nature a state occurs more frequently, the more probable it is. Now one always measures in Nature the difference in entropies, never the entropy itself, and to this extent one cannot speak of the absolute entropy of a state, without a certain arbitrariness. Nevertheless, it is useful to introduce the suitably defined absolute value of entropy, namely for the reason that with its help certain general laws can be particularly easily formulated. The case seems to be parallel, as I see it, with that of energy. Energy itself cannot be measured, only its difference. For that reason one used to deal, not with energy, but with work, and even Ernst Mach, who had so much to do with the Law of Conservation of Energy, and who in principle kept away from all speculations beyond the field of observation, has always avoided speaking of energy itself. Likewise, in thermochemistry, one has always stuck to the thermal effect, that is, to energy differences, until Wilhelm Ostwald in particular emphatically showed that many detailed considerations could be significantly abbreviated if one dealt with energy itself instead of with calorimetric numbers. The additive constant which was at first still undetermined in the expression for energy, has later been finally determined through the relativistic law of the proportionality between energy and inertia.
In a similar way to that for energy, an absolute value can be defined also for entropy and, as a result thereof, for the physical probability too, e.g. by so fixing the additive constant that energy and entropy disappear together. On the basis of a consideration of this kind a specific, relatively simple combinatorial method was obtained for the calculation of the physical probability of a specified energy distribution in a system of resonators, which led exactly to that entropy expression determined by the radiation law, and it brought me much-valued satisfaction for the many disappointments when Ludwig Boltzmann, in the letter returning my essay, expressed his interest and basic agreement with the train of thoughts expounded in it.
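Boltzmann's connection between entropy and probability, with the additive constant fixed as described, is the relation now usually written as:

```latex
% W is the number of "complexions" (the combinatorial count of the
% energy distribution); the additive constant is fixed so that S = 0
% when W = 1:
S = k \ln W .
```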
For the numerical treatment of the indicated consideration of probability, knowledge of two universal constants is required, both of which have an independent physical meaning, and whose subsequent evaluation from the law of radiation must provide proof as to whether the whole method is to be looked upon as a mere artifice for calculation, or whether it has an inherent real physical sense and interpretation. The first constant is of a more formal nature and is connected with the definition of temperature. If temperature were to be defined as the average kinetic energy of a molecule in an ideal gas, that is, as a tiny, little quantity, then the constant would have the value 2/3. In conventional temperature measure, on the contrary, the constant has an extremely small value which stands, naturally, in close association with the energy of a single molecule, and an exact knowledge of which leads, therefore, to the calculation of the mass of a molecule and those parameters related to it. This constant is often referred to as Boltzmann’s constant, although, to my knowledge, Boltzmann himself never introduced it – a peculiar state of affairs, which can be explained by the fact that Boltzmann, as appears from his occasional utterances, never gave thought to the possibility of carrying out an exact measurement of the constant. Nothing can better illustrate the positive and hectic pace of progress which the art of experimenters has made over the past twenty years, than the fact that since that time, not only one, but a great number of methods have been discovered for measuring the mass of a molecule with practically the same accuracy as that attained for a planet.
At the time when I carried out the corresponding calculation from the radiation law, an exact proof of the number obtained was quite impossible, and not much more could be done than to determine the order of magnitude which was admissible. It was shortly afterward that E. Rutherford and H. Geiger succeeded in determining, by direct counting of the alpha particles, the value of the electrical elementary charge, which they found to be 4.65 × 10⁻¹⁰ electrostatic units; and the agreement of this figure with the number calculated by me, 4.69 × 10⁻¹⁰, could be taken as decisive confirmation of the usefulness of my theory. Since then, more sophisticated methods have led to a slightly higher value, these measurements being carried out by E. Regener, R.A. Millikan, and others.
The explanation of the second universal constant of the radiation law was not so easy. Because it represents the product of energy and time (according to the first calculation it was 6.55 × 10⁻²⁷ erg sec), I described it as the elementary quantum of action. Whilst it was completely indispensable for obtaining the correct expression for entropy – since only with its help could the magnitude of the “elementary regions” or “free rooms for action” of the probability, decisive for the assigned probability consideration, be determined – it proved elusive and resistant to all efforts to fit it into the framework of classical theory. As long as it was looked upon as infinitely small, that is, for large energies or long periods of time, everything went well; but in the general case, however, a gap yawned open in some place or other, which was the more striking, the weaker and faster the vibrations that were considered. The foundering of all efforts to bridge the chasm soon left little doubt. Either the quantum of action was a fictional quantity, then the whole deduction of the radiation law was in the main illusory and represented nothing more than an empty non-significant play on formulae, or the derivation of the radiation law was based on a sound physical conception. In this case the quantum of action must play a fundamental role in physics, and here was something entirely new, never before heard of, which seemed called upon to basically revise all our physical thinking, built as this was, since the establishment of the infinitesimal calculus by Leibniz and Newton, upon the acceptance of the continuity of all causative connections.
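In modern notation, the two roles of the constant h mentioned here are the energy quantum of a resonator of frequency ν and the size of the elementary cells (the “free rooms for action”) in phase space; the quantization rule below is the later form of the old quantum theory, written here only to fix ideas:

```latex
\varepsilon = h\nu ,
\qquad
\oint p\,dq = n h \quad (n = 1, 2, \dots),
% each elementary region of the (q, p) phase plane has area h,
% which is why h carries the dimensions of energy × time (action).
```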
Experiment has decided for the second alternative. That the decision could be made so soon and so definitely was due not to the proving of the energy distribution law of heat radiation, still less to the special derivation of that law devised by me, but rather should it be attributed to the restless forward-thrusting work of those research workers who used the quantum of action to help them in their own investigations and experiments. The first impact in this field was made by A. Einstein who, on the one hand, pointed out that the introduction of the energy quanta, determined by the quantum of action, appeared suitable for obtaining a simple explanation for a series of noteworthy observations during the action of light, such as Stokes’ Law, electron emission, and gas ionization, and, on the other hand, derived a formula for the specific heat of a solid body through the identification of the expression for the energy of a system of resonators with that of the energy of a solid body, and this formula expresses, more or less correctly, the changes in specific heat, particularly its reduction with falling temperature. The result was the emergence, in all directions, of a number of problems whose more accurate and extensive elaboration in the course of time brought to light a mass of valuable material. I cannot give here even an approximate report on the abundance of the work carried out. Only the most important and characteristic steps along the path of progressive knowledge can be high-lighted here.
First come thermal and chemical processes. As far as the specific heat of solid bodies is concerned, Einstein’s theory, which rested upon the assumption of a single natural vibration of the atom, was extended by M. Born and Th. von Kármán to the case of various kinds of natural vibrations, which approached more nearly to the truth. P. Debye succeeded, by means of a bold simplification of the stipulations for the character of natural vibrations, in producing a relatively simple formula for the specific heat of solid bodies which, particularly for low temperatures, not only satisfactorily reproduces the measurements obtained by W. Nernst and his pupils, but is also compatible with the elastic and optical properties of these substances. The quantum of action also comes to the fore in considering the specific heat of gases. W. Nernst had earlier suggested that to the quantum of energy of a vibration there must also correspond a quantum of energy of a rotation, and accordingly it was to be expected that the rotational energy of the gas molecules would disappear with falling temperature. The measurements by A. Eucken on the specific heat of hydrogen confirmed this conclusion, and if the calculations of A. Einstein and O. Stern, P. Ehrenfest and others have not until now afforded any completely satisfactory agreement, this lies understandably in our, as yet, incomplete knowledge of the model of a hydrogen molecule. The fact that the rotations of the gas molecules, as specified by quantum conditions, do really exist in Nature, can no longer be doubted in view of the work on absorption bands in the infrared by N. Bjerrum, E. von Bahr, H. Rubens, G. Hettner and others, even though it has not been possible to give an all-round exhaustive explanation of these remarkable rotation spectra up to now.
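The two specific-heat results referred to can be sketched in their familiar modern forms (the characteristic temperatures Θ_E and Θ_D are the later standard notation, not Planck's):

```latex
% Einstein: N atoms, a single natural frequency \nu, \Theta_E = h\nu/k:
C_V = 3Nk \left(\frac{\Theta_E}{T}\right)^{\!2}
      \frac{e^{\Theta_E/T}}{\left(e^{\Theta_E/T}-1\right)^{2}} ,
% Debye: a spectrum of natural vibrations up to a cutoff; at low
% temperatures his formula gives the T^3 law,
C_V \;\longrightarrow\; \frac{12\pi^{4}}{5}\, Nk
      \left(\frac{T}{\Theta_D}\right)^{3}
      \qquad (T \ll \Theta_D).
```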
Since, ultimately, all affinity properties of a substance are determined by its entropy, the quantum-theoretical calculation of the entropy opens up the way to all the problems of chemical relationships. The Nernst chemical constant, which O. Sackur calculated directly through a combinatorial method as applied to oscillators, is characteristic for the absolute value of the entropy of a gas. H. Tetrode, in close association with the data to be obtained by measurement, determined the difference in entropy values between vapour and solid state by studying an evaporation process.
Whilst in the cases so far considered, states of thermodynamic equilibrium are concerned, for which therefore the measurements can only yield statistically average values relating to many particles and lengthy periods of time, the observation of electron impacts leads directly to the dynamic details of the process under examination. Thus the determination of the so-called resonance potential carried out by J. Franck and G. Hertz, that is, of the critical velocity which an electron must at least possess in order to cause emission of a light quantum or photon by impact with a neutral atom, supplied a method of measuring the quantum of action which was as direct as could be wished for. The experiments by D.L. Webster and E. Wagner and others resulted in the development of methods suitable for the Röntgen spectrum which also gave completely compatible results.
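The resonance-potential measurement yields h directly: an electron accelerated through a potential V can excite the atom, and so lead to emission of a quantum of frequency ν, only when (in modern notation)

```latex
e\,V_{\mathrm{res}} = h\nu ,
% so measuring V_res and the frequency \nu of the emitted line
% determines h from the known elementary charge e.
```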
The production of photons by electron impact appears as the reverse process to that of electron emission through irradiation by light-, Röntgen-, or gamma-rays and again here, the energy quanta, determined by the quantum of action and by the vibration frequency, play a characteristic role, as could be recognized, already at an early time, from the striking fact that the velocity of the emitted electrons is not determined by the intensity of radiation, but only by the colour of the light incident upon the substance. Also from the quantitative aspect, Einstein’s equations with respect to the light quantum have proved true in every way, as established by R.A. Millikan, in particular, by measurements of the escape velocity of emitted electrons, whilst the significance of the photon for the initiation of photochemical reactions was discovered by E. Warburg.
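Einstein's light-quantum equation referred to here, in its familiar modern form, with W the work required to release the electron from the substance, states that the maximum escape velocity depends on the frequency (the “colour”) of the light alone, not on its intensity:

```latex
\tfrac{1}{2}\, m v_{\max}^{2} = h\nu - W .
```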
If the various experiments and experiences gathered together by me up to now, from the different fields of physics, provide impressive proof in favour of the existence of the quantum of action, the quantum hypothesis has, nevertheless, its greatest support from the establishment and development of the atom theory by Niels Bohr. For it fell to this theory to discover, in the quantum of action, the long-sought key to the entrance gate into the wonderland of spectroscopy, which since the discovery of spectral analysis had obstinately defied all efforts to breach it. And now that the way was opened, a sudden flood of new-won knowledge poured out over the whole field including the neighbouring fields in physics and chemistry. The first brilliant acquisition was the derivation of Balmer’s series formula for hydrogen and helium including the reduction of the universal Rydberg constant to merely known numerical quantities, whereby even the small discrepancies for hydrogen and helium were recognized as essentially determined by the weak motion of the heavy atom nucleus. Investigation then turned to other series in the optical and the Röntgen spectrum using the extremely fruitful Ritz combination principle, which was at last revealed clearly in all its fundamental significance.
Whoever, in view of the numerous agreements which in the case of the special accuracy of spectroscopic measurements could lay claim to particularly striking confirmatory power, might have been still inclined to feel that it was all attributable to the play of chance, would have been forced, finally, to discard even his last doubt when A. Sommerfeld showed that, from a logical extension of the laws of quantum distribution in systems with several degrees of freedom, and from consideration of the variability of the inertial mass in accordance with the relativity theory, that magic formula arose before which both the hydrogen and the helium spectrum had to reveal the riddle of their fine structure, to such an extent that the finest present-day measurements, those of F. Paschen, could be explained generally through it – an achievement fully comparable with that of the famous discovery of the planet Neptune whose existence and orbit were calculated by Leverrier before the human eye had seen it. Progressing further along the same path, P. Epstein succeeded in fully explaining the Stark effect of the electrical splitting up of the spectral lines, P. Debye produced a simple explanation of the K-series of the Röntgen spectrum, which had been investigated by Manne Siegbahn, and now followed a great number of further experiments, which illuminated with more or less success the dark secrets of the construction of the atom.
After all these results, towards whose complete establishment still many reputable names ought essentially to have been mentioned here, there is no other decision left for a critic who does not intend to resist the facts, than to award to the quantum of action, which by each different process in the colourful show of processes, has ever-again yielded the same result, namely, 6.52 × 10⁻²⁷ erg sec, for its magnitude, full citizenship in the system of universal physical constants. It must certainly appear a unique coincidence that just in that time when the ideas of general relativity have broken through, and have led to fantastic results, Nature should have revealed an “absolute” in a place where it could be least expected, an invariable unit, in fact, by means of which the action quantity, contained in a space-time element, can be represented by a completely definite non-arbitrary number, and thereby divested itself of its (until now) relative character.
To be sure, the introduction of the quantum of action has not yet produced a genuine quantum theory. In fact, the path the research worker must yet tread to it is not less than that from the discovery of the velocity of light by Olaf Römer to the establishment of Maxwell’s theory of light. The difficulties which the introduction of the quantum of action into the well-tried classical theory has posed right from the start have already been mentioned by me. During the course of the years they have increased rather than diminished, and if, in the meantime, the impetuous forward-driving research has passed to the order of the day for some of these, temporarily, the gaps left behind, awaiting subsequent filling, react even harder upon the conscientious systematologist. What serves in Bohr’s theory as a basis to build up the laws of action, is assembled out of specific hypotheses which, up to a generation ago, would undoubtedly have been flatly rejected altogether by every physicist. The fact that in the atom, certain quite definite quantum-selected orbits play a special role, might be taken still as acceptable, less easily however, that the electrons, circulating in these orbits with definite acceleration, radiate no energy at all. The fact that the quite sharply defined frequency of an emitted photon should be different from the frequency of the emitting electron must seem to a theoretical physicist, brought up in the classical school, at first sight to be a monstrous and, for the purpose of a mental picture, a practically intolerable demand.
But numbers decide, and the result is that the roles, compared with earlier times, have gradually changed. What initially was a problem of fitting a new and strange element, with more or less gentle pressure, into what was generally regarded as a fixed frame has become a question of coping with an intruder who, after appropriating an assured place, has gone over to the offensive; and today it has become obvious that the old framework must somehow or other be burst asunder. It is merely a question of where and to what degree. If one may make a conjecture about the expected escape from this tight corner, then one could remark that all the signs suggest that the main principles of thermodynamics from the classical theory will not only rule unchallenged but will more probably become correspondingly extended. What the armchair experiments meant for the foundation of classical thermodynamics, the adiabatic hypothesis of P. Ehrenfest means, provisionally, to the quantum theory; and in the same way as R. Clausius, as a starting point for the measurement of entropy, introduced the principle that, when treated appropriately, any two states of a material system can, by a reversible process, undergo a transition from one to the other, now the new ideas of Bohr’s open up a very similar path into the interior of a wonderland hitherto hidden from him.
There is in particular one problem whose exhaustive solution could provide considerable elucidation. What becomes of the energy of a photon after complete emission? Does it spread out in all directions with further propagation in the sense of Huygens’ wave theory, so constantly taking up more space, in boundless progressive attenuation? Or does it fly out like a projectile in one direction in the sense of Newton’s emanation theory? In the first case, the quantum would no longer be in the position to concentrate energy upon a single point in space in such a way as to release an electron from its atomic bond, and in the second case, the main triumph of the Maxwell theory – the continuity between the static and the dynamic fields and, with it, the complete understanding we have enjoyed, until now, of the fully investigated interference phenomena – would have to be sacrificed, both being very unhappy consequences for today’s theoreticians.
Be that as it may, in any case no doubt can arise that science will master the dilemma, serious as it is, and that which appears today so unsatisfactory will in fact eventually, seen from a higher vantage point, be distinguished by its special harmony and simplicity. Until this aim is achieved, the problem of the quantum of action will not cease to inspire research and fructify it, and the greater the difficulties which oppose its solution, the more significant it finally will show itself to be for the broadening and deepening of our whole knowledge in physics.