Earth Density and Cavendish, 1798

Newton knew that the force of gravity causes falling objects near the Earth’s surface (such as the famous apple) to accelerate toward the Earth at a rate of \(9.8\ \text{m/s}^2\). He also knew that the Moon accelerated toward the Earth at a rate of \(0.00272\ \text{m/s}^2\). If it was the same force that was acting in both instances, Newton had to come up with a plausible explanation for the fact that the acceleration of the Moon was so much less than that of the apple. What characteristic of the force of gravity caused the more distant Moon’s rate of acceleration to be a mere 1/3600th of the acceleration of the apple?

It seemed obvious that the force of gravity was weakened by distance. But what was the formula for determining it? An object near the Earth’s surface is approximately 60 times closer to the center of the Earth than the Moon is: it is roughly 6,350 km from the surface to the center of the Earth, and the Moon orbits at a distance of about 384,000 km from the Earth (\(6{,}350 \times 60 \approx 384{,}000\)). The Moon experiences a force of gravity that is 1/3600 that of the apple, and \(3600 = 60^2\), so Newton realized that the force of gravity follows an inverse square law.
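As a quick check with the numbers above, the inverse square law reproduces the Moon’s acceleration from the apple’s:
\[
\frac{a_{\text{Moon}}}{g} = \left(\frac{1}{60}\right)^{2} = \frac{1}{3600}, \qquad \frac{9.8\ \text{m/s}^2}{3600} \approx 0.00272\ \text{m/s}^2 .
\]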

In 1798, by careful experiment, Henry Cavendish succeeded in making an accurate determination of G, the gravitational constant, as \(6.67 \times 10^{-11}\ \text{N·m}^2/\text{kg}^2\). This meant that the mass of the Earth could now be determined. A 1-kg mass at the Earth’s surface is approximately 6.3 Mm from the center of the Earth, and the force acting on it is approximately 10 N. So, by substituting these values into the gravity equation, we find that the mass of the Earth is roughly \(6 \times 10^{24}\ \text{kg}\).
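Working this out from Newton’s law of gravitation, \(F = G M m / r^{2}\), solved for the Earth’s mass with the rounded values above:
\[
M = \frac{F\,r^{2}}{G\,m} \approx \frac{10 \times \left(6.3 \times 10^{6}\right)^{2}}{6.67 \times 10^{-11} \times 1} \approx 6 \times 10^{24}\ \text{kg}.
\]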

See Cavendish, H. (1798). Experiments to Determine the Density of the Earth. Philosophical Transactions of the Royal Society of London.

Rodi and Algebraic Stress Models

Rodi examined nonlinear algebraic stress models by approximating the convective and transport terms of the Reynolds stress tensor with the Reynolds stress normalized by the turbulent kinetic energy, coupled with a transport equation for the turbulent kinetic energy. This approximation eliminates the Reynolds stress transport terms, leaving an algebraic equation from which the Reynolds stress tensor can be determined. In the resulting algebraic stress models, the Reynolds stress normalized by the turbulent kinetic energy multiplies the production and dissipation of turbulent kinetic energy, which introduces a nonlinear relationship with the dissipation tensor and the pressure-strain correlation tensor. These models, referred to as algebraic stress models in the turbulence literature, have been further developed and have shown their capability in predicting secondary flow effects in ducts and other applications. However, the nonlinear equation solvers they require can cause numerical issues when the models are implemented in CFD codes.
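A minimal sketch of the approximation, in the usual notation and with the transport (diffusive) terms of \(\overline{u_i u_j}\) written as \(T_{ij}\) and those of \(k\) as \(T_k\); the exact closure terms vary between references:
\[
\frac{D\,\overline{u_i u_j}}{Dt} - T_{ij} \;\approx\; \frac{\overline{u_i u_j}}{k}\left(\frac{Dk}{Dt} - T_k\right) \;=\; \frac{\overline{u_i u_j}}{k}\,\bigl(P_k - \varepsilon\bigr),
\]
which reduces the Reynolds stress transport equations to implicit algebraic relations of the form \(\frac{\overline{u_i u_j}}{k}\,(P_k - \varepsilon) = P_{ij} + \Pi_{ij} - \varepsilon_{ij}\), nonlinear through the pressure-strain term \(\Pi_{ij}\) and the dissipation tensor \(\varepsilon_{ij}\).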

Creation of Probability

On chance – Ancient civilizations, despite their engagement in games of chance and divinatory practices, did not formalize the underlying principles of probability. The creation of formal probability theory is linked with gambling and divination stretching back to antiquity, yet the mathematical formulation of chance and probability remained elusive until the 17th century, in and around Paris, the City of Light. Sporadic pre-17th-century attempts to quantify chance highlight the absence of a systematic approach and the prevailing attitudes toward destiny and divination. Central to the story is the exchange between Blaise Pascal and Pierre de Fermat, initiated by the enigmatic “problem of points.” This problem, arising from the premature ending of a game of chance, required a division of stakes based on potential game outcomes. The correspondence between Fermat and Pascal produced the solution and also laid the groundwork for a new mathematical discipline. The Pascal-Fermat dialogue (a beautiful and short collection of letters) created a surge of research in probability theory, leading to the contributions of Christiaan Huygens and Jacob Bernoulli and to the establishment of the normal distribution by Abraham de Moivre. While the initial research into probability was predominantly theoretical, the subject grew from dice gambling in France into more general methods for the natural sciences, for example the application in actuarial science for life-expectancy prediction and the eventual integration into statistical analysis.
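As an illustrative instance of the problem of points (not necessarily the exact case discussed in the letters): suppose the stakes go to the first player to win a set number of fair rounds, and play is interrupted with player A needing one more win and player B needing two. At most two more rounds would settle the game, and B takes the stakes only by winning both, with probability \(\tfrac{1}{2}\times\tfrac{1}{2}=\tfrac{1}{4}\); the fair division is therefore three quarters of the stakes to A and one quarter to B.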

Early works and outcomes are collected in Huygens, C. (1657). De Ratiociniis in Ludo Aleae (Considerations on Dice Play); Bernoulli, J. (1713). Ars Conjectandi (The Art of Conjecturing); de Moivre, A. (1738). The Doctrine of Chances; and Pascal, B., & Fermat, P., Correspondence on the Problem of Points.

Saffman \(k-\omega^2\)

Saffman’s \(k-\omega^2\) turbulence model is an early member of the family of two-equation models that has been dedicated to turbulence research since the time of Kolmogorov in the 1940s.

The basis of Saffman’s model is the description of a statistically steady or ‘slowly varying’ inhomogeneous turbulence field alongside the mean velocity distribution. The model postulates that turbulence can be described by ‘densities’ adhering to nonlinear diffusion equations. These equations account for a spectrum of phenomena, including convection by the mean flow, amplification due to interaction with a mean velocity gradient, dissipation from turbulence interaction, and diffusion by self-interaction.

Central to the Saffman model are two key equations: the energy equation and the \(\omega^2\) equation. The energy equation contains terms for the amplification of energy owing to the mean velocity gradient and dissipation attributable to vorticity, coupled with a diffusion term governed by an eddy viscosity. This eddy viscosity also facilitates the diffusion of mean momentum by turbulent fluctuations. The \(\omega^2\) equation, which governs the evolution of the vorticity density within the turbulent field, is the model’s defining feature, setting the Saffman model apart by explicitly considering the behavior of the vorticity density. This quantity is slightly different from the specific dissipation rate \(\omega\).
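A schematic of the structure described above (a paraphrase assuming a generic gradient-diffusion form, not Saffman’s exact equations or coefficients): with energy density \(e\), vorticity density \(\omega^2\), and eddy viscosity \(\nu_t \sim e/\omega\),
\[
\frac{De}{Dt} \;=\; \alpha\,e\,|S| \;-\; \beta\,e\,\omega \;+\; \nabla\cdot\!\left(\sigma_e\,\frac{e}{\omega}\,\nabla e\right),
\]
with an analogous production, destruction, and diffusion balance for \(\omega^2\). Here \(|S|\) stands for a measure of the mean velocity gradient, and \(\alpha\), \(\beta\), \(\sigma_e\) are model constants.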

Saffman’s model is further demonstrated through analytical and numerical solutions for a variety of flow scenarios, including Couette flow, plane Poiseuille flow, and free turbulent flows. The model can predict phenomena such as the von Kármán constant in the law of the wall via estimates of the dimensionless constants within the equations, which is remarkable given the early-1970s publication date. The Saffman \(k-\omega^2\) model is historically important for these reasons in the early development of two-equation \(k-\omega\) models.

Baldwin Barth One-Equation Model Reviewed

During the present semester, I reexamined the Baldwin-Barth one-equation turbulence model. This model constitutes a reformulation of the $k$ and $\epsilon$ equations, culminating in a single partial differential equation for the turbulent eddy viscosity, denoted as $\nu_t$, multiplied by the turbulent Reynolds number, $Re_t$. The model’s closure for the Reynolds-averaged Navier-Stokes (RANS) equations was a major advancement in turbulence modeling, laying the groundwork for the renowned Spalart-Allmaras model, which was also influenced by Soviet research.
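In rough terms (a sketch of the quantities involved, not the full set of damping functions and coefficients): the turbulent Reynolds number is built from \(k\) and \(\epsilon\), and the eddy viscosity is recovered from the transported quantity through near-wall damping,
\[
R_T = \frac{k^{2}}{\nu\,\epsilon}, \qquad \nu_t \;\propto\; \nu\,R_T \times (\text{near-wall damping functions}).
\]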

A notable aspect of this model is the intuitive appeal of the turbulent Reynolds number for engineers, coupled with its measurability, which simplifies the specification of inlet boundary conditions. Despite its innovative closure approach, the model’s widespread adoption was hindered by several limiting factors. Nevertheless, it served as a foundational framework for subsequent research efforts, many of which remain highly relevant in contemporary applications.

Reflections on Spalart-Allmaras Turbulence Model, 2024

The Spalart-Allmaras turbulence model, a one-equation turbulence model, was a response to the inadequacies observed in zero-equation models, particularly their lack of predictive accuracy in complex flow scenarios such as wakes, shear layers, and shock wave boundary layer interactions.

The creation of the Spalart-Allmaras model was influenced by multiple prior works, including the Baldwin-Barth model and insights gained from the two-equation models developed up to that time. Notably, the works of Soviet mathematicians played a pivotal role. Their contributions, though primarily published in Russian, provided a foundation for the development of the model by Spalart and Allmaras.

Central to the Spalart-Allmaras model is the equation for tracking eddy viscosity, which features a production term derived from vorticity magnitude, signifying the generation of turbulence. This is complemented by a diffusion term and a destruction term, tailored to account for the diffusion and dissipation of turbulence, respectively. The model underwent significant refinement to ensure its applicability to fully turbulent flows, highlighting the mathematical craftsmanship behind its formulation.
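For reference, the commonly quoted form of the transport equation (as documented on the NASA turbulence-modeling resource cited below), omitting the trip/transition terms, is
\[
\frac{D\tilde{\nu}}{Dt} \;=\; c_{b1}\,\tilde{S}\,\tilde{\nu} \;-\; c_{w1}\,f_{w}\left(\frac{\tilde{\nu}}{d}\right)^{2} \;+\; \frac{1}{\sigma}\Bigl[\nabla\cdot\bigl((\nu+\tilde{\nu})\nabla\tilde{\nu}\bigr) + c_{b2}\,|\nabla\tilde{\nu}|^{2}\Bigr],
\]
where \(\tilde{S}\) is a modified vorticity magnitude, \(d\) is the distance to the nearest wall, and the eddy viscosity follows as \(\nu_t = \tilde{\nu}\,f_{v1}\).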

A distinctive feature of the model is its treatment of transitional flows through a source-like term, enabling the prediction of transition behaviors based on user inputs. This aspect, however, introduces a dependency on the distance to the wall parameter, which some later models have sought to mitigate.

The Spalart-Allmaras model’s appeal lies in its locality and independence from structured grid requirements, making it a versatile tool for various flow conditions, particularly in unstructured grid environments. Its simplicity, robustness, and ease of implementation are evident in the model’s formulation presented in the paper’s appendix, making it accessible to a wider audience within the computational fluid dynamics community. This is one of the reasons it is so successful.

Despite its empirical foundations and term-by-term development approach (with terms inspired by previous models and knowledge of experiments), the model bears similarities to the $Re_t$ one-equation model (also from NASA Ames). While it may not rival models like the k-epsilon in terms of derivation from first principles, the Spalart-Allmaras model’s pragmatic approach and calibrations have set it apart in the field as one of the most popular models internationally. The model’s development was significantly aided by access to experimental data and collaborations with NASA’s high-performance computing research group at NASA Ames, among others.

See https://turbmodels.larc.nasa.gov/spalart.html

AIAA Journal – Fully Parabolized Hypersonic Sonic Boom Prediction with Real Gas and Viscous Effects

https://doi.org/10.2514/1.J063425

Abstract: We present a methodology to predict the aerodynamic near-field and sonic boom signature from slender bodies and waveriders using a fully parabolized approach. We solve the parabolized Navier–Stokes equations, which are integrated via spatial marching in the streamwise direction. We find that unique physics must be accounted for in the hypersonic regime relative to the supersonic, which includes viscous, nonequilibrium, and real gas effects. The near-field aerodynamic pressure is propagated through the atmosphere to the ground via the waveform parameter method. To illustrate the approach, three bodies are analyzed: the Sears–Haack geometry, the HIFiRE-5, and a power-law waverider. Ambient Mach numbers range from 4 through 15. The viscous stress tensor is essential for accurate hypersonic prediction. For example, viscous effects increase near-field and sonic boom overpressure by 15.7 and 8.49%, respectively, for the Sears–Haack geometry. The difference between viscous and inviscid predictions of the near-field is due to the hypersonic boundary layer. The computational cost for predicting the near-field is approximately 6.6% relative to fully nonlinear computational fluid dynamics.

Pendulum, Time, and Stokes

In 1582, an observation by Galileo Galilei at the Pisa Cathedral marked an important moment in the understanding of oscillatory motion. Galileo, noting the constant period of a swinging lamp despite diminishing amplitude, laid the foundation for the study of pendulums. This led to his discovery that a pendulum’s oscillation period is directly proportional to the square root of its length, $T \propto \sqrt{l}$, independent of the mass – a principle termed isochronism.
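In modern notation the small-amplitude relation is \(T = 2\pi\sqrt{l/g}\). For example, a pendulum of length \(l = 1\ \text{m}\) with \(g \approx 9.8\ \text{m/s}^2\) has \(T = 2\pi\sqrt{1/9.8} \approx 2.0\ \text{s}\), and quadrupling the length doubles the period.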

Galileo’s insights into pendulum motion were not only profound, but also practical. Although he conceptualized a pendulum clock, it was Christiaan Huygens in 1656 who realized this vision, significantly enhancing timekeeping accuracy. Prior mechanical clocks, reliant on controlled descent of weights, suffered from substantial time deviations. Huygens’ integration of a pendulum to govern the escapement mechanism allowed for far more precise time measurement, with the pendulum’s period adjustable to exactly one second by altering the mass’s position.

Huygens’ relentless pursuit of perfection led him to address the pendulum’s inherent inaccuracy due to its circular swing arc. By designing a pendulum that followed a cycloidal path, he sought to achieve true isochronism, irrespective of the amplitude. This innovation allowed for larger swing angles, essential for the mechanical operation of clocks, marking a significant leap in timekeeping precision.

Galileo’s curiosity and studies by Huygens not only advanced our understanding of harmonic motion, but also revolutionized the way we measure time, culminating in the creation of the first accurate mechanical clocks. This narrative underscores the profound impact of observational curiosity and rigorous scientific inquiry on technological advancement.

This whole methodology was further improved by Sir G. G. Stokes, who formulated more accurate pendulum predictions via his work on the viscous stress tensor and solutions for flow around spherical bodies. This enabled more accurate drag calculations for the spherical bob at the bottom of the pendulum rod.
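The drag result in question is presumably Stokes’ law for slow (creeping) flow past a sphere,
\[
F_d = 6\pi\,\mu\,R\,v,
\]
where \(\mu\) is the fluid’s dynamic viscosity, \(R\) the sphere’s radius, and \(v\) its speed, which gives the viscous damping force on a slowly moving spherical bob.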

Origins of Complex Numbers

The creation of complex numbers is found in the exploration of square roots of negative numbers, a notion that seemed incongruous with Euclid’s axioms and the then-accepted rules governing integers. The problem presented by the square root of negative numbers spurred a significant shift in thinking, leading to the conceptualization and acceptance of “imaginary” numbers, as termed by René Descartes.

The Italian mathematician Girolamo Cardano, in the 16th century, was among the first to acknowledge that while the square root of a negative number may not reside within the realm of real numbers, it could indeed possess an “imaginary” essence. This realization paved the way for Rafael Bombelli, who meticulously outlined how equations could be solved using complex numbers, thereby introducing a “real” component alongside an “imaginary” component based on the unit i (a notation later introduced by Leonhard Euler).

Complex numbers are expressed as a combination of these two components, for instance, 3+2i. The introduction of the complex plane by Jean-Robert Argand enriched the understanding of complex numbers, offering a graphical representation that plots real and imaginary components on perpendicular axes. This innovative approach demystified complex numbers and also laid the groundwork for advanced mathematical constructs like quaternions, introduced by William Rowan Hamilton, which extend complex numbers into a four-dimensional space.
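To make the graphical picture concrete: the number \(3+2i\) sits at the point \((3,\,2)\) in the Argand plane, at a distance \(|3+2i| = \sqrt{3^{2}+2^{2}} = \sqrt{13}\) from the origin.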