Baldwin-Barth One-Equation Model Reviewed

During the present semester, I reexamined the Baldwin-Barth one-equation turbulence model. This model constitutes a reformulation of the $k$ and $\epsilon$ equations, culminating in a single partial differential equation for the turbulent eddy viscosity, $\nu_t$, multiplied by the turbulent Reynolds number, $Re_t$. The model’s closure for the Reynolds-averaged Navier-Stokes (RANS) equations was a major advancement in turbulence modeling, laying the groundwork for the renowned Spalart-Allmaras model, though the SA model was also influenced by Soviet research.

A notable aspect of this model is the intuitive appeal of the turbulent Reynolds number for engineers, coupled with its measurability, which simplifies the specification of boundary conditions at inlet boundaries. Despite its innovative closure approach, the model’s widespread adoption was hindered by several limiting factors. Nevertheless, it served as a foundational framework for subsequent research efforts, many of which remain highly relevant in contemporary applications.

Reflections on the Spalart-Allmaras Turbulence Model, 2024

The Spalart-Allmaras turbulence model, a one-equation turbulence model, was a response to the inadequacies observed in zero-equation models, particularly their lack of predictive accuracy in complex flow scenarios such as wakes, shear layers, and shock-wave/boundary-layer interactions.

The creation of the Spalart-Allmaras model was influenced by multiple prior works, including the Baldwin-Barth model and insights gained from the two-equation models of the time. Notably, the works of Soviet mathematicians played a pivotal role: their contributions, though primarily published in Russian, provided a foundation for the development of the model by Spalart and Allmaras.

Central to the Spalart-Allmaras model is the equation for tracking eddy viscosity, which features a production term derived from vorticity magnitude, signifying the generation of turbulence. This is complemented by a diffusion term and a destruction term, tailored to account for the diffusion and dissipation of turbulence, respectively. The model underwent significant refinement to ensure its applicability to fully turbulent flows, highlighting the mathematical craftsmanship behind its formulation.
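For reference, a commonly quoted form of the SA transport equation for the working variable $\tilde{\nu}$ (fully turbulent, with the trip terms omitted; see the NASA TMR page linked below for the exact constants and damping functions) is:

$$
\frac{D\tilde{\nu}}{Dt} = \underbrace{c_{b1}\,\tilde{S}\,\tilde{\nu}}_{\text{production}}
\;-\; \underbrace{c_{w1} f_w \left(\frac{\tilde{\nu}}{d}\right)^{2}}_{\text{destruction}}
\;+\; \underbrace{\frac{1}{\sigma}\left[\nabla \cdot \big((\nu + \tilde{\nu})\,\nabla\tilde{\nu}\big) + c_{b2}\,\lvert\nabla\tilde{\nu}\rvert^{2}\right]}_{\text{diffusion}},
\qquad \nu_t = \tilde{\nu}\, f_{v1}
$$

Here $\tilde{S}$ is a modified vorticity magnitude and $d$ is the distance to the nearest wall, which is the wall-distance dependency discussed below.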

A distinctive feature of the model is its treatment of transitional flows through a source-like term, enabling the prediction of transition behaviors based on user inputs. This aspect, however, introduces a dependency on the wall-distance parameter, which some later models have sought to mitigate.

The Spalart-Allmaras model’s appeal lies in its locality and independence from structured grid requirements, making it a versatile tool for various flow conditions, particularly in unstructured grid environments. Its simplicity, robustness, and ease of implementation are evident in the model’s formulation presented in the paper’s appendix, making it accessible to a wider audience within the computational fluid dynamics community. This is one of the reasons it is so successful.

Despite its empirical foundations and its term-by-term development approach (with terms inspired by previous models and by knowledge of experiments), the model bears similarities to the $Re_t$ one-equation model (also from NASA Ames). While it may not rival models such as k-epsilon in terms of derivation from first principles, the Spalart-Allmaras model’s pragmatic approach and careful calibrations have set it apart as one of the most popular models internationally. The model’s development was significantly aided by access to experimental data and by collaborations with NASA’s high-performance computing research group at NASA Ames, among others.

See https://turbmodels.larc.nasa.gov/spalart.html

AIAA Journal – Fully Parabolized Hypersonic Sonic Boom Prediction with Real Gas and Viscous Effects

https://doi.org/10.2514/1.J063425

Abstract: We present a methodology to predict the aerodynamic near-field and sonic boom signature from slender bodies and waveriders using a fully parabolized approach. We solve the parabolized Navier–Stokes equations, which are integrated via spatial marching in the streamwise direction. We find that unique physics must be accounted for in the hypersonic regime relative to the supersonic, which includes viscous, nonequilibrium, and real gas effects. The near-field aerodynamic pressure is propagated through the atmosphere to the ground via the waveform parameter method. To illustrate the approach, three bodies are analyzed: the Sears–Haack geometry, the HIFiRE-5, and a power-law waverider. Ambient Mach numbers range from 4 through 15. The viscous stress tensor is essential for accurate hypersonic prediction. For example, viscous effects increase near-field and sonic boom overpressure by 15.7 and 8.49%, respectively, for the Sears–Haack geometry. The difference between viscous and inviscid predictions of the near-field is due to the hypersonic boundary layer. The computational cost for predicting the near-field is approximately 6.6% relative to fully nonlinear computational fluid dynamics.

Pendulum, Time, and Stokes

In 1582, an observation by Galileo Galilei at the Pisa Cathedral marked an important moment in the understanding of oscillatory motion. Galileo, noting the nearly constant period of a swinging lamp despite diminishing amplitude, laid the foundation for the study of pendulums. This led to his discovery that a pendulum’s oscillation period is proportional to the square root of its length, $T \propto \sqrt{l}$, independent of the mass – a principle termed isochronism.

Galileo’s insights into pendulum motion were not only profound, but also practical. Although he conceptualized a pendulum clock, it was Christiaan Huygens in 1656 who realized this vision, significantly enhancing timekeeping accuracy. Prior mechanical clocks, reliant on controlled descent of weights, suffered from substantial time deviations. Huygens’ integration of a pendulum to govern the escapement mechanism allowed for far more precise time measurement, with the pendulum’s period adjustable to exactly one second by altering the mass’s position.
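The small-angle relation underlying these adjustments is $T = 2\pi\sqrt{l/g}$; a minimal sketch in Python (assuming standard gravity $g = 9.81\ \mathrm{m/s^2}$; the function names are mine, for illustration):

```python
import math

G = 9.81  # standard gravity, m/s^2 (assumed value)

def small_angle_period(length_m: float) -> float:
    """Small-angle pendulum period: T = 2*pi*sqrt(l/g)."""
    return 2.0 * math.pi * math.sqrt(length_m / G)

def length_for_period(period_s: float) -> float:
    """Invert T = 2*pi*sqrt(l/g): the length that gives a target period."""
    return G * (period_s / (2.0 * math.pi)) ** 2

# A pendulum about 0.994 m long has a 2 s period (1 s per half-swing),
# the classic "seconds pendulum" used to regulate clocks.
seconds_pendulum_length = length_for_period(2.0)
```

Note that doubling the length increases the period only by a factor of $\sqrt{2}$, exactly the square-root dependence Galileo observed.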

Huygens’ relentless pursuit of perfection led him to address the pendulum’s inherent inaccuracy due to its circular swing arc. By designing a pendulum that followed a cycloidal path, he sought to achieve true isochronism, irrespective of the amplitude. This innovation allowed for larger swing angles, essential for the mechanical operation of clocks, marking a significant leap in timekeeping precision.

Galileo’s curiosity and studies by Huygens not only advanced our understanding of harmonic motion, but also revolutionized the way we measure time, culminating in the creation of the first accurate mechanical clocks. This narrative underscores the profound impact of observational curiosity and rigorous scientific inquiry on technological advancement.

This whole methodology was further improved by Sir G. G. Stokes, who formulated more accurate pendulum predictions via his viscous stress tensor and his solutions for flow around spherical bodies. This enabled more accurate drag calculations for the spherical mass at the bottom of the pendulum rod.
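The result referred to here is Stokes’ drag law for a sphere moving slowly through a viscous fluid, $F = 6\pi\mu R v$, valid only at low Reynolds number. A small illustration (the fluid properties below are assumed example values):

```python
import math

def stokes_drag(mu_pa_s: float, radius_m: float, speed_m_s: float) -> float:
    """Stokes' law: drag force F = 6*pi*mu*R*v on a slowly moving sphere."""
    return 6.0 * math.pi * mu_pa_s * radius_m * speed_m_s

# Example values (assumed): a 2 cm radius bob swinging at 0.5 m/s through air
MU_AIR = 1.8e-5  # approximate dynamic viscosity of air, Pa*s
drag_n = stokes_drag(MU_AIR, 0.02, 0.5)  # on the order of a few micronewtons
```

The force is linear in speed, which is what makes the resulting damped-pendulum equation analytically tractable.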

Origins of Complex Numbers

The origin of complex numbers lies in the exploration of square roots of negative numbers, a notion that seemed incongruous within Euclid’s axioms and the then-accepted rules governing integers. The problem presented by the square root of negative numbers spurred a significant shift in thinking, leading to the conceptualization and acceptance of “imaginary” numbers, as termed by René Descartes.

The Italian mathematician Girolamo Cardano, in the 16th century, was among the first to acknowledge that while the square root of a negative number may not reside within the realm of real numbers, it could indeed possess an “imaginary” essence. This realization paved the way for Rafael Bombelli, who meticulously outlined how equations could be solved using complex numbers, thereby introducing a “real” component alongside an “imaginary” component based on the unit $i$ (the notation later introduced by Leonhard Euler).

Complex numbers are expressed as a combination of these two components, for instance, 3+2i. The introduction of the complex plane by Jean-Robert Argand enriched the understanding of complex numbers, offering a graphical representation that plots real and imaginary components on perpendicular axes. This innovative approach demystified complex numbers and also laid the groundwork for advanced mathematical constructs like quaternions, introduced by William Rowan Hamilton, which extend complex numbers into a four-dimensional space.
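The Argand-plane picture maps directly onto the built-in complex type of a language like Python; a brief illustration using the example $3+2i$ from above:

```python
import cmath

z = 3 + 2j                 # the example 3 + 2i; Python spells i as j
x, y = z.real, z.imag      # Argand-plane coordinates: (3.0, 2.0)
r = abs(z)                 # modulus: distance from the origin, sqrt(13)
theta = cmath.phase(z)     # argument: angle from the positive real axis

# Multiplying by i rotates a point 90 degrees counterclockwise
rotated = z * 1j           # (3 + 2i) * i = -2 + 3i
```

The rotation-by-$i$ behavior is exactly what Argand’s graphical view makes intuitive, and it is the property Hamilton generalized when constructing quaternions.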

Linear to Nonlinear Relations in Wave Science (Acoustics)

In the realm of acoustics or wave science, the transition from linear to nonlinear physics marks a significant evolution in the understanding of tones and their generation. The foundation of this understanding dates back to Pythagoras, who established a linear relationship between the length of a plucked string and the resultant musical tone. This principle posited that the pitch produced by a string can be modulated linearly by altering its length.

A shift occurred in the 1580s with Vincenzo Galilei, the father of the renowned Galileo, challenging the prevailing linear thinking. Galilei’s experiments revealed a more complex, nonlinear relationship in the production of musical tones, particularly when it came to varying the tension of a string. Contrary to the linear assumption that increasing tension produced higher pitches in a directly proportional manner, Galilei discovered that the pitch interval was related to the square of the string’s tension, \(T^2\). This finding showed a nonlinear relationship in the generation of acoustic tones, extending beyond strings to wind instruments, where the pitch interval varied as the cube of the vibrating air volume, \(V^3\).

The implications of Galilei’s discovery were profound, demonstrating that an interval of a perfect fifth could be achieved through multiple nonlinear pathways: strings differing in length by a ratio of 3:2, in tension by a factor of 9:4, or wind instrument air volumes by a ratio of 27:8. This nonlinear understanding fundamentally altered the approach to musical acoustics, opening the way for exploration of the intricate relationships that govern the generation of musical tones. This led to the study of nonlinear systems.
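The three pathways to a perfect fifth can be checked numerically from the proportionalities described above (interval $\propto$ length ratio, $\propto$ square root of the tension ratio, $\propto$ cube root of the volume ratio); a quick sketch:

```python
FIFTH = 3.0 / 2.0  # frequency ratio of a perfect fifth

# Pythagoras: string lengths in ratio 3:2 give the interval directly (linear)
from_length = 3.0 / 2.0

# Galilei: tensions in ratio 9:4 -> the interval goes as the square root
from_tension = (9.0 / 4.0) ** 0.5

# Wind instruments: air volumes in ratio 27:8 -> interval goes as the cube root
from_volume = (27.0 / 8.0) ** (1.0 / 3.0)
```

All three expressions evaluate to 3/2, which is the point of Galilei’s observation: the same musical interval hides three different power laws.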

Brian Spalding

One last poem by turbulence / numerics researcher Prof. Brian Spalding

I shall have no regrets when I am dead.

Of deadlines none will matter but my own.

Unwritten papers? Hopelessly misled.

Inheritors? All claimants I’ll disown.

Yet hope, while still alive, there’ll be but few

Who think: I was a fool to trust him.

Now that he’s gone, what am I going to do?

None I would hope; but guess the chance is slim.

Yet, in that soon-to-close window of time,

There’s much I want to do; and think I can.

Always too optimistic is what I’m

Dismissed as. To disprove it is my plan.

‘After such labours,’ I would have it said,

‘It must be truly blissful to be dead.’

Brian Spalding

Returning to Ludwig Prandtl’s One-Equation Model

In my turbulence class this semester, I recently reviewed Prandtl’s one-equation model, which was developed more than 20 years after his boundary-layer theory of the early 1900s. The major paper by Ludwig Prandtl was published in the 1940s. He presented the first one-equation turbulence model for the closure of the boundary layer equations, specifically for incompressible flow. He calibrated the coefficients via carefully conducted channel measurements, so the model, with those calibrated coefficients, reproduced his experiments closely.

Many later investigators, through the early 1990s and including a recent 2023 paper published by the Royal Society, reexamined the model and its analysis. Many have programmed this model in computational fluid dynamics codes, and it is the basis of many one-equation models, especially those built on a transport equation for $k$, the turbulent kinetic energy. However, it is generally shown that the model is not predictive for many flows without modification.

Some have misattributed Prandtl’s model as a reduction of Andrei Kolmogorov’s two-equation model of 1942. Prandtl was unlikely to know about the Soviet invention during World War II, so the one-equation model can be truly attributed to him. We should credit Ludwig Prandtl with the invention of the world’s first one-equation turbulence model; though it may be flawed, that is largely because of the lack of a contemporary digital computer.
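In modern notation, Prandtl’s one-equation model is usually written as a transport equation for the turbulent kinetic energy $k$ with a prescribed length scale $l$ (the form below follows common textbook presentations; exact coefficients vary by source):

$$
\nu_t = C_\mu\, k^{1/2}\, l, \qquad
\frac{\partial k}{\partial t} + U_j \frac{\partial k}{\partial x_j}
= \nu_t \left(\frac{\partial U}{\partial y}\right)^{2}
- C_D \frac{k^{3/2}}{l}
+ \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right]
$$

The need to prescribe $l$ algebraically is precisely why the model is not predictive for general flows without modification.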

Geometry and Art

The Renaissance, a period of significant intellectual, artistic, and cultural rebirth, marked the union of art and science, especially through the application of geometric principles in artistic representation. This era witnessed the pioneering development of linear perspective, a technique that revolutionized the way depth and three-dimensional objects were portrayed on two-dimensional surfaces. The mathematical foundation for perspective, in which parallel lines appear to converge at a distant vanishing point, was established in Italy during the 15th century and was instrumental in creating more lifelike and spatially coherent artworks.

Leonardo da Vinci, emblematic of the Renaissance “universal genius,” exemplified the integration of scientific inquiry with artistic mastery. His work, along with that of other notable figures such as Paolo Uccello and Piero della Francesca, underscores the period’s drive towards a deeper understanding of the natural world and its representation through art. This pursuit was not confined to Italy but spread across Europe, influencing a wide range of artistic expressions and leading to a distinct new art form in mid-16th-century Germany, characterized by polyhedral-based geometrical designs.

The Renaissance was not only a time of artistic flourishing, but also a critical juncture in the history of science, with the synthesis of mathematics, geometry, and art propelling forward the modern scientific worldview. The artist-engineers of the Renaissance, with their detailed studies of nature and commitment to empirical observation, laid the groundwork for subsequent developments in science and engineering. Their legacy is a testament to the enduring power of interdisciplinary inquiry and the intrinsic relationship between art and mathematics.

Additional Thoughts on the Half-Equation Model of Johnson and King

The Johnson-King turbulence model represented a significant advancement in the understanding and modeling of turbulent flows. Introduced amidst the exploration of one- and two-equation models, the Johnson-King model distinguished itself through the innovative concept of a half-equation model, emphasizing the critical role of memory in turbulence phenomena.

The early stages of turbulence model development were characterized by efforts to create predictive models, a task complicated by the limitations of computational power and the scarcity of high-quality experimental data. Unlike its contemporaries, the Johnson-King model introduced a new approach by tracking the degree of non-equilibrium in the flow through a single ordinary differential equation (ODE). This strategy, which led to the designation “half-equation model,” did not rely on a new closure involving a partial differential equation but instead used an ODE to describe the evolution of the turbulence.

A key strength of the Johnson-King model lies in its predictive capabilities, especially when compared to previous models such as those based on the work of Cebeci and Smith. The model’s inclusion of turbulence history and its consideration of non-equilibrium effects allowed for a more accurate depiction of the amplification or dissipation of turbulent kinetic energy, particularly in boundary-layer separation due to shock waves. This focus on history and memory effects in turbulent flows marked a significant departure from equilibrium-based models and contributed to a deeper understanding of turbulence dynamics. Furthermore, it moved beyond the purely local, algebraic eddy-viscosity concepts of earlier models.

The practical implications of the Johnson-King model were underscored by its performance on VAX computer systems (the most popular DEC systems of the day), where it demonstrated superior efficiency compared to one- or two-equation models. The model’s ODE was solved along a line in the axial or streamwise direction through the flow, at the points of maximum Reynolds stress. This streamlined computational approach not only enhanced the model’s speed and accuracy but also laid the groundwork for future developments in the field.

The Johnson-King turbulence model, with its emphasis on memory and history effects, has played a pivotal role in the evolution of turbulence modeling. By introducing the half-equation concept and highlighting the importance of non-equilibrium effects, this model contributed to a more nuanced understanding of turbulent flows and their underlying mechanisms in shock-induced separation. The legacy of the Johnson-King model continues to influence contemporary turbulence research, underscoring the lasting impact of Johnson and King’s innovative approach.