Generated via an advancing front / Delaunay algorithm: a surface grid, marched in the normal direction along lines of maximum gradient. CPU time \(1.5 \times 10^{-1}\) s on a Mac M1.

## Revisiting \(e\) History

\(e\), defined by the limit \(\lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n\) and approximately equal to 2.71828, is both irrational and transcendental: it cannot be written as a quotient of two integers, nor as a root of any non-trivial polynomial equation with rational coefficients.

John Napier, a Scottish mathematician and theologian, significantly contributed to mathematics through his invention of logarithms, detailed in his seminal work *Mirifici Logarithmorum Canonis Descriptio* (1614). This development transformed cumbersome multiplicative calculations into simpler additive operations. Napier’s logarithms laid the groundwork for the concept of the natural logarithm, where \(e\) emerged as the natural base: the number whose logarithm equals unity.

Roughly one hundred years later, the Swiss mathematician Jacob Bernoulli explored the concept of compound interest, leading to the discovery of \(e\)’s significance in financial mathematics. His investigations, detailed in his posthumously published *Ars Conjectandi* (1713), demonstrated that continuous compounding yields a growth factor of \(e\), showing \(e\)’s utility not only in financial models but also in natural processes such as population dynamics and radioactive decay.
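As a quick numerical illustration (my own sketch, not from Bernoulli), the compound-interest limit can be checked directly: one unit of principal at 100% annual interest, compounded \(n\) times per year, grows by the factor \((1 + 1/n)^n\), which approaches \(e\) as \(n\) grows.

```python
import math

def growth_factor(n: int) -> float:
    """Growth of one unit of principal at 100% annual interest,
    compounded n times per year."""
    return (1 + 1 / n) ** n

print(growth_factor(1))      # 2.0   (annual compounding)
print(growth_factor(12))     # ≈ 2.613 (monthly compounding)
print(growth_factor(10**6))  # ≈ 2.71828, approaching e
print(math.e)                # 2.718281828459045
```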

References:

- Napier, J. (1614). *Mirifici Logarithmorum Canonis Descriptio*.
- Bernoulli, J. (1713). *Ars Conjectandi, opus posthumum. Accedit Tractatus de seriebus infinitis, et epistola gallicé scripta de ludo pilae reticularis*.

## Removal of Tenure at the University of Florida

A university cannot exist without academic freedom. Academic freedom is the core value under which American universities have operated for nearly a century. It is outlined in multiple works, including the Chicago Principles and statements by the American Association of University Professors. Academic freedom is protected by the tenure system. Tenure, which was popularized in America, is the reason the country today has so many excellent universities. Without tenure, a university is one in name only.

Less than two years ago, I wrote that I had earned and been awarded tenure at the University of Florida (see https://saemiller.com/2022/07/01/associate-professor-and-tenure/). When I came home last night, I realized that tenure no longer exists at the University of Florida, due to the new policies being put in place by administrators across the university.

Though I have tenure at the University of Florida, I believe it is tenure in name only. It is a facade covering what was once an Ivory Tower and has become an Iron Tower.


## Binary’s Origin

Binary numbers were originally used for encryption and communication, a use recognized as early as the 17th century by Francis Bacon. Bacon used a binary system to encode the alphabet as strings of two characters. This laid the framework for subsequent developments in coded communication, such as the telegraph (Samuel Morse), which relied on binary tones of ‘dots’ and ‘dashes.’ Binary’s mathematical formulation was created by Gottfried Wilhelm Leibniz, who recognized the system’s elegance. Leibniz’s research created a formal basis for binary arithmetic, outlining methods for converting between binary, decimal, and other number systems.
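As an illustration (a modern sketch using the 26-letter alphabet, not Bacon’s original 24-letter one), Bacon’s biliteral scheme maps each letter to a five-symbol string over a two-character alphabet, here ‘a’ and ‘b’:

```python
def bacon_encode(text: str) -> str:
    """Encode letters as 5-symbol 'a'/'b' groups, in the spirit of
    Bacon's biliteral cipher. Non-letter characters are skipped."""
    groups = []
    for ch in text.upper():
        if ch.isalpha():
            bits = format(ord(ch) - ord("A"), "05b")  # letter index as 5 bits
            groups.append(bits.replace("0", "a").replace("1", "b"))
    return " ".join(groups)

print(bacon_encode("Bacon"))  # aaaab aaaaa aaaba abbba abbab
```

Five binary symbols suffice because \(2^5 = 32 \geq 26\), the same counting argument that underlies modern fixed-width character codes.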

References:

- Bacon, F. (1605). *The Advancement of Learning*. Book VI. London: Henrie Tomes.
- Leibniz, G.W. (1703). *Explication de l’Arithmétique Binaire*. Mémoires de l’Académie Royale des Sciences, pp. 85-89. Paris.

## In Florida by Professor Hofmann

Recently, one of the distinguished professors of English at the University of Florida published a perspective on being in Florida in a time of academic turmoil. I wish I had Professor Hofmann’s command of the English language to express myself. Please see:

https://www.lrb.co.uk/blog/2024/april/in-florida

## Earth Density and Cavendish, 1798

Newton knew that the force of gravity causes falling objects near the Earth’s surface (such as the famous apple) to accelerate toward the Earth at a rate of \(9.8\ \text{m/s}^2\). He also knew that the Moon accelerates toward the Earth at a rate of \(0.00272\ \text{m/s}^2\). If the same force was acting in both instances, Newton had to come up with a plausible explanation for the fact that the acceleration of the Moon was so much less than that of the apple. What characteristic of the force of gravity caused the more distant Moon’s rate of acceleration to be a mere 1/3600th of the acceleration of the apple?

It seemed obvious that the force of gravity was weakened by distance. But what was the formula for determining it? An object near the Earth’s surface is approximately 60 times closer to the center of the Earth than the Moon is: it is roughly 6,350 km from the surface to the center of the Earth, and the Moon orbits at a distance of about 384,000 km (\(6{,}350 \times 60 \approx 384{,}000\)). The Moon experiences a force of gravity that is 1/3600 that of the apple, and \(60^2 = 3600\). Newton realized that the force of gravity follows an inverse square law.
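The inverse-square prediction can be checked numerically with the figures above (a quick sketch using the rounded values from the text):

```python
g_surface = 9.8             # m/s^2, acceleration of the apple at the surface
ratio = 384_000 / 6_350     # Moon distance / Earth radius ≈ 60

# Inverse square law: acceleration falls off as 1/r^2.
a_moon = g_surface / ratio ** 2
print(a_moon)  # ≈ 0.0027 m/s^2, matching the Moon's measured acceleration
```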

In 1798, by careful experiment, Henry Cavendish succeeded in making an accurate determination of *G*, the gravitational constant, as \(6.67 \times 10^{-11}\ \text{N·m}^2/\text{kg}^2\). This meant that the mass of the Earth could now be determined. A 1-kg mass at the Earth’s surface is approximately 6.3 Mm from the center of the Earth, and the force acting on it is approximately 10 N. Substituting these values into the gravity equation \(F = GMm/r^2\), we find that the mass of the Earth is roughly \(6 \times 10^{24}\,\text{kg}\).

See Cavendish, H. (1798). *Experiments to Determine the Density of the Earth*. Philosophical Transactions of the Royal Society of London.

## Rodi and Algebraic Stress Models

Rodi examined nonlinear algebraic stress models by approximating the convective transport terms of the Reynolds stress tensor: the Reynolds stress, normalized by the turbulent kinetic energy, is assumed to be transported in proportion to the kinetic energy itself, which is governed by its own transport equation. This simplification reduces the differential Reynolds stress transport equations to algebraic equations for the Reynolds stress tensor, containing production and dissipation terms and introducing a nonlinear relationship through the dissipation tensor and the pressure-strain correlation tensor. These models, referred to as algebraic stress models in the turbulence literature, have been developed further and have demonstrated the capability to predict secondary flow effects in ducts, among other applications. However, the nonlinear algebraic system must be solved at every point, and the required nonlinear equation solvers cause numerical issues when implemented in CFD codes.
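The approximation described above can be sketched as follows (standard notation, as I use it here: \(\overline{u_iu_j}\) is the Reynolds stress, \(k\) the turbulent kinetic energy, \(P_{ij}\), \(\Pi_{ij}\), \(\varepsilon_{ij}\) the production, pressure-strain, and dissipation tensors, \(P_k\) and \(\varepsilon\) the corresponding terms in the \(k\) equation, and \(\mathcal{D}\) the diffusive transport). Rodi’s weak-equilibrium hypothesis is

\[
\frac{D\overline{u_iu_j}}{Dt} - \mathcal{D}_{ij} \approx \frac{\overline{u_iu_j}}{k}\left(\frac{Dk}{Dt} - \mathcal{D}_k\right) = \frac{\overline{u_iu_j}}{k}\,\bigl(P_k - \varepsilon\bigr),
\]

which collapses the differential transport equation to the implicit algebraic relation

\[
\frac{\overline{u_iu_j}}{k}\,\bigl(P_k - \varepsilon\bigr) = P_{ij} + \Pi_{ij} - \varepsilon_{ij}.
\]

The nonlinearity enters through \(\Pi_{ij}\) and \(\varepsilon_{ij}\), which is why these models require a nonlinear algebraic solve at each point.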

## Creation of Probability

On chance – Ancient civilizations, despite their engagement in games of chance and divinatory practices, did not formalize the underlying principles of probability. The creation of formal probability theory is linked with gambling and divination, stretching back to antiquity. However, the mathematical formulation of chance and probability remained elusive until the 17th century, in and around the City of Light. Sporadic pre-17th-century attempts to quantify chance highlight the absence of a systematic approach and the prevailing attitudes toward destiny and divination. Central to the breakthrough is the exchange between Blaise Pascal and Pierre de Fermat, initiated by the enigmatic “problem of points.” This problem, arising from the premature ending of a game of dice, required a division of the stakes based on the potential outcomes of the unfinished game. The correspondence between Fermat and Pascal produced the solution and also laid the groundwork for a new mathematical discipline. The Pascal-Fermat dialogue (a beautiful and short collection of letters) created a surge of research in probability theory, leading to the contributions of Christiaan Huygens and Jacob Bernoulli and to the establishment of the normal distribution by Abraham de Moivre. While the initial research into probability was predominantly theoretical, the subject moved from dice gambling in France to more general methods for the natural sciences: for example, application in actuarial science for life-expectancy prediction and eventual integration into statistical analysis.
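The fair division can be computed by enumerating the possible continuations of the interrupted game, which is essentially Pascal’s recursive argument. A minimal sketch (my own, for a fair game where player A needs `a` more round wins and player B needs `b`):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def win_prob(a: int, b: int) -> float:
    """Probability that player A wins the match when A needs `a` more
    round wins and B needs `b`, each round being a fair coin flip."""
    if a == 0:
        return 1.0  # A has already won
    if b == 0:
        return 0.0  # B has already won
    # Next round is won by A or B with equal probability.
    return 0.5 * (win_prob(a - 1, b) + win_prob(a, b - 1))

# Classic case from the letters: A needs 2 wins, B needs 3.
# A's fair share of the stakes is 11/16.
print(win_prob(2, 3))  # 0.6875
```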

Early works and outcomes are shown in:

- Huygens, C. (1657). *De Ratiociniis in Aleae Ludo* (Considerations on Dice Play).
- Bernoulli, J. (1713). *Ars Conjectandi* (The Art of Conjecturing).
- de Moivre, A. (1738). *The Doctrine of Chances*.
- Pascal, B., & Fermat, P. Correspondence on the Problem of Points.

## Saffman \(k-\omega^2\)

Saffman’s \(k-\omega^2\) turbulence model plays a role in the line of two-equation turbulence models that dates back to Kolmogorov in the 1940s.

The basis of Saffman’s model is the description of a statistically steady or ‘slowly varying’ inhomogeneous turbulence field alongside the mean velocity distribution. The model states that turbulence can be described by ‘densities’ adhering to nonlinear diffusion equations. These equations account for a spectrum of phenomena: convection by the mean flow, amplification due to interaction with the mean velocity gradient, dissipation from turbulence interaction, and diffusion by self-interaction.

Central to the Saffman model are two key equations: the energy equation and the \(\omega^2\) equation. The energy equation integrates terms for the amplification of energy owing to the mean velocity gradient and dissipation attributable to vorticity, coupled with a diffusion term governed by eddy viscosity. This eddy viscosity also facilitates the diffusion of mean momentum by turbulent fluctuations. The \(\omega^2\) equation, which governs the changes of vorticity density within the turbulent field, stands as the definitive feature, setting the Saffman model apart by explicitly considering the behavior of vorticity density. This is a bit different from the specific dissipation rate \(\omega\) of later models.
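From memory, the structure just described can be written schematically as follows (a sketch of the form of the equations, not Saffman’s exact notation or closure coefficients): with pseudo-energy \(e\), pseudo-vorticity \(\omega\), mean shear magnitude \(|S|\), eddy viscosity \(\nu_t = e/\omega\), and model constants \(\alpha, \beta, \sigma\) (primed for the second equation),

\[
\frac{De}{Dt} = e\left(\alpha\,|S| - \beta\,\omega\right) + \nabla \cdot \left[(\nu + \sigma\,\nu_t)\,\nabla e\right], \qquad \nu_t = \frac{e}{\omega},
\]

\[
\frac{D\omega^2}{Dt} = \omega^2\left(\alpha'\,|S| - \beta'\,\omega\right) + \nabla \cdot \left[(\nu + \sigma'\,\nu_t)\,\nabla \omega^2\right].
\]

Each term mirrors the phenomena listed above: amplification by the mean velocity gradient, dissipation through the vorticity density, and eddy-viscosity diffusion.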

Saffman’s model is further demonstrated through analytical and numerical solutions for a variety of flow scenarios, including Couette flow, plane Poiseuille flow, and free turbulent flows. The model predicts phenomena such as the von Kármán constant in the law of the wall via estimations of the dimensionless constants within the equations, which is remarkable given the early-1970s publication date. The Saffman \(k-\omega^2\) model is historically important for these reasons in the early development of two-equation \(k-\omega\) models.

## Baldwin Barth One-Equation Model Reviewed

During the present semester, I reexamined the Baldwin-Barth one-equation turbulence model. This model constitutes a reformulation of the \(k\) and \(\epsilon\) equations, culminating in a single partial differential equation for the turbulent eddy viscosity, denoted as \(\nu_t\), multiplied by the turbulent Reynolds number, \(Re_t\). The model’s closure for the Reynolds-averaged Navier-Stokes (RANS) equations was a major advancement in turbulence modeling, laying the groundwork for the renowned Spalart-Allmaras (SA) model, though the SA model was also influenced by Soviet research.
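As I recall the closure structure (a sketch of its shape, not the full model with all its coefficients and terms), the turbulent Reynolds number is the usual

\[
Re_t = \frac{k^2}{\nu\,\epsilon},
\]

the model transports the combined working variable \(\nu\,Re_t\) in its single partial differential equation, and the eddy viscosity is then recovered as

\[
\nu_t = c_\mu\,(\nu\,Re_t)\,D_1 D_2,
\]

where \(c_\mu\) is the familiar closure coefficient and \(D_1\), \(D_2\) are near-wall damping functions. Working with \(\nu\,Re_t\) rather than \(k\) and \(\epsilon\) separately is what collapses the two-equation system into one equation.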

A notable aspect of this model is the intuitive appeal of the turbulent Reynolds number to engineers, coupled with its measurability, which simplifies the specification of inlet boundary conditions. Despite its innovative closure approach, the model’s widespread adoption was hindered by several limiting factors. Nevertheless, it served as a foundational framework for subsequent research efforts, many of which remain highly relevant in contemporary applications.