Removal of Tenure at the University of Florida

A university cannot exist without academic freedom. Academic freedom has been the core value under which American universities have operated for nearly a century. It is articulated in multiple works, including the Chicago Principles and the statements of the American Association of University Professors. Academic freedom is protected by the tenure system. Tenure, which was popularized in America, is a major reason the country today has so many excellent universities. Without tenure, a university is one in name only.

Less than two years ago, I wrote that I had earned and been awarded tenure at the University of Florida. When I came home last night, I realized that tenure no longer exists at the University of Florida, due to the new policies being put in place by administrators across the university.

Though I hold tenure at the University of Florida, I believe it is tenure in name only. It is a facade covering what was once an Ivory Tower and is now an Iron Tower.


Binary’s Origin

Binary numbers were originally used for encryption and communication, a fact recognized as early as the 17th century by Francis Bacon. Bacon encoded the alphabet using five-character strings drawn from a two-symbol alphabet, an essentially binary scheme. This laid the framework for subsequent developments in coded communication, such as the telegraph (Samuel Morse), which relied on the binary tones of ‘dots’ and ‘dashes.’ Binary’s mathematical formulation was created by Gottfried Wilhelm Leibniz, who recognized the system’s elegance. Leibniz’s research provided a formal basis for binary arithmetic, outlining methods for converting between binary, decimal, and other number systems.
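Bacon’s scheme can be sketched in a few lines of modern code. This is a reconstruction, not Bacon’s own tables: it assumes his 24-letter alphabet (I/J and U/V merged) and maps each letter’s index to five binary digits written over the symbols {a, b}.

```python
# A modern sketch of Bacon's "bilateral" cipher: each letter of a
# 24-letter alphabet (I/J and U/V merged, as in Bacon's era) is
# encoded as a 5-symbol string over the two-symbol alphabet {a, b}.
ALPHABET = "ABCDEFGHIKLMNOPQRSTUWXYZ"  # 24 letters

def bacon_encode(text: str) -> str:
    """Encode letters as 5-digit 'a'/'b' strings by alphabet index."""
    out = []
    for ch in text.upper().replace("J", "I").replace("V", "U"):
        if ch in ALPHABET:
            idx = ALPHABET.index(ch)
            bits = format(idx, "05b")          # 5 binary digits
            out.append(bits.replace("0", "a").replace("1", "b"))
    return " ".join(out)

print(bacon_encode("BACON"))  # aaaab aaaaa aaaba abbab abbaa
```

Each 5-symbol group carries $2^5 = 32$ possible values, more than enough for the 24-letter alphabet, which is exactly the counting argument Leibniz’s binary arithmetic formalizes.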


  • Bacon, F. (1605), “The Advancement of Learning.” Book VI. London: Henrie Tomes.
  • Leibniz, G.W. (1703), “Explication de l’Arithmétique Binaire.” Mémoires de l’Académie Royale des Sciences, pp. 85-89. Paris.

Earth Density and Cavendish, 1798

Newton knew that the force of gravity causes falling objects near the Earth’s surface (such as the famous apple) to accelerate toward the Earth at a rate of $9.8\,\text{m/s}^2$. He also knew that the Moon accelerated toward the Earth at a rate of $0.00272\,\text{m/s}^2$. If it was the same force that was acting in both instances, Newton had to come up with a plausible explanation for the fact that the acceleration of the Moon was so much less than that of the apple. What characteristic of the force of gravity caused the more distant Moon’s rate of acceleration to be a mere $1/3600$th of the acceleration of the apple?

It seemed obvious that the force of gravity was weakened by distance. But what was the formula for determining it? An object near the Earth’s surface is approximately 60 times closer to the center of the Earth than the Moon is. It is roughly $6{,}350\,\text{km}$ from the surface to the center of the Earth, and the Moon orbits at a distance of $384{,}000\,\text{km}$ from the Earth. The Moon experiences a force of gravity that is $1/3600$ or $1/(60)^2$ that of the apple. Newton realized that the force of gravity follows an inverse square law $(6{,}350 \times 60 \approx 384{,}000)$.
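The arithmetic above can be checked directly. A minimal sketch using the rounded figures from the text; small departures from exactly 60 and 3600 come only from the rounding:

```python
# Checking Newton's inverse-square reasoning with the rounded figures above.
r_surface = 6_350.0    # km, distance from Earth's surface to its center
r_moon = 384_000.0     # km, Earth-Moon distance
g_surface = 9.8        # m/s^2, the apple's acceleration
g_moon = 0.00272       # m/s^2, the Moon's acceleration

ratio_distance = r_moon / r_surface   # ~60: Moon is ~60x farther away
ratio_accel = g_surface / g_moon      # ~3600: apple accelerates ~3600x faster

print(ratio_distance)       # ~60.5
print(ratio_accel)          # ~3603
print(ratio_distance ** 2)  # ~3657, close to 3600: consistent with 1/r^2
```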

In 1798, by careful experiment, Henry Cavendish succeeded in making an accurate determination of $G$, the gravitational constant, as $6.67 \times 10^{-11}\,\text{N·m}^2/\text{kg}^2$. This meant that the mass of the Earth could now be determined. A 1-kg mass at the Earth’s surface is approximately 6.3 Mm from the center of the Earth, and the force acting on it is approximately 10 N. Substituting these values into the gravity equation, we find that the mass of the Earth is roughly $6 \times 10^{24}\,\text{kg}$.
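The substitution is a one-liner: solving Newton’s law $F = GmM/r^2$ for $M$. A sketch with the rounded values quoted above:

```python
# Back-of-the-envelope Earth mass from F = G*m*M/r^2, solved for M,
# using the rounded values quoted in the text.
G = 6.67e-11   # N·m^2/kg^2, Cavendish's gravitational constant
m = 1.0        # kg, test mass at the surface
r = 6.3e6      # m, distance to Earth's center (~6.3 Mm)
F = 10.0       # N, weight of the 1-kg mass (rounded from ~9.8 N)

M_earth = F * r**2 / (G * m)
print(f"{M_earth:.2e} kg")  # ~5.95e24 kg, i.e. roughly 6e24 kg
```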

See Cavendish, H., Experiments to Determine the Density of the Earth. Philosophical Transactions of the Royal Society of London, 1798.

Rodi and Algebraic Stress Models

Rodi developed nonlinear algebraic stress models by approximating the convective transport terms of the Reynolds stress tensor: the Reynolds stress, normalized by the turbulent kinetic energy, is assumed to be transported in proportion to the turbulent kinetic energy itself, which is governed by its own transport equation. This approximation reduces the Reynolds stress transport equations to algebraic equations for the Reynolds stress tensor, each containing production and dissipation terms, and introduces a nonlinear relationship through the dissipation tensor and the pressure-strain correlation tensor. These models, referred to as algebraic stress models in the turbulence literature, have been further developed and have demonstrated the capability to predict secondary flow effects in ducts and other applications. However, the nonlinear equation solvers they require can cause numerical issues when implemented in CFD codes.

Creation of Probability

Ancient civilizations, despite their engagement in games of chance and divinatory practices, did not formalize the underlying principles of probability. The origins of formal probability theory are linked with gambling and divination stretching back to antiquity, yet a mathematical formulation of chance remained elusive until the 17th century, in and around Paris. The sporadic pre-17th-century attempts to quantify chance highlight the absence of a systematic approach and the prevailing attitudes toward destiny and divination. Central to the birth of the field is the exchange between Blaise Pascal and Pierre de Fermat, initiated by the enigmatic “problem of points.” This problem, arising from the premature ending of a dice game, required a division of stakes based on potential game outcomes. The correspondence between Fermat and Pascal not only produced a solution but also laid the groundwork for a new mathematical discipline. The Pascal-Fermat dialogue (a beautiful and short collection of letters) created a surge of research in probability theory, leading to the contributions of Christiaan Huygens and Jacob Bernoulli and to the establishment of the normal distribution by Abraham de Moivre. While initial research into probability was predominantly theoretical, the subject grew from dice gambling in France into general methods for the natural sciences: for example, application in actuarial science for life-expectancy prediction and eventual integration into statistical analysis.
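The problem of points has a clean recursive solution in the Pascal-Fermat spirit: split the stakes in proportion to each player’s probability of winning the unfinished match. A minimal sketch assuming fair (50/50) rounds:

```python
from functools import lru_cache

# The "problem of points": two equally skilled players stop a match early.
# Player A needs `a` more wins, player B needs `b`. The fair division of
# stakes is proportional to each player's probability of winning.
@lru_cache(maxsize=None)
def p_a_wins(a: int, b: int) -> float:
    """Probability that A wins the match from state (a, b), fair rounds."""
    if a == 0:
        return 1.0   # A has already won
    if b == 0:
        return 0.0   # B has already won
    # Each round A wins with probability 1/2, reducing one player's count.
    return 0.5 * p_a_wins(a - 1, b) + 0.5 * p_a_wins(a, b - 1)

# Classic case: A needs 1 more win, B needs 2 -> A should get 3/4 of the pot.
print(p_a_wins(1, 2))  # 0.75
```

The recursion mirrors Pascal’s arithmetic-triangle argument: the value at each state is the average of the two states one win away.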

Early works and their outcomes are collected in:

  • Huygens, C. (1657), “De Ratiociniis in Aleae Ludo” (Considerations on Dice Play).
  • Bernoulli, J. (1713), “Ars Conjectandi” (The Art of Conjecturing).
  • de Moivre, A. (1738), “The Doctrine of Chances.”
  • Pascal, B., & Fermat, P., Correspondence on the Problem of Points.

Saffman $k-\omega^2$

Saffman’s $k-\omega^2$ turbulence model plays a role in the line of two-equation turbulence models that stretches back to Kolmogorov in the 1940s.

The basis of Saffman’s model is the portrayal of a statistically steady or ‘slowly varying’ inhomogeneous turbulence field alongside the mean velocity distribution. The model states that turbulence can be described by ‘densities’ adhering to nonlinear diffusion equations. These equations account for a spectrum of phenomena, including convection by the mean flow, amplification due to interaction with a mean velocity gradient, dissipation from turbulence interaction, and diffusion from self-interaction.

Central to the Saffman model are two key equations: the energy equation and the $\omega^2$ equation. The energy equation integrates terms for the amplification of energy owing to the mean velocity gradient and dissipation attributable to vorticity, coupled with a diffusion term governed by eddy viscosity. This eddy viscosity also facilitates the diffusion of mean momentum by turbulent fluctuations. The $\omega^2$ equation, which governs the changes of vorticity density within the turbulent field, stands as the definitive feature, setting the Saffman model apart by explicitly considering the behavior of vorticity density. This differs from models built on the specific dissipation rate ($\omega$).
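The structure just described can be written schematically. The following is a hedged sketch, not Saffman’s exact notation or coefficients: $e$ is the energy density, $\omega$ the vorticity density, $U(y)$ the mean velocity, and $\alpha$, $\beta$, $\sigma$ (primed and unprimed) model constants. The form shows the production, dissipation, and eddy-viscosity diffusion terms described above for a thin shear layer:

```latex
% Schematic only: exact terms and coefficients differ in Saffman (1970).
\begin{aligned}
\frac{De}{Dt} &= \alpha\, e \left|\frac{\partial U}{\partial y}\right|
  - \beta\, e\,\omega
  + \frac{\partial}{\partial y}\!\left[\left(\nu + \sigma \frac{e}{\omega}\right)\frac{\partial e}{\partial y}\right], \\
\frac{D\omega^{2}}{Dt} &= \alpha'\,\omega^{2} \left|\frac{\partial U}{\partial y}\right|
  - \beta'\,\omega^{3}
  + \frac{\partial}{\partial y}\!\left[\left(\nu + \sigma' \frac{e}{\omega}\right)\frac{\partial \omega^{2}}{\partial y}\right].
\end{aligned}
```

Note the $\omega^{3}$ destruction term: dissipating a quantity with the dimensions of $\omega^2$ at a rate set by $\omega$ itself is what closes the second equation without reference to a length scale.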

Saffman’s model is further demonstrated through analytical and numerical solutions for a variety of flow scenarios, including Couette flow, plane Poiseuille flow, and free turbulent flows. Notably, the model predicts phenomena such as the von Kármán constant in the law of the wall via estimations of the dimensionless constants within the equations, which is impressive given the early 1970s publication date. The Saffman $k-\omega^2$ model is historically important for these reasons as an early step in the development of two-equation $k-\omega$ models.

Baldwin Barth One-Equation Model Reviewed

During the present semester, I reexamined the Baldwin-Barth one-equation turbulence model. This model constitutes a reformulation of the $k$ and $\epsilon$ equations, culminating in a single partial differential equation for the turbulent eddy viscosity, $\nu_t$, multiplied by the turbulent Reynolds number, $Re_t$. The model’s closure for the Reynolds-averaged Navier-Stokes (RANS) equations was a major advancement in turbulence modeling, laying the groundwork for the renowned Spalart-Allmaras model (which was also influenced by Soviet research).

A notable aspect of this model is the intuitive appeal of the turbulent Reynolds number for engineers, coupled with its measurability, which simplifies the specification of boundary conditions at inlets. Despite its innovative closure approach, the model’s widespread adoption was hindered by several limiting factors. Nevertheless, it served as a foundational framework for subsequent research efforts, many of which remain highly relevant in contemporary applications.

Reflections on the Spalart-Allmaras Turbulence Model, 2024

The Spalart-Allmaras turbulence model, a one-equation turbulence model, was a response to the inadequacies observed in zero-equation models, particularly their lack of predictive accuracy in complex flow scenarios such as wakes, shear layers, and shock wave boundary layer interactions.

The creation of the Spalart-Allmaras model was influenced by multiple prior works, including the Baldwin-Barth model and insights gained from the two-equation models developed by that time. Notably, the works of Soviet mathematicians played a pivotal role. Their contributions, though primarily published in Russian, provided a foundation for the development of the model by Spalart and Allmaras.

Central to the Spalart-Allmaras model is the equation for tracking eddy viscosity, which features a production term derived from vorticity magnitude, signifying the generation of turbulence. This is complemented by a diffusion term and a destruction term, tailored to account for the diffusion and dissipation of turbulence, respectively. The model underwent significant refinement to ensure its applicability to fully turbulent flows, highlighting the mathematical craftsmanship behind its formulation.
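The equation just described is commonly written as follows (the standard form as usually quoted in the literature, with the transition/trip terms omitted for brevity; $\tilde{\nu}$ is the working variable, $\tilde{S}$ a modified vorticity magnitude, $d$ the distance to the wall, and $c_{b1}$, $c_{b2}$, $c_{w1}$, $\sigma$, $f_w$ calibrated constants and functions):

```latex
% SA transport equation for the working variable \tilde{\nu}
% (transition/trip terms omitted); the eddy viscosity is
% \nu_t = \tilde{\nu} f_{v1}.
\frac{D\tilde{\nu}}{Dt}
  = \underbrace{c_{b1}\,\tilde{S}\,\tilde{\nu}}_{\text{production}}
  - \underbrace{c_{w1}\, f_w \left(\frac{\tilde{\nu}}{d}\right)^{2}}_{\text{destruction}}
  + \underbrace{\frac{1}{\sigma}\left[\nabla\cdot\big((\nu + \tilde{\nu})\,\nabla\tilde{\nu}\big)
    + c_{b2}\,\left(\nabla\tilde{\nu}\right)^{2}\right]}_{\text{diffusion}}
```

The production term scales with the vorticity magnitude through $\tilde{S}$, and the destruction term’s $(\tilde{\nu}/d)^2$ factor is the source of the wall-distance dependency discussed below.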

A distinctive feature of the model is its treatment of transitional flows through a source-like term, enabling the prediction of transition behaviors based on user inputs. This aspect, however, introduces a dependency on the distance to the wall parameter, which some later models have sought to mitigate.

The Spalart-Allmaras model’s appeal lies in its locality and independence from structured grid requirements, making it a versatile tool for various flow conditions, particularly in unstructured grid environments. Its simplicity, robustness, and ease of implementation are evident in the model’s formulation presented in the paper’s appendix, making it accessible to a wider audience within the computational fluid dynamics community. This is one of the reasons it is so successful.

Despite its empirical foundations and term-by-term development approach (with terms inspired by previous models and knowledge of experiments), the model bears similarities to the $Re_t$ one-equation model (also from NASA Ames). While it may not rival models such as $k$-$\epsilon$ in terms of derivation from first principles, the Spalart-Allmaras model’s pragmatic approach and careful calibration have set it apart as one of the most popular models internationally. The model’s development was significantly aided by access to experimental data and collaborations with NASA’s high-performance computing research group at NASA Ames, among others.


AIAA Journal – Fully Parabolized Hypersonic Sonic Boom Prediction with Real Gas and Viscous Effects

Abstract: We present a methodology to predict the aerodynamic near-field and sonic boom signature from slender bodies and waveriders using a fully parabolized approach. We solve the parabolized Navier–Stokes equations, which are integrated via spatial marching in the streamwise direction. We find that unique physics must be accounted for in the hypersonic regime relative to the supersonic regime, including viscous, nonequilibrium, and real gas effects. The near-field aerodynamic pressure is propagated through the atmosphere to the ground via the waveform parameter method. To illustrate the approach, three bodies are analyzed: the Sears–Haack geometry, the HIFiRE-5, and a power-law waverider. Ambient Mach numbers range from 4 through 15. The viscous stress tensor is essential for accurate hypersonic prediction. For example, viscous effects increase near-field and sonic boom overpressure by 15.7% and 8.49%, respectively, for the Sears–Haack geometry. The difference between viscous and inviscid predictions of the near-field is due to the hypersonic boundary layer. The computational cost of predicting the near-field is approximately 6.6% of that of fully nonlinear computational fluid dynamics.