Cadence’s Dr. Charles Hirsch added to his list of accolades in the world of computational fluid dynamics when he accepted an invitation to deliver a keynote speech at the 22nd Annual CFD Symposium of the Aeronautical Society of India on 11 Aug 2021. During this speech Dr. Hirsch shared his perspectives on the technologies that promise to overcome some of the barriers on the path to next-generation CFD capabilities.
In particular, he shared how efficient exploitation of scale-resolving simulations (SRS), high-performance computing (HPC) systems, and artificial intelligence and machine learning (AI/ML) can help us better understand turbulence, the chaotic fluctuation of a flow that is both difficult to model and key to accurate simulation results.
While today’s CFD is a broadly accepted discipline that has impacted industries ranging from aerospace to automotive and marine, there are plenty of opportunities for advancement. For transportation systems, reducing emissions and fuel consumption depends on advances in turbulence modeling and computational efficiency.
When a fluid such as air or water moves smoothly over a body, it flows in layers that don’t mix; this is called laminar flow (from the Latin lamina, meaning thin layer). We also speak of a body being streamlined such that the flow moves over it along smooth lines.
In many practical applications of fluid flow however, something happens that interrupts the fluid’s smooth motion. The layers begin mixing, starting small and then growing into chaotic fluctuations called turbulence. Turbulence can become so great that the flow detaches from the body over which it is flowing, a phenomenon called separation. The transition from laminar to turbulent flow, the chaotic behavior of the turbulent flow, and why the flow separates are challenges for fluid dynamics.
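The transition from laminar to turbulent flow is commonly characterized by the Reynolds number, the ratio of inertial to viscous forces. As a minimal sketch (the viscosity value and transition threshold below are typical textbook figures, not from the keynote):

```python
def reynolds_number(velocity, length, kinematic_viscosity):
    """Re = U * L / nu: ratio of inertial to viscous forces in a flow."""
    return velocity * length / kinematic_viscosity

# Air at roughly 15 degrees C has a kinematic viscosity near 1.48e-5 m^2/s.
# A 10 m/s flow over a 1 m body gives Re on the order of 7e5, well above
# the ~5e5 range where a flat-plate boundary layer typically transitions
# from laminar to turbulent.
re = reynolds_number(velocity=10.0, length=1.0, kinematic_viscosity=1.48e-5)
print(f"Re = {re:.3e}")
```

Higher Reynolds numbers mean inertial effects dominate, and the layered laminar structure becomes increasingly unstable.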
Suffice it to say that turbulence and separation often (but not always) have a negative impact on the performance of a design: an aircraft’s wing loses lift, an automobile experiences more drag, a turbine generates less power.
Figure 1: CFD simulation showing a snapshot of turbulent flow over a row of turbine blades.
The chaotic motion of a turbulent flow can be difficult to account for when solving the Navier-Stokes equations, the governing equations of fluid flow. The widely used Reynolds-Averaged Navier-Stokes (RANS) approach describes a statistically averaged flow by introducing mathematical terms called the Reynolds Stresses. The Reynolds Stresses, which represent the totality of the effects of the turbulent fluctuations on the averaged flow, are modeled with supplemental equations called turbulence models. Unfortunately, current turbulence models provide good predictions only when the flow is relatively well behaved (i.e. attached or not separated).
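In symbolic form (the standard textbook statement for incompressible flow, not specific to any one solver), the Reynolds averaging described above looks like this:

```latex
% Reynolds decomposition: split each flow variable into a mean and a fluctuation
u_i = \bar{u}_i + u_i'

% Averaging the Navier-Stokes equations yields the RANS momentum equation.
% The Reynolds stresses -\overline{u_i' u_j'} are the unclosed terms that
% turbulence models must supply:
\frac{\partial \bar{u}_i}{\partial t}
  + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
  = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
  + \frac{\partial}{\partial x_j}
    \left( \nu \frac{\partial \bar{u}_i}{\partial x_j}
           - \overline{u_i' u_j'} \right)
```

The averaging discards the instantaneous chaotic motion, which is why RANS is efficient, and also why everything hinges on how well the turbulence model approximates the Reynolds stresses.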
A 2014 NASA-funded report, the CFD Vision 2030 Study, succinctly documents the limitations of CFD in this regard. “In spite of considerable successes, reliable use of CFD has remained confined to a small but important region of the operating design space due to the inability of current methods to reliably predict turbulent separated flows.”
To emphasize the significance of the challenge of turbulent and separated flows, a 2019 conference presentation jointly authored by Boeing and Airbus details specific examples where CFD’s current limits restrict the portion of the flight envelope that can be addressed by CFD. These examples include juncture flows (where the trailing edge of the wing meets the fuselage), low-speed high-lift (i.e., landing) configurations, high-speed buffet, and wing icing. Removing these limitations and allowing CFD to be reliably applied across the entire flight envelope could make it possible to certify an aircraft for flight by simulation rather than flight testing.
Knowing the fluid dynamics challenges of turbulence, the question becomes how to move beyond RANS to techniques that will allow us to more accurately capture the flow physics and deliver more reliable design insights to the engineer.
Figure 2: A hierarchy of scale-resolving simulation techniques from RANS to DNS, including unsteady RANS (URANS), hybrid RANS/LES, detached eddy simulation (DES), large eddy simulation (LES), wall-modeled LES (WMLES), and wall-resolved LES (WRLES).
Fortunately, there are many alternatives to RANS that move along a spectrum of the scale of the fluid phenomena to be computed directly versus modeled. RANS is at the low end of the resolution scale as noted previously. Moving up the pyramid in Figure 2 increases the resolution by decreasing the scale of the phenomena to be computed directly. At the peak is Direct Numerical Simulation (DNS) in which all scales of the statistical turbulent fluid phenomena are computed directly.
The flip side of this situation is that the computational requirements increase significantly as one resolves smaller and smaller length scales (i.e., as one moves up the pyramid in Figure 2). And therein lies the challenge: how to obtain the benefits of scale-resolving simulations without their computational cost.
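To put rough numbers on that growth: a common rule of thumb for turbulence is that the number of grid points a DNS requires scales roughly as the Reynolds number to the 9/4 power. A quick sketch (the scaling exponent is the standard estimate; the specific Reynolds numbers are illustrative):

```python
def dns_grid_points(reynolds_number):
    """Rule-of-thumb DNS resolution requirement: N ~ Re^(9/4).

    This is the classical estimate for resolving all turbulent scales
    down to the smallest (Kolmogorov) scale; it ignores constants and
    geometry, so treat the results as orders of magnitude only.
    """
    return reynolds_number ** 2.25

# Grid-point counts explode as Re grows, which is why DNS of a full
# aircraft (Re ~ 1e7-1e8) remains out of reach.
for re in (1e4, 1e6, 1e8):
    print(f"Re = {re:.0e}: ~{dns_grid_points(re):.2e} grid points")
```

A hundredfold increase in Reynolds number multiplies the grid requirement by roughly four orders of magnitude, which is the core of the cost problem the pyramid in Figure 2 illustrates.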
The HiFi-Turb Project promises to overcome this challenge by taking a new approach to improving the fidelity of turbulent flow simulations using RANS. After all, RANS simulations are the most computationally efficient and therefore well suited for industrial applications.
The project will be applying scale-resolving simulations (SRS) to a targeted suite of benchmark problems that exhibit industrially relevant turbulent separated flow such as
These SRS will exploit the latest hybrid CPU/GPU architectures in order to produce results most efficiently. They will also exploit the latest flow solver technology in the form of High-Order Methods (HOM) to better manage the degrees of freedom in a simulation, another efficiency technique. Taken in combination, these solver technologies will allow a database of turbulence phenomena to be computed in the most efficient manner.
Guided by a team of turbulence modeling experts, AI and ML algorithms will then be applied to this database of high-fidelity results to derive new, more robust, and more physically accurate turbulence models for use in RANS simulations.
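As a purely illustrative sketch of this idea (the data, model form, and coefficient below are synthetic inventions, not HiFi-Turb results): given samples of mean strain rate and Reynolds stress from high-fidelity simulations, one can fit the coefficient of an eddy-viscosity-style closure by regression.

```python
import numpy as np

# Hypothetical toy example: "learn" a closure coefficient from high-fidelity
# data. We pretend SRS produced samples of mean strain rate S and Reynolds
# stress tau, generated here from a known eddy viscosity plus noise.
rng = np.random.default_rng(0)
S = rng.uniform(1.0, 10.0, size=200)           # synthetic strain-rate samples
nu_t_true = 0.09                               # "true" eddy viscosity in the toy data
tau = -2.0 * nu_t_true * S + rng.normal(0.0, 0.01, size=200)  # noisy "SRS" stresses

# Least-squares fit of nu_t for the model tau = -2 * nu_t * S:
# minimizing ||tau + 2*nu_t*S||^2 gives nu_t = -(S . tau) / (2 * S . S).
nu_t_fit = -np.dot(S, tau) / (2.0 * np.dot(S, S))
print(f"fitted nu_t = {nu_t_fit:.4f}")
```

Real projects of this kind replace the one-coefficient linear fit with far richer ML models and real SRS databases, but the workflow is the same: high-fidelity data in, improved RANS closure out.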
The discussion of turbulence modeling has emphasized computational cost. RANS simulations currently take hours to days on the most advanced HPC platforms, and SRS will require significantly more. Therefore, exploring other techniques for reducing computational cost makes sense.
The effective use of AI can potentially increase productivity by orders of magnitude, to the point of providing real-time performance maps and solution fields.
An AI approach to CFD would consist of two parts: training the AI model and using the AI model. Consider as an example the NASA Rotor 37 transonic compressor benchmark case. The AI model was trained with 300 separate CFD solutions comprising combinations of 49 design parameters and three speed lines (with the compressor rotating at 100%, 95%, and 90% of its nominal speed). After training with this dataset, the model was able to accurately predict (relative to CFD) the 97.3% speed line, as seen in Figure 3.
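A toy stand-in for that train-then-predict workflow (the performance function and sample points below are invented for illustration; the actual application used hundreds of CFD runs and a far richer model):

```python
import numpy as np

def cfd_performance(speed_fraction):
    """Synthetic pressure-ratio curve standing in for expensive CFD runs."""
    return 2.1 * speed_fraction ** 2 - 0.3 * speed_fraction + 0.2

# "Train" a surrogate on the three available speed lines (90%, 95%, 100%)...
train_speeds = np.array([0.90, 0.95, 1.00])
train_values = cfd_performance(train_speeds)
coeffs = np.polyfit(train_speeds, train_values, deg=2)  # quadratic surrogate

# ...then predict an unseen operating point (the 97.3% speed line) instantly,
# instead of running a new CFD simulation.
pred = np.polyval(coeffs, 0.973)
print(f"surrogate prediction at 97.3% speed: {pred:.4f}")
print(f"reference 'CFD' value:               {cfd_performance(0.973):.4f}")
```

The payoff is the same as in the Rotor 37 example: once trained, the surrogate evaluates in microseconds, where each CFD solution would take hours.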
Figure 3: Prediction of the 97.3% speed line (center datasets) with AI is remarkably accurate when compared with the CFD while being orders of magnitude faster.
Even the field data produced by the AI model are accurately predicted when compared to CFD results as seen in Figure 4.
Figure 4: The pressure fields from CFD (top) and AI (bottom) are nearly indistinguishable.
Given these positive results, the possibilities for using AI in design optimization are threefold.
Scale-resolving simulations are bringing us a deeper understanding of turbulent flows, and they are able to do so with ever-increasing efficiency thanks to high-performance computing platforms and high-order numerical algorithms. This deeper understanding can be exploited by artificial intelligence and machine learning algorithms to provide insights that will improve turbulence models for today’s Reynolds-Averaged Navier-Stokes simulations. And in their own right, AI and ML offer the potential to decrease the computational cost of simulations by orders of magnitude, making real-time performance mapping during the design cycle a reality.