Author

John Chawner

Community Member

CFD
OpenACC
OpenMP
fluid dynamics
Corrdesa

I’m Julio Mendez and This Is How I Mesh

17 Mar 2023 • 14 minute read

Hello, I’m Julio Mendez, a CFD Scientist currently working at Corrdesa and using CFD to study electrochemical applications.

My journey in CFD started in 2007 when I was looking for a topic for my undergraduate thesis at La Universidad del Zulia in my hometown in Venezuela. I knocked on a door that changed my entire life in one year. I did not know how to write "Hello world," and I ended up extending an in-house CFD solver with turbulence capabilities in FORTRAN 77. With the experience I have today, I realize it was not a big deal, but for someone who knew nothing about numerical computation, it was a great deal. After I defended my undergraduate thesis, I was invited to enroll in the master's degree program in Computational Thermal Sciences. That was the best academic experience, and it opened different opportunities later on.

John: Are undergraduate theses typical? They weren't when I was an undergrad, although I had a great time doing an independent study with a professor who asked me and a friend to bring a shock tube back to life.

Julio: Yes, it is mandatory. You also have to do a senior project, in which my team had the pleasure of designing a steam power plant. I was in charge of the condenser design and the piping system. Because of that exposure, I had my first work experience at an engineering consulting firm called Inelmeca, where I spent the most beautiful five years of my professional career. You have to choose wisely the projects that can positively impact your life. I stumbled upon CFD just by coincidence with my thesis; all my background was in thermal fluids, not applied math and numerical computation.

In 2013, I applied for a scholarship to do my Ph.D., and luckily, I got a reply! The professor was interested in my work from my undergraduate thesis, not even my master's thesis! (Now you know why that door changed my life!) I started my Ph.D. in 2014 at North Carolina A&T State University, working again on turbulence modeling with applications to marine and wind turbines using the parabolized Navier-Stokes equations. Still, my advisor realized we needed an extra kick in our research to make substantial progress. That decision also changed the course of my Ph.D. I started taking classes in the computer science department, and that is how I was exposed to the beautiful world of High-Performance Computing (HPC). I was fortunate to have one of the best professors in the area, Dr. Kenneth Flurchick (RIP), who later became my HPC mentor and served on my Ph.D. committee. That experience taught me the ins and outs of numerical computation, from hardware to software development, with applications to HPC systems. During my Ph.D. research, we also explored some ideas about temporal-spatial Large Eddy Simulations. Unfortunately, none of that research touched the real world because funds were cut. Those cuts forced me to move to another research area, the one that allowed me to become Dr. Mendez. My final Ph.D. thesis was on supersonic/hypersonic flows, for which I co-developed a new algorithm. Figure 1 shows the numerical solution of a sonic jet in crossflow interaction with a freestream Mach number of 6; the Mach number of the cross-injection was fixed at 1.



Figure 1. Streamlines with V-Velocity contours. (The Sonic Jet Cross Flow Interaction)

In addition to working on the development side of the new framework, I was in charge of porting the source code to HPC, a task I enjoyed and through which I learned a lot about programming. Here, I worked with multiple systems and programming models: CUDA, OpenMP, OpenACC, MPI, and combinations of them. I think all of that experience shaped the type of engineer/scientist I am nowadays. My interests vary from computer science to numerical analysis. Throughout all these years, I've worked on linear solvers, explicit techniques, numerical methods, and high-fidelity numerical computation. I have recently developed tools for the Design of Experiments (DoE) and surrogate modeling to accelerate engineering designs, roughly along the lines of the sketch below.
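
For readers curious what such a DoE-plus-surrogate loop can look like, here is a minimal Python sketch using SciPy's Latin hypercube sampler and a scikit-learn Gaussian process. It is an illustrative sketch only; the toy objective function and the variable names are assumptions, not the actual tools developed at Corrdesa.

    # Minimal DoE + surrogate-model sketch (illustrative only).
    # Assumes SciPy >= 1.7 and scikit-learn; the objective function is a toy
    # stand-in for an expensive CFD run.
    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_cfd_run(x):
        """Placeholder for a CFD evaluation at design point x = (x1, x2)."""
        return np.sin(3.0 * x[0]) + 0.5 * x[1] ** 2

    # 1. Design of Experiments: Latin hypercube sample over the design space.
    sampler = qmc.LatinHypercube(d=2, seed=42)
    unit_samples = sampler.random(n=20)
    X = qmc.scale(unit_samples, l_bounds=[0.0, -1.0], u_bounds=[1.0, 1.0])

    # 2. Evaluate the expensive model at each design point.
    y = np.array([expensive_cfd_run(x) for x in X])

    # 3. Fit a Gaussian-process surrogate to the sampled responses.
    kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
    surrogate = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    surrogate.fit(X, y)

    # 4. Query the surrogate (cheap) instead of the solver (expensive).
    x_new = np.array([[0.3, 0.2]])
    mean, std = surrogate.predict(x_new, return_std=True)
    print(f"surrogate prediction: {mean[0]:.3f} +/- {std[0]:.3f}")

The design choice is the usual one: spend the expensive solver runs on a space-filling sample, then let the cheap surrogate drive the many design-iteration queries.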

As you all can imagine, it has been hard for me to stop doing fundamental research. This is the part that I enjoyed the most as a CFD scientist. I continue collaborating with academia, working with my previous Ph.D. research group, other groups, or independently. I currently live in Newnan, GA, and work as a CFD Engineer at Corrdesa LLC. We are a small company, but we have a small HPC system (240 CPUs) and good workstations. For example, my workstation is a Dell Precision 5820. If I had to pick one word to describe my work, it would be "picky." I need to work on that because I always strive for perfection, but sometimes it is necessary to draw a line and move forward.

What do you see are the biggest challenges facing CFD in the next 5 years?

I have to cite the slow progress we see in numerical methods, or rather the slow transition of high-fidelity numerical methods to industrial applications. I know that most of these developments are still at an early stage, but most of the big CFD companies are incorporating these new methodologies at a slow pace. On the other hand, some new companies have decided to build their products around these more recent algorithms, which I find pretty nice.

Another challenge is data. We create a lot of data, and sometimes it is tough to manage. That is even more complicated in research, for example in DNS studies and the like.

Lastly, things evolve so fast that it is pretty challenging to keep up and even maintain previous developments. For example, we all know that CUDA is a very popular but vendor-dependent programming model, and many new HPC systems will not support solvers that rely on CUDA alone. We see similar situations elsewhere in the HPC arena, and many more with open-source libraries.

What are you currently working on?

I am currently working in two different areas. First, in academia, I am collaborating with Professor Tapan K. Sengupta's group on high-fidelity numerical computations using Theta (one of Argonne's supercomputers). This collaboration is something I've always wanted to do. You know, it is like a dream come true when a professor whose work you admire, respect, and follow allows you to join his group in an international collaboration.

On the other hand, I am working on multiple federal/commercial projects at Corrdesa LLC. These projects are related to the design of tools for electrochemical applications, and I am also working on a corrosion toolset to streamline the CFD simulation process. Sometimes, these tools are needed in an area where the end user is not a CFD engineer but somebody in sustainability. The objective is to pre-package all the complexity behind CFD in a GUI where the user can test multiple scenarios without needing to be a CFD expert.

Due to the nature of my work, I cannot share anything that I do commercially, but here is a picture from the draft we submitted to the AIAA Aviation Forum about the collaboration with Professor Sengupta. This is not the pretty picture we are used to seeing in CFD, but it has important value. We are using Argonne's supercomputer (thanks to the DD allocation program). This plot shows the average wall time vs. grid points per processor. The preliminary results obtained on Theta (the ALCF supercomputer) suggest a linear speed-up at the resolution used in this study. Notice we are using 32.77 billion points on 266K CPUs. I am very proud of this work, where we are performing a Direct Numerical Simulation of the Rayleigh-Taylor instability, which is triggered by an acoustic mechanism involving infra-to-ultrasonic pulses that travel to either side of the interface.



Figure 2. The average wall time taken per time step to perform Runge-Kutta time integration

John: You’re getting linear scaling on 266,000 CPUs on a mesh with over 32 billion points? Is there anything specific you can cite that makes that possible?

Julio: First, it is important to acknowledge Professor Sengupta's contribution and work. His team developed this code; I am just collaborating on this recent study, and that magnificent job was carried out by his group. I had the same conversation with Professor Sengupta a few months ago, and to recap, everything boils down to cache hits and overlapping computation and communication. Fortunately, most compilers do a great job optimizing the code, but as a developer, you have to be smart about how you write your solver, and for HPC applications, you need to be very smart about how you overlap communication with computation. This is one of the most important aspects when developing codes for scientific applications. Again, I was not part of the group that developed the code, but the development team did an outstanding job, as demonstrated by the linear scalability. As you increase the number of cores (while keeping the grid fixed), you make the subdomain at the core level smaller. Therefore, there will be instances where most of the calculation fits in the cache, and your solver exploits the cache hits.

In summary, I see it in the following way. Communication time is considerably high for a small number of cores, and you also have a larger problem at the core level that needs to fetch data from main memory. As you increase the number of cores, you reduce the amount of memory allocated per processor (a smaller subdomain); hence we get more cache hits than cache misses. In addition, by increasing the number of PEs (cores), we send packets to more neighbors, but those packets are smaller, so we reduce the message volume and latency. This is my opinion, and perhaps it is wrong, so take it with a pinch of salt.
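
To make the communication/computation overlap idea concrete, here is a minimal halo-exchange sketch in Python with mpi4py. The 1-D decomposition and the simple averaging stencil are illustrative assumptions, not the solver discussed above; the point is only the ordering: post the non-blocking exchanges, update the interior, then finish the boundary cells once the halos arrive.

    # Sketch of overlapping halo-exchange communication with interior
    # computation. Uses mpi4py; the 1-D decomposition and the averaging
    # "stencil" are illustrative stand-ins, not the actual solver.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_local = 1000                      # interior points owned by this rank
    u = np.random.rand(n_local + 2)     # +2 ghost cells, one per side
    u_new = np.empty_like(u)

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # 1. Post non-blocking ghost-cell exchanges first ...
    reqs = [
        comm.Isend(u[1:2],   dest=left,    tag=0),
        comm.Isend(u[-2:-1], dest=right,   tag=1),
        comm.Irecv(u[0:1],   source=left,  tag=1),
        comm.Irecv(u[-1:],   source=right, tag=0),
    ]

    # 2. ... then update interior points that need no ghost data, so the
    #    computation hides the communication latency.
    u_new[2:-2] = 0.5 * (u[1:-3] + u[3:-1])

    # 3. Wait for the halos, then finish the two boundary-adjacent points.
    MPI.Request.Waitall(reqs)
    u_new[1] = 0.5 * (u[0] + u[2])
    u_new[-2] = 0.5 * (u[-3] + u[-1])
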

What project are you most proud of and why?

There are multiple projects I feel proud of, for sure, but there are two on which I am very proud to have stamped my last name. One of them is a paper I presented last year at AIAA SciTech. What makes that paper unique is that I did it after my Ph.D., and somehow, I did everything alone. It was my opportunity to drive my own idea, do more CFD development, write proposals to get the HPC resources, and so on. It feels pretty good when you see all the things you can accomplish by working hard.

The objective of the paper was to further validate the IDS (Integro-Differential Scheme) that I worked with during my Ph.D. This time, I wanted to carry out a more in-depth analysis of the scheme, and we presented several computations demonstrating the flow-physics-capturing capabilities of the new scheme. We investigated 2D solutions of the stratified Kelvin-Helmholtz instability shear layer, the Taylor-Green vortex, and the Riemann problem. This was our first attempt to provide a thorough study of inviscid and viscous flows. You can see the difference in resolution between Figure 1 and the next figure. I used the Bridges system at the Pittsburgh Supercomputing Center (PSC), supported by NSF award number ACI-1445606, and I obtained these resources through XSEDE.



Figure 3. Density field for the stratified Kelvin-Helmholtz problem.

Professionally speaking, there is a project I am very happy I was part of. I was the developer in charge of creating a numerical computational workflow for ECM (electrochemical applications). When the project started, no commercial CFD package could handle large mesh deformation out of the box, so all the previous attempts failed because of that. When the mesh stretched too much, you had to stop the computation and manually remesh everything after making multiple modifications to the computational domain. Imagine doing this 50 to 100 times for just one tool design iteration. Hence, I proposed creating a script that measured multiple mesh metrics every time step and, based on certain criteria and predictions, remeshed the entire domain before we encountered negative volumes or the mesh quality dropped to the point where the solver diverged. For that, I decided to go with Simcenter STAR-CCM+ and Fidelity Pointwise. These two packages allowed me to create my workflow for ECM applications, along the lines of the sketch below. The results were outstanding! I hope to publish something soon.
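
For readers curious about the shape of such a remesh-trigger script, here is a minimal Python sketch of the decision logic only. The solver hooks (advance_one_step, get_mesh_metrics, remesh_domain), the thresholds, and the linear-trend prediction are hypothetical placeholders for illustration, not actual Simcenter STAR-CCM+ or Fidelity Pointwise API calls or the workflow described above.

    # Illustrative remesh-trigger logic (assumptions throughout).
    # The solver object and its methods are hypothetical placeholders.

    MIN_CELL_QUALITY = 0.05     # remesh before quality degrades toward divergence
    MIN_CELL_VOLUME = 1.0e-18   # remesh well before negative volumes appear

    def should_remesh(history, min_quality=MIN_CELL_QUALITY,
                      min_volume=MIN_CELL_VOLUME, lookahead=5):
        """Decide whether to remesh, using the current worst-cell metrics plus
        a simple linear extrapolation of their trend over the next few steps."""
        quality, volume = history[-1]
        if quality < min_quality or volume < min_volume:
            return True
        if len(history) >= 2:
            # Predict where the worst-cell metrics will be in `lookahead` steps.
            dq = history[-1][0] - history[-2][0]
            dv = history[-1][1] - history[-2][1]
            if quality + lookahead * dq < min_quality:
                return True
            if volume + lookahead * dv < min_volume:
                return True
        return False

    def run_ecm_case(n_steps, solver):
        history = []
        for step in range(n_steps):
            solver.advance_one_step()                  # deform mesh, solve
            history.append(solver.get_mesh_metrics())  # (worst quality, min volume)
            if should_remesh(history):
                solver.remesh_domain()                 # rebuild mesh, map solution
                history.clear()
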

Are you reading any interesting technical papers we should know about?

Yes, I try to read papers frequently. I find interesting ideas, and more importantly, it keeps me up to date with new methodologies. I am reading multiple papers with multiple objectives. For example, I recently read the note “Using Machine Learning for formulating new wall functions for Large Eddy Simulation: A second attempt” by L. Davidson. I wanted to know Professor Davidson’s view on ML for LES. I have followed his work, and his contributions were very useful during my Ph.D.

I recently read “On the role of spectral properties of viscous flux discretization for flow simulation on marginally resolved grids.” This paper is from Steven Frankel’s group. If you work on LES, you have to know his work.

I also recently read “A High Accuracy Preserving Parallel Algorithm for Compact Schemes for DNS” and “Analysis of Pseudo-spectral methods used for numerical simulation of Turbulence,” both from Professor Sengupta’s group. I am also very interested in CO2 capture, and I have read some attempts by CFD groups to model CO2 absorption through source terms instead of modeling the reaction itself. One nice paper is “Numerical Modeling of CO2 Absorption” by D. Asendrych.

Apart from that, I frequently revisit Professor Denaro’s papers about his work on LES; outstanding work! A paper that was an eye-opener for me on LES and numerical computation is “What does Finite Volume-based implicit filtering really resolve in Large-Eddy Simulations?” by Professor Filippo Maria Denaro.

What software or tools do you use every day?

Simcenter STAR-CCM+ and Fidelity Pointwise. When it comes to post-processing, I use ParaView and Tecplot. My go-to language for post-processing data and many other things is Python, and for CFD development, I use Fortran.

What does your workspace look like?

I work from home and also go to the office a few times per week for meetings and to catch up with the team. Due to some restrictions, I cannot post pictures from the office, but I am sharing a picture of my home office, the “Batcave,” where most things take place.

What do you do outside the world of CFD?

I love planes, and taking flight lessons is on my bucket list, so I read a lot about the science behind flying and watch vlogs on YouTube. I read about general aviation aircraft. I like watching movies with my wife and daughter and traveling to visit family in Florida and North Carolina.

What is some of the best CFD advice you’ve ever received?

“Just because it shines does not mean it is gold.” When we are new in this field, we see the colors from a simulation as something magical, and we tend to take them for granted. So, when I started in CFD, my mentor recommended being very critical of the results, to question my results more than anything else, to always try to pick apart my own study, and to never settle for anything I was not proud to stamp my last name on. CFD is a wonderful area, but it is easy to fall into the trap of becoming a button pusher. You always need to question your results. You must continue your learning process because we use computers to solve complex problems, but in the back end there is a lot going on.

Also, my mentor in Venezuela did not allow me to sit down at a computer without studying each page of Patankar's book. He used to quiz me every Friday, and that helped me a lot later when I started programming CFD solvers; only after four years in CFD did I start using commercial packages. That was back when you had to build your mesh in Gambit. Fortunately, we have Fidelity Pointwise today!

If I had to summarize all the good recommendations: start from the bottom up, and do not take shortcuts!

If you had to pick a place to have dinner, where would you go?

Without a doubt, I would go back to a restaurant in my home country where I have beautiful memories with my father and mother. The restaurant is in Los Puertos de Altagracia, in Zulia, Venezuela. It sits in front of the lake, and the food is super fresh and delicious. I love los risos, a kind of fish stick with butter and cheese. There is another great restaurant called Chuita where they serve the freshest fish. They do not give cutlery to the guests because that is how we eat fish in that area, in front of the lake. Beautiful memories, for sure.

John: I’m trying to imagine eating fish without cutlery – and failing. Have you found any authentic Venezuelan food here in the U.S.?

Julio: Not really, although you can find good Venezuelan food in Florida. For example, empanadas, pastelitos, and tequeños. They are very popular and readily available in Florida.

John: Thank you for taking the time for this interview.

Julio: Thank you, John, for allowing this Venezuelan fellow to introduce himself on your blog. The CFD community acknowledges and appreciates your contribution to this area. It is an incredible pleasure to share my journey in CFD and HPC, which skyrocketed when I moved to this beautiful country, where I had the opportunity to study and work with such talented people! Happy computations and "Muchas gracias!"


© 2023 Cadence Design Systems, Inc. All Rights Reserved.
