This is a simple set of code that automatically finds airfoils, optimizing for the lift-to-drag ratio. `blockMesh` turned out to be very painful, so the meshing is handled by curiosityFluids' excellent mesher (blog post).
SciPy's differential evolution is used as the optimization algorithm. It's far slower than other methods, but a global optimizer seems like the better choice here. That said, other gradient-free algorithms like Nelder-Mead also found reasonable airfoils and were much faster.
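To give an idea of how this is wired up, here is a minimal sketch; the `evaluate_airfoil` stand-in, the bounds, and the solver settings are illustrative assumptions, not the repo's actual API.

```python
# Minimal sketch: differential evolution over six airfoil parameters.
# evaluate_airfoil is a dummy stand-in; the real objective meshes the airfoil,
# runs the case, and returns the lift-to-drag ratio (or a penalty on failure).
import numpy as np
from scipy.optimize import differential_evolution

BOUNDS = [(-0.1, 0.1)] * 6  # hypothetical bounds on the six shape parameters

def evaluate_airfoil(params: np.ndarray) -> float:
    # Placeholder "simulation": a smooth function standing in for the CFD result.
    return -float(np.sum((params - 0.05) ** 2))

def objective(params: np.ndarray) -> float:
    # differential_evolution minimizes, so negate the lift-to-drag ratio.
    return -evaluate_airfoil(params)

if __name__ == "__main__":
    result = differential_evolution(
        objective,
        BOUNDS,
        popsize=15,
        maxiter=50,
        updating="deferred",  # needed when candidates are evaluated in parallel
        workers=4,            # number of cases evaluated concurrently
        polish=False,         # gradient-based polishing adds little on noisy CFD output
    )
    print("best parameters:", result.x, "best score:", -result.fun)
```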
The simulation is then run. If any issues are encountered with the meshing, `blockMesh`, or `simpleFoam`, the code returns a penalty score for that candidate.
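Schematically, the failure handling looks like the sketch below; the helper names, the penalty value, and the force extraction are assumptions, not the repo's actual code.

```python
# Sketch: if any OpenFOAM step fails, return a penalty score so the optimizer
# simply discards the candidate instead of crashing the whole run.
import subprocess

FAILURE_SCORE = -1.0  # assumed penalty returned for failed cases

def run_step(command: list[str], case_dir: str) -> bool:
    """Run an OpenFOAM utility or solver inside case_dir; True on success."""
    proc = subprocess.run(command, cwd=case_dir, capture_output=True, text=True)
    return proc.returncode == 0

def read_lift_to_drag(case_dir: str) -> float:
    """Placeholder: the real code parses the force output written by the case."""
    return 0.0

def evaluate_case(case_dir: str) -> float:
    if not run_step(["blockMesh"], case_dir):
        return FAILURE_SCORE
    if not run_step(["simpleFoam"], case_dir):
        return FAILURE_SCORE
    return read_lift_to_drag(case_dir)
```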
The result is a CSV containing airfoil parameters and their performance. These can be further post-processed with ParaView.
Overall, I'm surprised at how smoothly this project went. The existing repos helped a lot, especially with meshing. I found it an interesting introduction to coupling optimization methods with non-trivial simulations. I'm still impressed at how effective differential evolution was: with a previous meshing template, it was able to find and exploit flaws with ease.
This assumes you already have OpenFOAM installed; I am using the OpenFOAM.com/ESI version. Clone the repo and source the OpenFOAM functions first. For the ESI version, this can be done with
`source /usr/lib/openfoam/openfoam2406/etc/bashrc`
This exposes functions like `blockMesh` and `simpleFoam`. Dependencies were kept to a minimum: `pandas`, `numpy`, `matplotlib`, `scipy`, and optionally `sklearn`. Once these are installed, run `python main.py`.
The code is quite minimal. This is partially on purpose: if you are using OpenFOAM, you are used to editing code files, and it is also quite difficult to build a sufficiently flexible interface to these tools without oversimplifying. The intended way of using the code is to look through everything and try to understand it. Everything should be quite straightforward and readable.
The code uses an OpenFOAM template folder, which is repeatedly copied into several working directories (depending on the number of workers specified), using UUIDs as names. Relevant parameters, such as the adjusted `blockMesh` dictionary and `U`, are then entered into these templates. As soon as a case completes, either because of errors or because it finished correctly, its folder is deleted. Since each OpenFOAM case is just a self-contained folder structure, differential evolution can easily be parallelized this way, and because each individual case runs fast, this seemed like a better choice than decomposing the domain across multiple cores. With the current method, there should be almost no overhead from parallelization.
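A rough sketch of that lifecycle is shown below; the folder names, parameterized files, and placeholder substitution are assumptions about the layout rather than the repo's exact implementation.

```python
# Sketch: copy the template case into a UUID-named folder, fill in parameters,
# and delete the folder again once the case has finished or failed.
import shutil
import uuid
from pathlib import Path

TEMPLATE_DIR = Path("template_case")  # assumed name of the OpenFOAM template folder
WORK_DIR = Path("runs")

def make_case(replacements: dict[str, str]) -> Path:
    """Copy the template and substitute placeholders in the relevant dictionaries."""
    case_dir = WORK_DIR / uuid.uuid4().hex
    shutil.copytree(TEMPLATE_DIR, case_dir)
    for relative in ("system/blockMeshDict", "0/U"):  # assumed parameterized files
        path = case_dir / relative
        text = path.read_text()
        for placeholder, value in replacements.items():
            text = text.replace(placeholder, value)
        path.write_text(text)
    return case_dir

def cleanup_case(case_dir: Path) -> None:
    """Remove the case folder as soon as it completes (successfully or not)."""
    shutil.rmtree(case_dir, ignore_errors=True)
```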
The code can post-process the top-n highest scoring airfoils so far, simulating each of them. The results can subsequently be rendered with a ParaView Python macro. First, run

`python main.py --custom`

This places the top-n runs under `custom_runs`. Follow this with

`python src/post_processing/post_process.py`

If all goes well, this places all the results under `results/renders`.
The latter isn't entirely reliable: ParaView includes its own Python distribution, based on Python 3.10. If you encounter errors here, it's best to either look at the code or open each `.foam` file under `custom_runs` individually.
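If you prefer to script it yourself, a minimal pvpython macro along the following lines should do; this is only a sketch, run with ParaView's own Python, and the `.foam` path, field name, and output path are assumptions.

```python
# Minimal pvpython sketch: load a .foam case, colour by velocity, save a render.
import sys
from paraview.simple import (
    ColorBy, GetActiveViewOrCreate, OpenDataFile, ResetCamera, SaveScreenshot, Show,
)

foam_file = sys.argv[1]  # path to a case.foam file under custom_runs

reader = OpenDataFile(foam_file)
view = GetActiveViewOrCreate("RenderView")
display = Show(reader, view)
ColorBy(display, ("POINTS", "U"))  # assumes the velocity field is available as point data
ResetCamera(view)
SaveScreenshot("results/renders/render.png", view)
```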
The repo includes an analysis notebook. This tracks performance over time and allows for selection of the best-performing airfoils.
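The core of that analysis amounts to something like the snippet below; the CSV name and column names are assumptions about the output format.

```python
# Sketch: track the best lift-to-drag ratio found so far and list the top airfoils.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("results.csv")  # assumed results file written by the optimizer

# Running maximum of the score over the course of the optimization.
plt.plot(df["lift_to_drag"].cummax())
plt.xlabel("evaluation")
plt.ylabel("best lift-to-drag so far")
plt.show()

# Parameters and scores of the ten best-performing airfoils so far.
print(df.nlargest(10, "lift_to_drag"))
```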
I started off with the default parameters from a case I found. I then tested some alternative values for one of the airfoils that previously caused problems, and it turns out there is almost no difference between the smaller values.
Case | Lift | Drag | Lift/drag ratio
---|---|---|---
Original | 0.516667 | -0.000555 | -931.013514
OpenFOAM default | 1.13645 | -0.0742927 | -15.295453
Lower NASA-bound | 1.1369 | -0.0743574 | -15.287780
Far lower value | 1.13665 | -0.0743269 | -15.290234
Given how little difference it makes, I will continue to use the OpenFOAM default.
Another interesting issue: some airfoils never converged with SIMPLE, with the solution instead oscillating indefinitely.
With that, we get an interesting population. Three of the top four are very different from one another: the best performer is a fairly standard airfoil, albeit a bit thick; the next best is almost bird-like; and the third has high camber instead. It's surprising to see such variation even after a fairly long run.
I added code to evaluate the lift-drag ratio as a function of the angle-of-attack (AoA). The fact that the airfoil performs best at 5° isn't very surprising; it's optimized for that point. For higher angles, performance rapidly decreases. I considered multi-objective optimization to create an airfoil that performs well over a wider range, but that would be very slow to run.
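Conceptually, the sweep just rotates the inlet velocity written into the template's `0/U`; the magnitude and helper below are illustrative assumptions, not the repo's actual code.

```python
# Sketch: inlet velocity components for a sweep over the angle of attack.
import math

U_MAG = 30.0  # assumed freestream speed

def inlet_velocity(aoa_deg: float) -> tuple[float, float, float]:
    """Velocity vector for a given angle of attack in a 2D case (zero z component)."""
    aoa = math.radians(aoa_deg)
    return (U_MAG * math.cos(aoa), U_MAG * math.sin(aoa), 0.0)

for aoa_deg in range(0, 21, 5):
    ux, uy, uz = inlet_velocity(aoa_deg)
    # In the real sweep these components are written into 0/U and the case re-run,
    # after which the lift-to-drag ratio at this angle is extracted.
    print(f"AoA {aoa_deg:2d} deg: U = ({ux:.3f} {uy:.3f} {uz:.3f})")
```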
One interesting aspect here: I ran this up to 45°, but did not obtain sufficiently converged solutions there. Examining the forces for the highest AoA where `simpleFoam` did not simply crash, we observe an oscillatory solution. I think this is basically the air detaching from the airfoil, resulting in a kind of Kármán vortex street, with SIMPLE unable to converge to a steady-state solution. Looking at the case in ParaView, we indeed see oscillations in the flow field.
This behavior previously caused a lot of issues with very small or negative drag values, which blow up the lift-to-drag ratio.
I am curious about potential model reduction: by predicting performance from the six inputs, a lot of time could be saved. If a rough prediction of which airfoils perform best is accurate enough, a simple machine learning model like a random forest could be used for an initial optimization stage. I doubt a model this simple would be sufficient, but it's an interesting avenue to explore.
After some attempts, it seems surprisingly good: I get MAEs of 1.5 to 5 on a small training set, even for the best-performing airfoils, where there is correspondingly less data available. This is technically a surrogate-model approach.
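The experiment boils down to the sketch below; the CSV and column names are assumptions about the results format.

```python
# Sketch: fit a random forest on the six airfoil parameters and check the
# mean absolute error of its lift-to-drag predictions on held-out data.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("results.csv")           # assumed results file
param_cols = [f"p{i}" for i in range(6)]  # hypothetical parameter column names
X, y = df[param_cols], df["lift_to_drag"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```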
After running it a bit longer, it gets better and better; I'm very surprised. We do have a fair amount of data, but this is spread out in six dimensions; the curse of dimensionality should be kicking in here, yet somehow, even with quite sparse data, it's doing well. However, I am not using a randomly sampled set; the data is all from an optimizer, so it's likely to be clustered around certain regions, effectively reducing dimensionality.
I am curious about training the random forest, or some other simple machine learning model, and optimizing over that instead, then verifying the results using OpenFOAM. I should also create a fully random training sample and evaluate that, to avoid clustering around certain types of airfoils. The difficulty there is that a lot of potential airfoils are simply not meshable or solvable because of clipping and other odd shapes, so obtaining a representative sample that is not clustered around the existing, realistic airfoils in the dataset may be difficult.