Evolution Strategies (ES) were originally proposed in the 1960s by two German students, Rechenberg and Schwefel, at the Technical University of Berlin (TUB). For a detailed introduction to the interesting history of ES, we strongly suggest referring to, e.g., [Beyer, 2023, GECCO], [Hansen et al., 2015], [Bäck et al., 2013], [Beyer&Schwefel, 2002], and especially the recollections of two early pioneers (e.g., [ACM SIGEVOlution, 2008], [ACM SIGEVOlution, 2010]). In this book, we mainly focus on modern ES versions and variants: Covariance Matrix Adaptation ES (CMA-ES/MA-ES), Natural ES (NES), Limited-Memory CMA (LM-CMA), OpenAI-ES, Persistent ES (PES), and Meta-/Distributed ES (DES). Although ES is one of the three earliest (1960s) families of evolutionary algorithms, together with genetic algorithms and evolutionary programming, it is still widely studied in the evolutionary computation community and still applied to approximate the global/local optima of many (though not all) challenging real-world problems. If novel and powerful ES variants emerge in the future, we expect to add them as soon as possible (reflecting the open nature of this book).
Self-adaptation is widely recognized as an essential feature of ES, and it underlies (at least) one powerful ES variant, Covariance Matrix Adaptation ES (CMA-ES).
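As a toy illustration of mutative self-adaptation (all function names and parameter settings below are our own choices for this sketch, not taken from any reference implementation), a minimal (1, λ)-ES can co-evolve the step size together with the search point:

```python
import numpy as np

def sphere(x):
    """Toy objective: minimize the sum of squares."""
    return float(np.sum(x ** 2))

def one_comma_lambda_es(f, x0, sigma0=1.0, lam=10, iters=200, seed=0):
    """(1, lambda)-ES with log-normal step-size self-adaptation.

    Each offspring first mutates the parent's step size sigma and then
    uses that mutated sigma to perturb the search point, so good step
    sizes are selected along with good solutions.
    """
    rng = np.random.default_rng(seed)
    n = len(x0)
    tau = 1.0 / np.sqrt(n)  # learning rate of the log-normal sigma mutation
    x, sigma = np.asarray(x0, dtype=float), sigma0
    for _ in range(iters):
        # Self-adaptation: mutate sigma per offspring, then the point.
        sigmas = sigma * np.exp(tau * rng.standard_normal(lam))
        xs = x + sigmas[:, None] * rng.standard_normal((lam, n))
        fitness = [f(xi) for xi in xs]
        best = int(np.argmin(fitness))
        # Comma selection: the best offspring replaces the parent.
        x, sigma = xs[best], sigmas[best]
    return x, f(x)

x_best, f_best = one_comma_lambda_es(sphere, x0=np.ones(5) * 3.0)
```

Note that sigma is never tuned by an outer rule here; selection alone drives it toward useful values, which is the core idea of self-adaptation.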
CMA-ES can serve as a strong baseline, if not the state of the art, on some BBO problems, such as offline design of biological sequences (where "for TFbind8, which has a relatively small search space, CMA-ES gave pretty good performances"), informative path planning (where "CMA-ES finds a good trade-off between exploration and exploitation, resulting in the best overall performance among non-learning solvers"), task-constrained planning for robot manipulators in confined environments, and reinforcement learning from human feedback (RLHF). On many (though not all) hard BBO problems, CMA-ES has shown very competitive (and sometimes even state-of-the-art) performance; see the following section for some of its representative applications.
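To make the core mechanism concrete, here is a heavily stripped-down CMA-ES sketch, keeping only the rank-μ covariance update and cumulative step-size adaptation (CSA); the rank-one update and the usual safeguards of the full algorithm are omitted, and all names and parameter settings are our own simplifications:

```python
import numpy as np

def cma_es_minimal(f, x0, sigma=0.5, iters=300, seed=1):
    """Minimal CMA-ES sketch: rank-mu covariance update + CSA step size."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    lam = 4 + int(3 * np.log(n))                 # population size
    mu = lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                 # positive recombination weights
    mu_eff = 1.0 / np.sum(w ** 2)                # variance-effective selection mass
    c_sigma = (mu_eff + 2) / (n + mu_eff + 5)    # CSA learning rate
    d_sigma = 1 + c_sigma                        # CSA damping (simplified)
    c_mu = min(1 - 1e-3, mu_eff / n ** 2)        # crude rank-mu learning rate
    chi_n = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n ** 2))  # E||N(0,I)||

    m = np.asarray(x0, dtype=float)
    C = np.eye(n)
    p_sigma = np.zeros(n)
    for _ in range(iters):
        A = np.linalg.cholesky(C)                # C = A A^T
        y = rng.standard_normal((lam, n)) @ A.T  # y_i ~ N(0, C)
        x = m + sigma * y                        # sample offspring
        order = np.argsort([f(xi) for xi in x])
        y_sel = y[order[:mu]]
        y_w = w @ y_sel                          # weighted recombination
        m = m + sigma * y_w                      # move the mean
        # CSA: whiten y_w with inv(A) so the path is N(0, I) under
        # random selection, then compare its length to its expectation.
        p_sigma = (1 - c_sigma) * p_sigma + \
            np.sqrt(c_sigma * (2 - c_sigma) * mu_eff) * (np.linalg.inv(A) @ y_w)
        sigma *= np.exp(c_sigma / d_sigma * (np.linalg.norm(p_sigma) / chi_n - 1))
        # Rank-mu update: shift C toward the selected mutation directions.
        C = (1 - c_mu) * C + c_mu * (y_sel.T * w) @ y_sel
    return m, f(m)

mean_best, f_cma = cma_es_minimal(lambda x: float(np.sum(x ** 2)), np.ones(5) * 3.0)
```

For real benchmarking, a maintained implementation such as Hansen's pycma package should be preferred over this sketch.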
- Surrogate-assisted CMA-ES: e.g., for computational chemistry.
- Synthesizability-constrained molecular design.
- Exoskeleton assistance: A team from Stanford University (Nature, 2022).
- Flying robots: A joint team from Delft University of Technology and Aix Marseille Université (Nature, 2022).
- Abstract art + Evolving collective AI: Google Research, Brain Team + University of Tokyo (2023).
- Robotic caregivers: A joint team from Carnegie Mellon University and Google X (2023). [ IEEE-LRA, 2022 + CVPR, 2020 + IEEE-LRA, 2020 + Autonomous Robots, 2019 + Autonomous Robots, 2019 + ICRA, 2018 + IROS, 2019 + ICRA, 2017 + ICRA, 2017 ]
- Stochastic trajectory optimization for reactive robot: A joint team from Idiap Research Institute, Ecole Polytechnique Federale de Lausanne (EPFL), and University of Oxford (ICRA, 2023).
- Adversarial robustness in discontinuous spaces: A joint team from Stanford University, University of Pennsylvania, Carnegie Mellon University, and Bosch Center for AI (WACV, 2023).
- Target specific peptide design: A joint team from Carnegie Mellon University and Ohio State University (2023).
- Dexterous manipulation: A joint team from Tencent Robotics X, University of Edinburgh, and University College London (2023).
- Path synthesis: Carnegie Mellon University (Journal of Mechanical Design, 2023).
- Biogeochemical model optimization: University of California Los Angeles (Geoscientific Model Development, 2023).
- Muscle-driven miniature robots: A joint team from University of Illinois at Urbana-Champaign, Northwestern University, Massachusetts Institute of Technology, University of Houston, Dalian University of Technology, and University of Southern California (Science Robotics, 2023).
- Adaptable materials: A joint team from University of Chicago and Yale University (PNAS, 2023).
- Image quality optimization in augmented reality: U.S. Food and Drug Administration (IEEE-TMI, 2023).
- Combination treatment optimization: A joint team from Carnegie Mellon University, University of Pittsburgh, and Harvard Medical School + Strategy Robot, Inc., Optimized Markets, Inc., Strategic Machine, Inc. (PLOS Computational Biology, 2021).
- Atmospheric methane: A joint team from Harvard University, California Institute of Technology, and Jet Propulsion Laboratory (PNAS, 2017).
- Optimization of an omnidirectional humanoid walk: University of Texas at Austin (AAAI, 2012 / AAMAS, 2011): A winning approach at the RoboCup 2011 3D simulation competition.
- Robotics: [Klar et al., 2023, ACM-TOCHI], [Wang et al., 2023, ICRA], [Wochner et al., 2022, CoRL], [Kim&Oh, 2021, NeurIPS], [Thatte&Geyer, 2016, IEEE-TBME], [Song&Geyer, 2012, ICRA]
- Computer Vision: [Tian et al., 2023, CVPR], [Huang et al., 2022, CVPR]
- Graphics: [Lee et al., 2022, ACM-TOG], [Geijtenbeek et al., 2013, ACM-TOG], [Stoll et al., 2010, ACM-TOG], [Wang et al., 2009, ACM-TOG], [Wampler&Popović, 2009, ACM-TOG]
- Operations Research: [Jacquet, 2023, EJOR]
- Language Models: [Cao et al., 2023, NSR], [Shen et al., 2023]
- Noisy Intermediate-Scale Quantum (NISQ) computing: [Hu et al., 2023] from George Mason University, Kent State University, and Oak Ridge National Laboratory
- Note that the alternative name "Evolutionary Strategies" is sometimes used in the literature.
- We do not include the original German-language references of ES in this book, since we consider only English sources.
- Bäck, T.H., Kononova, A.V., van Stein, B., Wang, H., Antonov, K.A., Kalkreuth, R.T., de Nobel, J., Vermetten, D., de Winter, R. and Ye, F., 2023. Evolutionary algorithms for parameter optimization—Thirty years later. Evolutionary Computation, 31(2), pp.81-122.
- Lee, U.H., Shetty, V.S., Franks, P.W., Tan, J., Evangelopoulos, G., Ha, S. and Rouse, E.J., 2023. User preference optimization for control of ankle exoskeletons using sample efficient active learning. Science Robotics, 8(83), p.eadg3705.
- Thamm, M. and Rosenow, B., 2023. Machine learning optimization of Majorana hybrid nanowires. Physical Review Letters, 130(11), p.116202.
- Lin, X., Yang, Z., Zhang, X. and Zhang, Q., 2023, July. Continuation path learning for homotopy optimization. In International Conference on Machine Learning (pp. 21288-21311). PMLR.
- Antonova, R., Yang, J., Jatavallabhula, K.M. and Bohg, J., 2023, March. Rethinking optimization with differentiable simulation from a global perspective. In Conference on Robot Learning (pp. 276-286). PMLR.
- Lange, R.T., 2023, July. Evosax: Jax-based evolution strategies. In Proceedings of ACM Companion Conference on Genetic and Evolutionary Computation (pp. 659-662).
- Wanzenböck, R., Buchner, F., Kovács, P., Madsen, G.K. and Carrete, J., 2023. Clinamen2: Functional-style evolutionary optimization in Python for atomistic structure searches. Computer Physics Communications, p.109065.
- Van der Meersch, V. and Chuine, I., 2023. Estimating process‐based model parameters from species distribution data using the evolutionary algorithm CMA‐ES. Methods in Ecology and Evolution.
- Giani, T., Magni, G. and Rojo, J., 2023. SMEFiT: A flexible toolbox for global interpretations of particle physics data with effective field theories. European Physical Journal C, 83(5), p.393.
- Li, Q., Zhang, C. and Woodland, P.C., 2023. Combining hybrid DNN-HMM ASR systems with attention-based models using lattice rescoring. Speech Communication, 147, pp.12-21.
- Li, A.C., Macridin, A., Mrenna, S. and Spentzouris, P., 2023. Simulating scalar field theories on quantum computers with limited resources. Physical Review A, 107(3), p.032603.
- Bonet-Monroig, X., Wang, H., Vermetten, D., Senjean, B., Moussa, C., Bäck, T., Dunjko, V. and O'Brien, T.E., 2023. Performance comparison of optimization methods on variational quantum algorithms. Physical Review A, 107(3), p.032407.
- Chen, C., Kuvshinov, A., Kruglyakov, M., Munch, F. and Rigaud, R., 2023. Constraining the crustal and mantle conductivity structures beneath islands by a joint inversion of multi‐source magnetic transfer functions. Journal of Geophysical Research: Solid Earth, 128(1), p.e2022JB024106.
- Real, E., Chen, Y., Rossini, M., de Souza, C., Garg, M., Verghese, A., Firsching, M., Le, Q.V., Cubuk, E.D. and Park, D.H., 2023. AutoNumerics-Zero: Automated discovery of state-of-the-art mathematical functions. arXiv preprint arXiv:2312.08472.
- Shen, M., Ghosh, S., Sattigeri, P., Das, S., Bu, Y. and Wornell, G., 2023. Reliable gradient-free and likelihood-free prompt tuning. arXiv preprint arXiv:2305.00593.
- Ciarella, S., Chiappini, M., Boattini, E., Dijkstra, M. and Janssen, L.M., 2023. Dynamics of supercooled liquids from static averaged quantities using machine learning. Machine Learning: Science and Technology, 4(2), p.025010.
- Tjanaka, B., Fontaine, M.C., Lee, D.H., Kalkar, A. and Nikolaidis, S., 2023. Training diverse high-dimensional controllers by scaling covariance matrix adaptation map-annealing. IEEE Robotics and Automation Letters.
- Tiboni, G., Arndt, K. and Kyrki, V., 2023. DROPO: Sim-to-real transfer with offline domain randomization. Robotics and Autonomous Systems, 166, p.104432.
- Liu, W., Leahy, K., Serlin, Z. and Belta, C., 2023, May. Robust multi-agent coordination from catl+ specifications. In American Control Conference (pp. 3529-3534). IEEE.
- Soni, R., Harnack, D., Isermann, H., Fushimi, S., Kumar, S. and Kirchner, F., 2023, October. End-to-end reinforcement learning for torque based variable height hopping. In IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 7531-7538). IEEE.
- Thomaser, A., Vogt, M.E., Kononova, A.V. and Bäck, T., 2023, March. Transfer of multi-objectively tuned CMA-ES parameters to a vehicle dynamics problem. In International Conference on Evolutionary Multi-Criterion Optimization (pp. 546-560). Cham: Springer Nature Switzerland.
- Eichner, T., Hülsenbusch, T., Palmer, G. and Maier, A.R., 2023. Evolutionary optimization and long-term stabilization of a white-light seeded two-stage OPCPA seed laser. Optics Express, 31(22), pp.36915-36927.
- Hu, Z., Wolle, R., Tian, M., Guan, Q., Humble, T. and Jiang, W., 2023, September. Toward consistent high-fidelity quantum learning on unstable devices via efficient in-situ calibration. In IEEE International Conference on Quantum Computing and Engineering (pp. 848-858). IEEE.
- Oh, C., Hwang, H., Lee, H.Y., Lim, Y., Jung, G., Jung, J., Choi, H. and Song, K., 2023. BlackVIP: Black-box visual prompting for robust transfer learning. In Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 24224-24235).
- Irwin, B., Haber, E., Gal, R. and Ziv, A., 2023, July. Neural network accelerated implicit filtering: Integrating neural network surrogates with provably convergent derivative free optimization methods. In International Conference on Machine Learning (pp. 14376-14389). PMLR. [ "These benefits include NNAIF’s ability to minimize structured functions of several thousand variables much more rapidly than well-known alternatives, such as Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and finite difference based variants of gradient descent (GD) and BFGS, as well as its namesake IF." ]
- Xie, D., Hu, P., Sun, X., Pirk, S., Zhang, J., Mech, R. and Kaufman, A.E., 2023. GAIT: Generating aesthetic indoor tours with deep reinforcement learning. In Proceedings of IEEE/CVF International Conference on Computer Vision (pp. 7409-7419). [ "GAIT-DrQ-v2 and GAITCURL performs generally better than CMA-ES, except the reward with CMA-ES is better than GAIT-DrQ-v2 in the scene of Apartment." ]
- De Croon, G.C., Dupeyroux, J.J., De Wagter, C., Chatterjee, A., Olejnik, D.A. and Ruffier, F., 2022. Accommodating unobservability to control flight attitude with optic flow. Nature, 610(7932), pp.485-490.
- Ollivier, Y., Arnold, L., Auger, A. and Hansen, N., 2017. Information-geometric optimization algorithms: A unifying picture via invariance principles. Journal of Machine Learning Research, 18(18), pp.1-65.
- Tsai, Y.Y., Xu, H., Ding, Z., Zhang, C., Johns, E. and Huang, B., 2021. Droid: Minimizing the reality gap using single-shot human demonstration. IEEE Robotics and Automation Letters, 6(2), pp.3168-3175.
- https://www.paulvicol.com/pdfs/ES-Single-Slides.pdf
- Beyer, H.G., 2023, July. What you always wanted to know about evolution strategies, but never dared to ask. In Proceedings of ACM Conference on Genetic and Evolutionary Computation Companion (pp. 878-894).
- Lange, R.T., Schaul, T., Chen, Y., Zahavy, T., Dalibard, V., Lu, C., Singh, S. and Flennerhag, S., 2023. Discovering evolution strategies via meta-black-box optimization. In International Conference on Learning Representations.
- Diouane, Y., Gratton, S. and Vicente, L.N., 2015. Globally convergent evolution strategies. Mathematical Programming, 152(1), pp.467-490.
- Schwefel, H.P., 2002. Deep insight from simple models of evolution. BioSystems, 64(1-3), pp.189-198.
- Schwefel, H.P., 1994. On the evolution of evolutionary computation. Computational Intelligence: Imitating Life, pp.116-124.
- Schwefel, H.P., 1993. Evolution and optimum seeking: The sixth generation. John Wiley & Sons, Inc..
- Schwefel, H.P., 1992. Natural evolution and collective optimum seeking. Computational Systems Analysis–Topics and Trends, pp.5-14.
- Schwefel, H.P., 1988. Evolutionary learning optimum-seeking on parallel computer architectures. In Systems Analysis and Simulation I (pp. 217-225). Springer, New York, NY.
- Schwefel, H.P., 1988. Collective intelligence in evolving systems. In Ecodynamics (pp. 95-100). Springer, Berlin, Heidelberg.
- Schwefel, H.P., 1984. Evolution strategies: A family of non-linear optimization techniques based on imitating some principles of organic evolution. Annals of Operations Research, 1(2), pp.165-167.
- Schwefel, H.P., 1981. Numerical optimization of computer models. John Wiley & Sons, Inc..
- Hansen, N., Arnold, D.V. and Auger, A., 2015. Evolution strategies. In Springer Handbook of Computational Intelligence (pp. 871-898). Springer, Berlin, Heidelberg.
- Bäck, T., Foussette, C. and Krause, P., 2013. Contemporary evolution strategies. Berlin: Springer.
- Pošík, P., Huyer, W. and Pál, L., 2012. A comparison of global search algorithms for continuous black box optimization. Evolutionary Computation, 20(4), pp.509-541.
- Beyer, H.G. and Schwefel, H.P., 2002. Evolution strategies–A comprehensive introduction. Natural Computing, 1(1), pp.3-52.
- Beyer, H.G., 2000. Evolutionary algorithms in noisy environments: Theoretical issues and guidelines for practice. Computer Methods in Applied Mechanics and Engineering, 186(2-4), pp.239-267.
- Rechenberg, I., 2000. Case studies in evolutionary experimentation and computation. Computer Methods in Applied Mechanics and Engineering, 186(2-4), pp.125-140.
- Rechenberg, I., 1984. The evolution strategy. A mathematical model of darwinian evolution. In Synergetics—from Microscopic to Macroscopic Order (pp. 122-132). Springer, Berlin, Heidelberg.