Golang implementation of the Quantum Inspired Optimizer developed by @ferwanguer.
You can find the original Python implementation here.
Library for Quantum Inspired optimization in Go
GoQEA is an extensive research library for Quantum Inspired, hyper-normal-based optimization in Golang.
It is intended for the solution of global optimization problems where conventional genetic algorithms or PSO yield sub-optimal results. The current implementation of the algorithm allows for fast deployment on any optimization problem, regardless of the non-linearity of its constraints or the complexity of the cost function. The library has the following features:
- High level module for Quantum Inspired optimization ✔️
- Built-in set of objective cost functions to test the optimization algorithm ✔️
- Capacity to implement non-linear constraints ❌
- Capacity to implement integer-only variables ❌
GoQEA provides a high-level implementation of the proposed Quantum Inspired algorithm that allows fast setup and usage.
It aims to be user-friendly despite the non-trivial nature of its hyper-parameters. We now show the optimization process of a paraboloid (Sphere function) of input dimension n, centered at the vector [3.8, 3.8, 3.8, 3.8, ...].
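In other words, the objective being minimized here is the quadratic

$$ f(\mathbf{x}) = \sum_{i=1}^{n} (x_i - 3.8)^2 $$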
The optimizer setup is as follows:
```go
package main

import (
	"GoQEA/goqea"
	"log"
	"time"
)

func main() {
	// Use case parameters
	const n_dims int = 10

	var upper_bounds [n_dims]float64
	var lower_bounds [n_dims]float64
	for i := 0; i < n_dims; i++ {
		upper_bounds[i] = 5.12
		lower_bounds[i] = -5
	}

	// Hyper-parameters (see the rule of thumb below)
	var mu_scaler float64 = 20
	var sigma_scaler float64 = 1.003
	var elitist_level int = 6
	var n_iterations int = 4000
	var n_samples int = 200
	// ------------------------------

	qea := goqea.NewQuantumEvAlgorithm(n_dims, sigma_scaler, mu_scaler, elitist_level,
		upper_bounds[:], lower_bounds[:], goqea.F)

	start := time.Now()
	solution := qea.Training(n_iterations, n_samples)
	elapsed := time.Since(start)

	log.Printf("Took %s\n", elapsed)
	log.Printf("Solution found: %v", solution)
}
```
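For reference, here is a minimal, standalone sketch of a Sphere-type cost function like the one described above, centered at 3.8 in every dimension. The `sphere` name and the `func([]float64) float64` signature are assumptions made for illustration, not the library's confirmed objective API; the setup above relies on the built-in objective goqea.F instead.

```go
package main

import "fmt"

// sphere is a hypothetical, standalone Sphere cost function centered at 3.8
// in every dimension: f(x) = sum_i (x_i - 3.8)^2. The func([]float64) float64
// signature is only an assumption about what an objective passed to the
// optimizer might look like; the setup above uses the built-in goqea.F.
func sphere(x []float64) float64 {
	sum := 0.0
	for _, xi := range x {
		d := xi - 3.8
		sum += d * d
	}
	return sum
}

func main() {
	// The global minimum is 0, reached at [3.8, 3.8, ..., 3.8].
	fmt.Println(sphere([]float64{3.8, 3.8, 3.8})) // prints 0
}
```

With the setup above, the optimizer should converge towards a solution close to [3.8, 3.8, ..., 3.8], where this objective reaches its global minimum of 0.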
The main limitation that the user may encounter when using this optimizer is the non-trivial character of its hyper-parameters. The critical hyper-parameters are the ones that regulate the update of the hyper-normal distribution after the evaluation of the sampled population, namely mu_scaler and sigma_scaler.
More information about the nature of these parameters, their justification, and experimental results is to be released in the future.
The recommended rule of thumb is the following:

- mu_scaler ~ 20 (it is not as critical for performance)
- sigma_scaler ~ 1 + 1/(10*n), where n is the number of input dimensions of the problem
The key concept to bear in mind is that, as the dimensionality of the problem increases, it is necessary to make the algorithm more "cautious", thereby minimizing the difference between the distributions before and after each update. In practical terms, as the complexity of a given problem increases, sigma_scaler must tend to ~1.
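As a quick illustration of this rule of thumb, the sketch below derives starting values from the problem dimension. The helper name recommendedHyperParams is hypothetical and not part of the GoQEA API.

```go
package main

import "fmt"

// recommendedHyperParams is a hypothetical helper (not part of GoQEA) that
// applies the rule of thumb above: mu_scaler ~ 20 and
// sigma_scaler ~ 1 + 1/(10*n), where n is the number of input dimensions.
func recommendedHyperParams(nDims int) (muScaler, sigmaScaler float64) {
	muScaler = 20
	sigmaScaler = 1 + 1/(10*float64(nDims))
	return muScaler, sigmaScaler
}

func main() {
	mu, sigma := recommendedHyperParams(10)
	fmt.Println(mu, sigma) // 20 1.01; sigma_scaler tends to 1 as n grows
}
```

Note how sigma_scaler shrinks towards 1 as the dimensionality grows, which matches the "cautious" behaviour described above.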
The Golang version has been implemented using pure Go with some features of the Gonum library. Both versions have been compared using 8 different experiments with an increasing number of dimensions, which is the most influential parameter for the algorithm's performance. You can check the detailed results in the Benchmark.xlsx file.
The following image shows the mean time consumed for 5 executions of each experiment in both languages:
Go outperforms Python in all of them, ranging from x7.35 faster with a small number of dimensions to a stabilization around x1.4 with a higher number. These results are shown in the following image: