Simulated annealing #29
As I don't mind which specific elements are used, it could benefit from ordering the vector of probabilities and looking for neighbours farther away in the ordered vector (a sketch of this neighbour move follows the code below).
The search space is big and it might be driven by a few combinations. The more combinations are possible, the less probable it is to find the one that really drives the length:
r <- c(0.5, 0.1, 0.3, 0.5, 0.25, 0.23)
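# v[i]: probability that the first i elements are absent and the remaining ones present (assuming independent memberships)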
v <- numeric(length(r)-1)
for (i in seq_len(length(r)-1)){
  v[i] <- prod(1-r[1:i], r[(i+1):length(r)])
}
vv <- length_set(r)
vv
which.max(v)
which.max(vv)
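A minimal sketch of one way the ordering idea could become a neighbour move for an annealing-style search: candidate combinations are encoded as logical vectors over r, and a neighbour flips the membership of an element a few positions away in the sorted probability vector. The neighbour() function, the k parameter and this encoding are illustrative assumptions, not part of the package:

neighbour <- function(state, ord, k = 1) {
  # pick a position in the ordered probability vector and flip the membership
  # of the element k positions away (clamped to the ends of the vector)
  i <- sample(seq_along(ord), 1)
  j <- min(max(i + sample(c(-k, k), 1), 1), length(ord))
  state[ord[j]] <- !state[ord[j]]
  state
}

r <- c(0.5, 0.1, 0.3, 0.5, 0.25, 0.23)
ord <- order(r)        # positions of r from least to most probable
state <- r >= 0.3      # an arbitrary starting combination
neighbour(state, ord, k = 2)

With the elements sorted, small moves change the overall probability only slightly, which is the kind of gradual exploration simulated annealing relies on.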
I found a new theory on Quanta Magazine about renormalization that might help (the link probably won't help directly), but it could be interesting if some readjusting of the elements could speed up or change the scale of the calculations; see SE for more pointers. In the same area, regularization is used; conceptually it might be more appropriate, judging from a quick look at the Wikipedia page. This issue is also related to #33: any other method will be faster, as currently it is doing just brute force and ends up with infinities or 0, as described in #30.
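One concrete way to change the scale of the calculations, and to avoid the zeros and infinities described in #30, is to keep the long products of probabilities in log space and only renormalise at the end. A minimal sketch of that idea, not the renormalization scheme from the article; the numbers are made up for illustration:

r <- runif(5000)              # many membership probabilities
prod(r)                       # underflows to 0 for vectors this long
sum(log(r))                   # the same quantity stays finite in log space

# Renormalising several such products: subtract the largest log value before
# exponentiating (the log-sum-exp trick), so exp() never over- or underflows.
log_w <- c(-1500, -1502, -1510)   # hypothetical log-probabilities
w <- exp(log_w - max(log_w))
w / sum(w)                        # renormalised weights without 0/0 or Inf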
When a large number of combinations is searched it might be suitable to use simulated annealing; it could be interesting to implement this.
It could be used to calculate the most probable size of each set if we want to avoid computing all the sizes, or it could be used to estimate each size of the sets by calculating only some of them.
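A minimal sketch of how this could look, assuming the target is the most probable combination (whose cardinality then suggests the most probable size) and that the memberships in r are independent; log_score(), anneal() and the cooling parameters are illustrative choices, not an existing API:

log_score <- function(included, r) {
  # log-probability of one combination: included elements contribute log(r),
  # excluded elements contribute log(1 - r)
  sum(log(r[included])) + sum(log(1 - r[!included]))
}

anneal <- function(r, iters = 10000, t0 = 1, cooling = 0.999) {
  state <- r > 0.5                       # start from a plausible guess
  best <- state
  temp <- t0
  for (k in seq_len(iters)) {
    cand <- state
    i <- sample(length(r), 1)
    cand[i] <- !cand[i]                  # neighbour: flip one membership
    delta <- log_score(cand, r) - log_score(state, r)
    if (delta > 0 || runif(1) < exp(delta / temp)) state <- cand
    if (log_score(state, r) > log_score(best, r)) best <- state
    temp <- temp * cooling
  }
  list(size = sum(best), members = which(best), log_prob = log_score(best, r))
}

r <- c(0.5, 0.1, 0.3, 0.5, 0.25, 0.23)
anneal(r)

Because only a trail of neighbouring states is visited, the cost grows with the number of iterations rather than with the 2^n possible combinations, which is what would make it attractive compared to the brute force discussed in #30.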