Simulated annealing #29

Open
llrs opened this issue Dec 11, 2019 · 3 comments
llrs commented Dec 11, 2019

When a large number of combinations is searched, simulated annealing might be suitable; it could be interesting to implement it.

It could be used to find the most probable size of each set if we want to avoid computing all the sizes. Or it could be used to estimate the size distribution of the sets by computing only some of them.
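A minimal sketch of what this could look like, not part of the package (the names `logp` and `anneal` are hypothetical): anneal over combinations of elements, where a neighbour flips one element's membership, and keep the best combination seen.

```r
set.seed(42)
r <- c(0.5, 0.1, 0.3, 0.5, 0.25, 0.23)  # membership probabilities

# log-probability of a particular inclusion pattern (logical vector)
logp <- function(keep, r) sum(log(ifelse(keep, r, 1 - r)))

anneal <- function(r, iters = 2000, temp0 = 1) {
  keep <- runif(length(r)) < r          # random starting combination
  best <- keep
  for (i in seq_len(iters)) {
    temp <- temp0 / i                   # simple cooling schedule
    cand <- keep
    j <- sample.int(length(r), 1)       # neighbour move: flip one element
    cand[j] <- !cand[j]
    delta <- logp(cand, r) - logp(keep, r)
    # always accept improvements; accept worse moves with prob exp(delta/temp)
    if (delta > 0 || runif(1) < exp(delta / temp)) keep <- cand
    if (logp(keep, r) > logp(best, r)) best <- keep
  }
  best
}

best <- anneal(r)
sum(best)  # size of the most probable combination found
```

This searches for a single most probable combination instead of enumerating all 2^n of them; estimating the full size distribution would need repeated runs or a sampler on top.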

llrs commented Dec 11, 2019

As I don't mind which specific elements are used, it could benefit from ordering the vector of probabilities and looking for neighbours farther along the ordered vector.
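A sketch of that neighbour scheme (hypothetical, reusing the vector `r` of membership probabilities from the example): once the probabilities are ordered, "keep the k most probable elements" is a single state, so a neighbour is just k plus or minus one, and the search runs over sizes instead of over 2^n element subsets.

```r
r <- c(0.5, 0.1, 0.3, 0.5, 0.25, 0.23)

# order the membership probabilities, most probable elements first
r_sorted <- r[order(r, decreasing = TRUE)]

# a state is "keep the k most probable elements";
# a neighbour moves one step along the ordered vector, clamped to [0, n]
neighbour <- function(k, n) max(0L, min(n, k + sample(c(-1L, 1L), 1)))

k <- 3L
k_next <- neighbour(k, length(r_sorted))
```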

llrs commented Dec 11, 2019

The search space is big and it might be driven by a few combinations. The more combinations there are, the less probable it is to find the one that really drives the length:

r <- c(0.5, 0.1, 0.3, 0.5, 0.25, 0.23)
# probability that the first i elements are absent and the rest are present
v <- numeric(length(r) - 1)
for (i in seq_len(length(r) - 1)) {
    v[i] <- prod(1 - r[1:i], r[(i + 1):length(r)])
}
vv <- length_set(r)  # size distribution as computed by the package
vv
which.max(v)
which.max(vv)

llrs commented Sep 17, 2020

I found a new theory on Quanta Magazine that might help: renormalization (the link probably won't help), but it could be interesting if some readjustment of the elements could speed up or change the scale of the calculations.

See SE for more pointers.

In the same area, regularization is used. Conceptually it might be more appropriate, judging from just a glance at the Wikipedia page.

Also, this issue is related to #33: any other method will be faster, as it is currently doing just brute force and ends up with infinities or 0, as described in #30.
