Investigating potential speedup (improving examples) #1
Hard to tell if the issue is simply that the example is too cheap to benefit. The real benefit of Ray (SCOOP in the past), and the reason to accept the network and future-spin-up overhead, is parallel access for heavy workloads. For big-data cases and heavy eval functions, especially ones with large arrays that can use Ray shared-memory objects, I'd expect the speedup to be much greater. Need to set up an example for this, or adjust the current examples to be genuinely expensive (change the fitness calc to a really costly operation) to show off batching via Ray; a rough sketch of that idea follows the attached files below. This was tested with the new manager update that does not use a remote manager (so one less worker consuming the resources given to Ray) to fire off remote batched map actors.
symbreg_ray.py
onemax_ray.py
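A minimal sketch of the shared-memory idea, assuming a hypothetical `HeavyEvaluator` actor rather than the repo's actual ray_map.py implementation: the large array goes into the Ray object store once via `ray.put`, and a small ActorPool batches evaluations so each worker reads the shared array instead of receiving a copy per individual.

```python
import numpy as np
import ray
from ray.util import ActorPool

ray.init(ignore_reinit_error=True)

# Put the large training data into the object store once; workers read it
# zero-copy instead of re-pickling it for every individual.
data_ref = ray.put(np.random.rand(2_000_000))

@ray.remote
class HeavyEvaluator:
    def __init__(self, data):
        # Top-level ObjectRef arguments are resolved before __init__ runs.
        self.data = data

    def eval_batch(self, individuals):
        # Stand-in for an expensive, big-data fitness calculation.
        return [(float(np.dot(self.data, self.data) * sum(ind)),)
                for ind in individuals]

pool = ActorPool([HeavyEvaluator.remote(data_ref) for _ in range(4)])

population = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1]]
batches = [population[i::4] for i in range(4)]   # one batch per worker
fitnesses = list(pool.map(lambda a, b: a.eval_batch.remote(b), batches))
```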
I need to make better decisions about each example's intent. Part of the problem: arbitrarily increasing their complexity with loops just breaks the idea behind each example. Also, my onemax is not really onemax, since it prevents a 1 in the first index of the individual. That will confuse people looking for meaningful examples to learn from, especially when comparing against the original examples. Symbreg is nearly OK, but it can be slowed down more meaningfully (to show the speedup on multiple CPUs) by adding big data; then we can use Ray shared memory to improve the example further without making it senseless. So, todo.
Updated the symbreg examples with heavier loads and shared-memory items to better illustrate the speedups. Attempted to convert the onemax_island_scoop example, but its recursive approach caused a ton of actors to spawn, which broke things. Need to find a way to handle that; one possible approach is sketched below.
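One way to contain the actor blow-up, as a sketch only (names like `EvalWorker`, `get_pool`, and `ray_map` are illustrative, not the actual ray_map.py API): build the ActorPool lazily exactly once and have every map call reuse it, so recursive invocations never create new actors.

```python
import ray
from ray.util import ActorPool

@ray.remote
class EvalWorker:
    def eval_batch(self, func, batch):
        # Evaluate one contiguous chunk of individuals with the given fitness function.
        return [func(ind) for ind in batch]

_pool = None

def get_pool(num_workers=4):
    """Create the worker actors exactly once and reuse them afterwards."""
    global _pool
    if _pool is None:
        ray.init(ignore_reinit_error=True)
        _pool = ActorPool([EvalWorker.remote() for _ in range(num_workers)])
    return _pool

def ray_map(func, individuals, num_workers=4):
    pool = get_pool(num_workers)
    inds = list(individuals)
    size = max(1, -(-len(inds) // num_workers))      # ceil division for chunking
    batches = [inds[i:i + size] for i in range(0, len(inds), size)]
    # ActorPool.map preserves the order of the submitted batches,
    # so flattening restores the original individual order.
    results = pool.map(lambda a, b: a.eval_batch.remote(func, b), batches)
    return [fit for batch in results for fit in batch]
```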
Testing the two examples, comparing the standard map against 4 Ray workers. The built-in map is much faster on the default examples (as expected, presumably due to Ray's overhead).
Arbitrarily adding a loop to the eval function so that each individual's evaluation takes more time. Testing whether batching out via a Ray ActorPool improves on the standard Python map, to confirm the implementation in ray_map.py is not inefficient. A rough timing sketch of that comparison follows.
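A sketch of that kind of comparison, with made-up names (`slow_eval`, `Evaluator`) and an arbitrary busy-loop count: the eval function is artificially slowed, then timed once through Python's built-in map and once through a 4-worker ActorPool.

```python
import time
import ray
from ray.util import ActorPool

def slow_eval(ind):
    total = 0.0
    for _ in range(200_000):          # arbitrary busy loop to inflate per-eval cost
        total += sum(ind)
    return (total,)

population = [[1, 0, 1, 0, 1]] * 64

t0 = time.perf_counter()
serial = list(map(slow_eval, population))
t_serial = time.perf_counter() - t0

ray.init(ignore_reinit_error=True)

@ray.remote
class Evaluator:
    def eval_batch(self, batch):
        return [slow_eval(ind) for ind in batch]

pool = ActorPool([Evaluator.remote() for _ in range(4)])
chunk = len(population) // 4
batches = [population[i:i + chunk] for i in range(0, len(population), chunk)]

t0 = time.perf_counter()
parallel = [f for b in pool.map(lambda a, x: a.eval_batch.remote(x), batches)
            for f in b]
t_parallel = time.perf_counter() - t0

print(f"builtin map: {t_serial:.2f}s  ray ActorPool: {t_parallel:.2f}s")
```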