Quick update: I implemented a hyperparameter optimisation approach and trained about a thousand models overnight on a GPU, with some pretty good results.
the best model ended up with a prediction error rate of ~0.89%
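For context on the sweep, a minimal random-search loop is one way to run a thousand-model sweep like this; the sketch below uses a hypothetical search space and a caller-supplied `train_and_eval` function, since the actual hyperparameters and training code aren't in this thread:

```python
import random

# Placeholder search space -- the real hyperparameter names and ranges
# from the sweep aren't shown here; these are illustrative only.
SEARCH_SPACE = {
    "learning_rate": lambda: 10 ** random.uniform(-5, -2),
    "hidden_units": lambda: random.choice([64, 128, 256, 512]),
    "dropout": lambda: random.uniform(0.0, 0.5),
}

def sample_config():
    """Draw one random configuration from the search space."""
    return {name: draw() for name, draw in SEARCH_SPACE.items()}

def random_search(train_and_eval, n_trials=1000, seed=0):
    """Train n_trials models on random configs; keep the lowest error rate.

    train_and_eval(config) -> error rate, supplied by the caller.
    """
    random.seed(seed)
    best_config, best_error = None, float("inf")
    for _ in range(n_trials):
        config = sample_config()
        error = train_and_eval(config)
        if error < best_error:
            best_config, best_error = config, error
    return best_config, best_error
```

With ~1000 trials overnight this kind of loop is trivially parallelisable across GPUs too, since each trial is independent.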
ML should be able to crush this problem, and this was only trained on 100K training samples
so scaling laws should apply: if we scale the dataset up to 1M and then 10M samples, we should see further performance improvements
the only bottleneck is that it takes about 35 hours to generate 1M samples from the simulator
I'm happy to kick off generating 1M samples overnight, which will give us a very solid experimental dataset
But some speed improvements would be very welcome, since at the current rate it would take ~350 hours to get to 10M training samples
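If the simulator can be seeded per sample, generation is embarrassingly parallel, so one cheap speed-up is fanning calls out across CPU cores. A minimal sketch, where `generate_sample` is a placeholder for one simulator call (the real simulator isn't shown in this thread):

```python
import os
import random
from multiprocessing import Pool

def generate_sample(seed):
    # Placeholder for a single simulator call -- deterministic per seed
    # so the dataset is reproducible and workers don't share RNG state.
    rng = random.Random(seed)
    return [rng.random() for _ in range(4)]

def generate_dataset(n_samples, workers=None):
    """Spread n_samples simulator calls across worker processes."""
    workers = workers or os.cpu_count()
    with Pool(workers) as pool:
        # chunksize batches work to cut inter-process overhead
        return pool.map(generate_sample, range(n_samples), chunksize=256)
```

On an N-core box this should cut wall-clock time roughly N-fold, assuming the simulator is CPU-bound and each call is independent; if it's a single-process binary, running multiple instances with disjoint seed ranges achieves the same thing.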