Memory usage only seems to grow until we hit an OOMKill event.
We lowered the autoscaler's memory utilization target, since it is computed from the average memory usage of all currently running pods, but this does not help spread the memory load across all the service pods.
During a scale-out event, I would expect a new pod to start (it does) and the memory load to eventually even out across all the running pods.
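For context, the kind of memory-based autoscaler described above would look roughly like this as an `autoscaling/v2` HorizontalPodAutoscaler. This is a minimal sketch, not our actual manifest; the names and threshold are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-hpa        # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service          # illustrative name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        # The HPA compares this value against the AVERAGE memory
        # utilization across all current pods, so a single pod near
        # its limit can be masked by several lightly loaded ones.
        averageUtilization: 60  # illustrative threshold
```

Note that scaling out only adds capacity for new work; it does not move memory already held by existing pods. If per-pod memory grows over time from retained state (caches, long-lived sessions, or a leak), lowering the average utilization target adds pods but will not, by itself, redistribute that memory.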