Description
Hi, this is a question, not an issue.
I have a bunch of features that I track over time. I am feeding them into:

```python
algo = rpt.Pelt(model=model, min_size=1, jump=1)
algo.fit(signal)
result = algo.predict(pen=p)  # result of change point detection
```

Here, `signal` is (for example) a 500x16 array (timepoints x features). The features themselves live on pretty different scales, so I thought that some kind of scaling/normalization (for example via https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.scale.html#sklearn.preprocessing.scale) could make sense. Now I wonder, though, how different costs would be affected by that. In the example I am attaching below, you can see the normalized signal for the L1 and L2 norms; change points are depicted with dashed lines. You can see that there are some obvious misses (calibrating the penalty helps sometimes, but it is a finicky process).
Should normalization be skipped altogether, or is there a better alternative cost for this kind of signal?
