The compactor will delete blocks based on the configured retention period.
If a retention period is configured in the compactor, usage in object storage should not vary much from day to day once the retention period has passed, assuming the same active-series ingestion. A reasonable minimum is probably '1d' (one day). The idea of deleting based on size would require the compactor to actually know the current usage, which is not currently the case. Implementing that alone is a challenge when you consider that a tenant could be shared among multiple compactors, and on top of that it would have to support any S3-compatible storage. But even supposing we could do all of that, it would be very surprising for a tenant that has not changed its Cortex ingestion to suddenly discover it cannot see older metrics it could see before, because a limit was reached due to another tenant's usage change. It breaks multi-tenancy.
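For reference, the time-based retention is set through the limits configuration. A minimal sketch (the field name and the '30d' value here are just an example, so double-check them against the config reference for the Cortex version you run):

```yaml
limits:
  # Blocks older than this become eligible for deletion by the compactor.
  # 0 (the default) disables retention, i.e. blocks are kept forever.
  compactor_blocks_retention_period: 30d
```

As far as I know this limit can also be overridden per tenant via the runtime overrides, which keeps the multi-tenancy point above intact: each tenant's data expires on its own schedule instead of competing for a shared size budget.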
-
I'm running Cortex to store multi-tenant metrics, and I configured an S3-compatible storage system/back-end as long-term storage so the compactor can compact these metrics and make them available for long retention, or for when I need to query metrics -2h (or longer) back in time.
The storage bucket size/limit is 500GB and I don't want to make it any bigger. Because of this limitation, I'm seeing a lot of these quota errors from that S3/storage system:
My question here is: how do I configure Cortex so that the metrics/blocks stored in the S3/storage system follow a FIFO model, e.g. if the compactor wants to upload 10GB of the newest blocks, it replaces the oldest 10GB of blocks in the same bucket? Upsizing the bucket size/limit isn't an option for me, and I'm looking for solutions to get rid of these quota errors.
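To give a rough sense of scale (the 10GB/day figure is purely illustrative, reusing the number from the example above): if the compactor adds roughly 10GB of compacted blocks per day, keeping the bucket under its limit would mean effectively dropping anything older than about

$$ \frac{500\ \text{GB}}{10\ \text{GB/day}} = 50\ \text{days}, $$

so whatever mechanism enforced the 500GB cap would behave much like a ~50-day retention window.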
P/S: FWIW, I have another open question that is also about the compactor, but I don't think the two are related to each other.