
Evaluate database storage parameters #954

Open
@KlaasH

Description


On 3/8/24 the database ran out of space and became unavailable. I increased the allocation by hand, in the console, from 120GB to 140GB and enabled storage auto-scaling. That solved the immediate problem, but as a follow-up, we should:

  • Check that the storage is being used as we expect, i.e. that storage needs have grown because of the space required by new analysis runs, not because cruft or temporary data is being left around.
  • Decide what parameters we want, specifically
    • Whether to keep auto-scaling enabled
    • What max limit to set, if any
    • Whether to switch from gp2 to gp3 storage
  • Change parameters in the Terraform/tfvars to match those decisions
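Once we've made those decisions, the change should be a small diff to the instance resource. A minimal sketch of the relevant `aws_db_instance` arguments (resource and variable names here are hypothetical; the actual names in our Terraform will differ):

```hcl
resource "aws_db_instance" "database" {
  # ...existing engine/instance settings unchanged...

  allocated_storage     = 140    # matches the manual bump from 120GB
  max_allocated_storage = 200    # any value > allocated_storage enables auto-scaling; 0 disables it
  storage_type          = "gp3"  # gp3 decouples IOPS/throughput from allocated size, unlike gp2
}
```

Note that `max_allocated_storage` is both the auto-scaling switch and the cap, so the "keep auto-scaling?" and "what max limit?" decisions collapse into choosing this one value.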

Note: if we decide to turn off storage auto-scaling, we should adjust the threshold on the CloudWatch alarm so it goes off at a higher value. If we decide to stick with auto-scaling, an alarm isn't as critical, though it would still make sense to have one that fires before we hit the max limit.
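For reference, an alarm on the RDS `FreeStorageSpace` metric could look roughly like this in Terraform (alarm name, threshold, and the SNS topic are placeholder choices, not decisions):

```hcl
resource "aws_cloudwatch_metric_alarm" "db_free_storage" {
  alarm_name          = "database-free-storage-low"
  namespace           = "AWS/RDS"
  metric_name         = "FreeStorageSpace"
  statistic           = "Minimum"
  comparison_operator = "LessThanThreshold"
  threshold           = 20 * 1024 * 1024 * 1024  # bytes; e.g. alarm below 20GB free
  period              = 300
  evaluation_periods  = 1

  dimensions = {
    DBInstanceIdentifier = aws_db_instance.database.identifier
  }

  # alarm_actions = [aws_sns_topic.alerts.arn]  # hypothetical notification target
}
```

If auto-scaling stays on, the same resource works; we'd just pick a threshold that trips comfortably before `max_allocated_storage` is reached.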
