
Encryption / handling of sensitive config data? #3

Open
ChrisHardie opened this issue Aug 9, 2023 · 5 comments
Labels
question Further information is requested

Comments

@ChrisHardie

I'm new to OPNsense so maybe this is mitigated in some other way, but is it possible to encrypt the XML files before they are synced to the remote destination, assuming there's sensitive information contained in those files? Thank you.

@ndejong
Collaborator

ndejong commented Aug 10, 2023

Good question

I've considered adding client-side encryption of the config files before they are sent (over HTTPS) to the S3 provider; however, what threat model would this aim to solve?

The main S3 providers (e.g. AWS S3, GCP, Backblaze, and DigitalOcean) all offer server-side encryption-at-rest options, and the transport to those providers is protected by HTTPS.

Some of these also provide customer key-management-system (KMS) features that come with a deep and robust set of logging, audit, and abuse detection around the keys.

And that's just the KMS - those same S3 providers offer similar logging, audit, and abuse detection on the S3 object stores themselves.

The blunt answer is that if you are concerned about the safety of objects in your S3 bucket then you probably need to spend more time configuring and building up the protections available from the various S3 service providers. When all of that is implemented, S3 object storage from the main providers is extremely robust...

However, it certainly feels shallow to put it that way when getting S3, KMS, logging, audit, and alerting all set up "right" represents significant time, learning, and effort - I get it.

Putting aside the effort required to make S3 object storage from a tier-1 provider awesome, the layers of protection are currently:

  • encrypted in transit from the OPNsense host to the S3 provider by HTTPS
  • encrypted storage at the S3 provider (provided you choose an appropriate provider)
  • logging and audit of accesses to the keys required to access the S3 objects (provided you set that up, e.g. CloudTrail)
  • logging and audit of accesses to the S3 objects themselves (again, more self-responsibility in setting that up)

... and finally, ConfigSync is designed to perform a one-way outbound sync to the S3 provider, without being able to retrieve old config.xml files from inside the OPNsense UI. This design means that a compromise of the OPNsense host (a bad thing) does not also lead to the compromise of S3 keys with read permission to previous config.xml files. You are responsible for applying an appropriately restricted (write-only) policy to those credentials, but ConfigSync by design does not require any object read-access.

This means that users retrieving previous config.xml files need to use their own cloud-platform logins and interfaces to get to the S3 bucket, which in turn causes all the S3 provider logging and audit alarms to go off when those objects are accessed.
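
To make that write-only design concrete, here is a minimal sketch of attaching such a policy with boto3 - the bucket name, user name, and the assumption that ConfigSync only ever needs s3:PutObject are illustrative, not taken from the plugin's docs:

```python
import json
import boto3

# Hypothetical names - substitute your own bucket and sync user.
BUCKET = "example-opnsense-config-backups"
SYNC_USER = "opnsense-configsync"

# Write-only policy: the credential can create objects but never read them
# back, so a compromised OPNsense host cannot pull previous config.xml files.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject"],
        "Resource": [f"arn:aws:s3:::{BUCKET}/*"],
    }],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName=SYNC_USER,
    PolicyName="configsync-write-only",
    PolicyDocument=json.dumps(policy),
)
```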

So, is there a threat-model that client-side encryption would protect against?

Yes - in the case where an S3 bucket is inappropriately set up, or the cloud-provider S3 account is compromised, or the S3 provider is not a first-tier provider with at-rest encryption or the fancy logging, audit, and alerting - but I'd suggest a hapless engineer probably has bigger problems in those cases.

Open to further discussion

@ndejong ndejong added the question Further information is requested label Aug 10, 2023
@ChrisHardie
Author

I appreciate your thoughtful response here. You're right that the threat models where this extra bit of security would be useful are limited, though I always try to operate under the assumption that any remote server will eventually be compromised in a way that bypasses any claims about its owners or attackers being unable to decrypt what's stored there.

I guess I saw the option to encrypt settings exports in the core OPNsense UI, and that prompted me to wonder what, if any, encryption could happen automatically as a part of backup syncs as well. When I didn't see it mentioned anywhere, I thought I'd ask. I think it could be sufficient to add a note to the docs making it clear that users of this plugin are responsible for trusting and securing the storage destinations they use, and that they should understand what sensitive information is being exported as a part of that process. Maybe that's in the category of "duh" but at least it would come up in future searches like the one I did. :)

Thanks again.

@ndejong
Collaborator

ndejong commented Aug 11, 2023

All good.

Following on because I'm sure what you describe is a common thought train.

Yes, the OPNsense interface does provide an option to encrypt config.xml files when exporting directly from the OPNsense UI - however that is a different scenario, likely one where you are simply trying to back up the file locally.

In the scenario where you are exporting to a local filesystem, you probably do not have available all the transport encryption, storage encryption, secret-key management, access logging, access audit, and access control that modern S3 storage providers offer - and yes, without those things you really do want to make sure the exported config.xml file is not floating around on your local systems in clear text.

I can still see how a time-strapped engineer may recoil at the notion of putting together a well-functioning S3 setup, because yes, it takes time, effort, and understanding, and cloud providers like to charge for all that logging, audit, and key management.

Leaving this as a thought bubble then -

  • Might it be possible to implement the same openssl wrapper that OPNsense uses to encrypt config.xml files before they are exported (see the sketch below)? It would at least mean there is consistency with encrypted files from OPNsense.
  • If this feature were created it would mean the encryption secret is stored someplace on the OPNsense host, which in turn means the threat scenarios around that secret increase :(
  • What would the additional CPU load be like to encrypt every file just before it is uploaded - is that a concern?
  • OPNsense openssl wrapper - opnsense/core/src/opnsense/mvc/app/library/OPNsense/Backup/Base.php#L45
  • What happens when/if OPNsense decides to update their encrypt() function - how would we notice this and follow suit?
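
For a sense of what the first bullet might involve, a minimal sketch from Python, assuming OPNsense's encrypt() ultimately boils down to openssl with AES-256-CBC - the exact cipher, KDF parameters, and any surrounding header/footer tag format would need to be copied from Base.php, and (per the last bullet) could silently drift:

```python
import os
import subprocess

def encrypt_like_opnsense(config_xml: bytes, passphrase: str) -> bytes:
    # Assumed parameters only - verify against encrypt() in
    # OPNsense/Backup/Base.php before relying on compatibility; the real
    # wrapper may also add "---- BEGIN config.xml ----" style tag lines.
    env = dict(os.environ, CONFIG_PASS=passphrase)
    result = subprocess.run(
        ["openssl", "enc", "-e", "-aes-256-cbc", "-pbkdf2",
         "-base64", "-pass", "env:CONFIG_PASS"],
        input=config_xml, env=env, capture_output=True, check=True,
    )
    return result.stdout
```

On the CPU question: one openssl invocation per upload is likely negligible next to the HTTPS transfer itself, though that's worth measuring rather than asserting.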

@ndejong ndejong pinned this issue Aug 11, 2023
@nhairs

nhairs commented Aug 11, 2023

My 2c, as I was asked to take a look at this.

Thoughts

Scenarios

I can think of a few scenarios where you might want to ensure that you have client side encryption:

The storage environment is shared and outside the control of the person administering OPNsense, and that environment does not and will not be changed to provide appropriate controls.

In this case I would question why you are using this storage environment to back up your potentially sensitive config. Even having such an environment goes against most cloud providers' best practices, and you would probably get more bang for buck improving the management and security of the whole cloud environment than trying to compensate inside it.

The transport to the storage provider is not secure (e.g. MinIO over HTTP)

At first this seems like a scenario we might want to cover; however, taking a "secure defaults" / "secure by design" approach, I think it would be better to reject HTTP endpoints in the first place (potentially allowing an override of this check with a big red warning - see the sketch below). Again, probably better bang for buck setting up HTTPS rather than trying to compensate for the lack of it.
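
As a sketch of what that rejection could look like - the function and option names here are hypothetical, not something the plugin currently exposes:

```python
from urllib.parse import urlparse

def validate_endpoint(url: str, allow_insecure: bool = False) -> str:
    # Hypothetical check: refuse plain-HTTP endpoints unless the user
    # has explicitly opted in via an overt "allow insecure" option.
    scheme = urlparse(url).scheme.lower()
    if scheme == "https":
        return url
    if scheme == "http" and allow_insecure:
        # Explicitly allowed, but this is where the big red warning goes.
        return url
    raise ValueError(f"Refusing insecure or unrecognised endpoint: {url!r}")
```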

Server Side Encryption is not enforced

It is possible to set up a storage provider without encryption enabled by default or enforced (e.g. if the provider relies on each request to say what encryption to use), in which case encrypting the file beforehand can prevent leaks due to misconfiguration. However, again, at this point you should be spending time fixing the configuration rather than taking a "belt and braces" approach.

Secure at all costs

I'm sure there are instances where the OPNsense configuration needs extra protection to make sure it absolutely doesn't go walkabout. However, at that point I'd have the following questions:

  • Why are you using a cloud provider to store this data?
    • Especially given how advanced your usage of KMS can become.
  • If you are self-hosting because you don't trust cloud storage, then why are you self-hosting rather than using a cloud provider?
  • Is your management of client-side keys really better than a cloud provider's?
    • Even many (mature) tools that offer this form of client-side encryption use cloud KMS providers for secret management rather than local keys in the first place (see the envelope-encryption sketch after this list). The only exception would be a self-hosted option like HashiCorp Vault, but again, can you manage it better than a cloud provider?
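
To make the "use a cloud KMS rather than local keys" point concrete, here is a minimal envelope-encryption sketch with boto3 and the cryptography package - the key id is an assumption, and none of this is proposed plugin behaviour:

```python
import base64
import boto3
from cryptography.fernet import Fernet

def envelope_encrypt(plaintext: bytes, kms_key_id: str) -> tuple[bytes, bytes]:
    kms = boto3.client("kms")
    # KMS returns a fresh data key twice: Plaintext (the raw AES key) and
    # CiphertextBlob (the same key wrapped by the KMS master key).
    resp = kms.generate_data_key(KeyId=kms_key_id, KeySpec="AES_256")
    token = Fernet(base64.urlsafe_b64encode(resp["Plaintext"])).encrypt(plaintext)
    # Store the wrapped key alongside the ciphertext; unwrapping it later
    # requires a KMS call, which is logged and auditable (e.g. CloudTrail).
    return resp["CiphertextBlob"], token
```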

Setting up cloud environments is hard / expensive

> I can still see how a time-strapped engineer may recoil at the notion of putting together a well-functioning S3 setup, because yes, it takes time, effort, and understanding, and cloud providers like to charge for all that logging, audit, and key management.

> The blunt answer is that if you are concerned about the safety of objects in your S3 bucket then you probably need to spend more time configuring and building up the protections available from the various S3 service providers. When all of that is implemented, S3 object storage from the main providers is extremely robust.

☝️ +1

It could be suggested that the tool do some "preflight" checks to ensure that the back-end is configured appropriately - especially since the current implementation does not allow for injecting the ServerSideEncryption extra arguments. However, I feel like at this point you are better off using dedicated tools to check that your cloud environment is appropriately configured (a sketch of such a check follows anyway).
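
For example, a preflight check against AWS could look something like the sketch below; get_bucket_encryption is a real boto3 call, and uploads can request encryption explicitly via ExtraArgs={"ServerSideEncryption": "aws:kms"} on upload_file, but whether such checks belong in this tool is exactly the question:

```python
import boto3
from botocore.exceptions import ClientError

def bucket_has_default_encryption(bucket: str) -> bool:
    # True if the bucket has a default server-side encryption rule;
    # AWS raises a ClientError when none is configured.
    s3 = boto3.client("s3")
    try:
        s3.get_bucket_encryption(Bucket=bucket)
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            return False
        raise
```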

That said, over time this is potentially going to become less of an issue as cloud providers also move to secure defaults.

> Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. Starting January 5, 2023, all new object uploads to Amazon S3 are automatically encrypted at no additional cost and with no impact on performance. SSE-S3, which uses 256-bit Advanced Encryption Standard (AES-256), is automatically applied to all new buckets and to any existing S3 bucket that doesn't already have default encryption configured. The automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs, S3 Inventory, S3 Storage Lens, the Amazon S3 console, and as an additional Amazon S3 API response header in the AWS Command Line Interface (AWS CLI) and the AWS SDKs.
(link)

Leading a horse to water

This feels more like a documentation problem. Oftentimes a user may ask for ThingA to solve their problem, not realising that ThingB is a much better solution. I think it is better to lead users towards better practices than to support their (potentially) misconceived ideas. Documentation is key for this.

Suggestions

Whilst tools should be somewhat flexible in supporting a variety of use-cases, I also strongly believe that tools should encourage users towards better practices, even if that means purposely restricting functionality.

Without a compelling use-case where client side encryption is the best solution, I don't think this feature should be supported.

That said I think we can do more to lead users of this tool towards better practices.

If it were me I'd make the following changes:

  • Add an encryption section briefly discussing what users should be doing to protect their config files. This can potentially be quite brief and link to the main cloud providers' documentation.
    • This should include a call-out explaining why the client-side encryption option doesn't exist in this tool.
    • Where it doesn't already exist, you could provide sample configuration (e.g. AWS policies, like the write-only sketch earlier in this thread).
  • Prevent the use of HTTP URLs.
    • I'd consider adding an override option here to support things like development environments (similar to how boto3 lets you turn off certificate validation), but make sure it comes with appropriate warnings.
      Naming is important here - e.g. "Allow Insecure Connections", which is more overt than "Allow HTTP Connections".
  • Consider adding a dummy "client side encryption" config item to the settings that links to the documentation.

@nhairs

nhairs commented Aug 30, 2023

So I've been thinking about this more, and whilst I stand by basically everything I've said above, I'd like to add the following.

Whilst I believe that "experts" definitely should be leading / coercing users towards the better answers, I'm probably not expert enough to confidently assert that "there is unlikely to be a scenario where application-level encryption is required above cloud-native best practices".

I don't think it really changes my suggested changes above, but perhaps the documentation of "why application-level encryption is not provided" should also include "if you believe it should be provided, please comment on this issue with your use-case". It's not an emphatic "no", but it at least reduces the maintenance burden until there is a proven need.
