This demonstration provides a simple API implementing a Data Source, with mutual authentication via mTLS certificates and role-based authorisation based on information in the client certificate.
The server and clients require a private server Certificate Authority (CA), a private client CA, and server and client certificates in a `certs/` folder. These can be generated using the `certmaker.sh` script in the `scripts` directory.
```
mkdir certs
cd certs/
sh ../scripts/certmaker.sh
cd ..
```
The script generates the server's certificate with the hostname of the local computer, as output by `hostname`, for normal HTTPS validation.
The included Docker Compose file will bring up:
- the API resource server
- an Nginx proxy, which checks that the client certificate is valid and passes it to the resource server using the same header as AWS ALB (`x-amzn-mtls-clientcert`).
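The compose file ships with the repository; the sketch below only illustrates the general shape of such a setup, with service names, ports, and paths assumed rather than copied from the project:

```
# Illustrative sketch only: service names, ports and paths are assumptions.
services:
  proxy:
    image: nginx
    ports:
      - "8010:443"            # mTLS endpoint used by the curl examples below
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/certs:ro     # server bundle, key and client CA
    depends_on:
      - resource
  resource:
    build: ./resource         # the Python resource server
    expose:
      - "8000"                # only reachable from the proxy, not the host
```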
To run the server, leave this command running:
```
docker compose up
```
In some environments, depending on your Docker configuration, you may need to run this command with `sudo`:

```
sudo docker compose up
```
Any HTTP client which can present a client certificate and check a server certificate against a private CA can be used to fetch data. `curl` is used in this example, run on the same machine as the `docker compose up` command. In the `curl` commands, `--cert` and `--key` specify the client certificate, and `--cacert` the CA against which the server's certificate is verified. The URL is formed using the output of `hostname`.
To fetch the report, request `/api/v1/supply-voltage` with a `period` query parameter, for which any value can be used.
When the report is requested with a client certificate which encodes the `supply-voltage-reader` role's URL in the certificate's subject, the server knows that the client is a participant in the Trust Framework with the role that is allowed to use the data:
```
curl --cert certs/6-application-one-bundle.pem --key certs/6-application-one-key.pem \
    --cacert certs/1-server-ca-cert.pem \
    https://`hostname`:8010/api/v1/supply-voltage?period=2023-07
```
This returns:

```
{"period":"2023-07","voltages":[232,249,231,232,249,236,247,240,238,243,233,233,234,243,238,249]}
```
However, if a client certificate is used which does not have this role, then an error is returned:
```
curl --cert certs/7-application-two-bundle.pem --key certs/7-application-two-key.pem \
    --cacert certs/1-server-ca-cert.pem \
    https://`hostname`:8010/api/v1/supply-voltage?period=2023-07
```
The resource server generates a 401 response with the body:

```
{"detail":"Client certificate does not include role https://registry.estf.ib1.org/scheme/electricty/role/supply-voltage-reader"}
```
Finally, if the client presents a certificate which isn't signed by the client CA:
```
curl --cert certs/3-server-cert-bundle.pem --key certs/3-server-key.pem \
    --cacert certs/1-server-ca-cert.pem \
    https://`hostname`:8010/api/v1/info
```
Nginx returns a `400 Bad Request`. The resource server will not receive the request, enforcing membership of the Trust Framework at the transport level.
```
<html>
<head><title>400 The SSL certificate error</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>The SSL certificate error</center>
...
```
The resource server also implements an `info` API which returns information about the request and client certificate.
```
curl --cert certs/6-application-one-bundle.pem --key certs/6-application-one-key.pem \
    --cacert certs/1-server-ca-cert.pem \
    https://`hostname`:8010/api/v1/info
```
The `certmaker.sh` script generates two CAs and chains of certificates. The certificate trees are documented at the top of the file.
Each of the two private CAs uses an intermediate certificate, which then signs the server or client certificates. This is so the key of the root CA certificate can be kept offline for security, enabling the root certificate to have a long lifetime. The intermediate Issuer certificates have a shorter lifetime, and their keys are kept online.
The script creates certificate bundle files which combine a certificate and its intermediate Issuer into a single file. These are used in server and client software so that the intermediate is sent with the leaf certificate, providing the entire certificate chain to the root. Without the entire chain, the certificate cannot be verified.
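For illustration, a bundle is just the leaf certificate concatenated with its intermediate issuer, and the resulting chain can be checked with `openssl verify`; the file names below are assumptions rather than the exact names `certmaker.sh` produces:

```
# File names are illustrative; certmaker.sh may use different ones.
# A bundle is the leaf certificate followed by its intermediate issuer:
cat application-one-cert.pem client-issuer-cert.pem > application-one-bundle.pem

# Check the chain back to the private client root CA:
openssl verify -CAfile client-ca-cert.pem \
    -untrusted client-issuer-cert.pem \
    application-one-cert.pem
```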
The intermediate Issuer certificate is used to sign short lived server certificates. In a real deployment, these will be automatically renewed and installed, for example with an ACME client. In the event of compromise, the Issuer intermediate certificates can be easily replaced.
So that certificates can be validated in the demo environment, the script uses the output of `hostname` as the hostname in the server's certificate.
For ease of management, the client certificates have a long lifetime. They would typically be issued by the Directory through an API or user interface.
Client certificate Subject names contain:
- the organisation name,
- the OAuth client ID in the Common Name (CN), which is the client's URL in the Directory.
Custom x509 certificate extensions with Icebreaker One OIDs are used to encode:
- the Application which is using the data, as a Directory URL, and
- the client's Roles, as one or more Registry URLs.
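To see the Subject and extensions of one of the generated client certificates, an `openssl` invocation like the following can be used (the bundle file name is taken from the earlier examples):

```
# Print the Subject and all extensions, including the Icebreaker One OIDs,
# of the first (leaf) certificate in the client bundle:
openssl x509 -in certs/6-application-one-bundle.pem -noout -text
```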
The Nginx configuration file:
- sets the server certificate using the combined certificate and issuer file,
- sets the client Certificate Authority to the private client CA, and requires that any client presents a valid certificate signed by this CA,
- sets the `x-amzn-mtls-clientcert` header to the client certificate presented, and
- proxies the unencrypted request to the resource server.
By requiring and verifying the client certificate, Nginx ensures that only Trust Framework participants can make requests to the underlying resource server.
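A minimal sketch of the relevant directives follows; file paths, the listening port, and the upstream address are assumptions rather than the repository's actual configuration:

```
# Sketch only: paths, port numbers and the upstream name are assumptions.
server {
    listen 443 ssl;

    # Server certificate bundle (leaf + intermediate issuer) and its key
    ssl_certificate     /certs/server-cert-bundle.pem;
    ssl_certificate_key /certs/server-key.pem;

    # Require a client certificate signed by the private client CA
    ssl_client_certificate /certs/client-ca-cert.pem;
    ssl_verify_client on;
    ssl_verify_depth 2;

    location / {
        # Forward the URL-encoded client certificate PEM in the same header AWS ALB uses
        proxy_set_header x-amzn-mtls-clientcert $ssl_client_escaped_cert;
        proxy_pass http://resource:8000;
    }
}
```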
In a real deployment, the server would be configured to check for revocation of client certificates. The client does not need to check for revocation because the server certificates are replaced daily.
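In Nginx, for example, revocation checking against a CRL published by the client CA could be enabled with a single directive (the path here is an assumption):

```
# Check presented client certificates against a CRL from the client CA
# (path is an assumption for this sketch):
ssl_crl /certs/client-ca.crl;
```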
The Data Service API is implemented by a minimal server. Because most of the authentication is delegated to the Nginx proxy, it does not need to do anything to ensure that only a Trust Framework participant can call API endpoints.
However, it needs to verify that only participants with the right role can access the API. The `require_role()` method checks this by:
- Decoding the client certificate from the `x-amzn-mtls-clientcert` header. It can rely on this certificate being valid and signed by the private client CA.
- Decoding the roles from the certificate's Subject attribute.
- Raising an exception if the expected role is not present in the certificate.
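A minimal sketch of such a check is shown below, assuming a FastAPI-style application and the `cryptography` library; the way role URLs are read from the Subject (here, as organizationalUnitName values) is an assumption for the sketch, not necessarily how the demo's certificates encode them.

```
# Minimal sketch, not the project's actual implementation. Assumes a
# FastAPI-style application, the `cryptography` library, and that the proxy
# sends the URL-encoded client certificate PEM in the x-amzn-mtls-clientcert
# header.
from urllib.parse import unquote

from cryptography import x509
from cryptography.x509.oid import NameOID
from fastapi import HTTPException, Request

CLIENT_CERT_HEADER = "x-amzn-mtls-clientcert"


def extract_roles(cert: x509.Certificate) -> list[str]:
    # Assumption for this sketch: role URLs appear as organizationalUnitName
    # values in the Subject. The real certificates may encode them differently.
    return [
        str(attr.value)
        for attr in cert.subject.get_attributes_for_oid(
            NameOID.ORGANIZATIONAL_UNIT_NAME
        )
    ]


def require_role(request: Request, role_url: str) -> x509.Certificate:
    # Nginx has already verified this certificate against the private client
    # CA, so the header can be trusted to identify a Trust Framework participant.
    pem = unquote(request.headers[CLIENT_CERT_HEADER])
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))

    if role_url not in extract_roles(cert):
        raise HTTPException(
            status_code=401,
            detail=f"Client certificate does not include role {role_url}",
        )
    return cert
```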
To make changes to the resource server, run the proxy and resource server with `docker compose up`, as above. When you modify the Python files, the server will automatically be reloaded.
To set up a Python virtual environment, install the dependencies, and run the resource API tests, run:
```
cd resource
pipenv sync --dev
pipenv run python -m pytest
```