This repository presents the reference implementation of a product data server within the product data network buildingenvelopedata.org. Before deploying this repository, the sibling project machine can be used to set up the target machine.
The API specification of the product data servers is available in the repository api. There is also a visualization of the API of a product data server.
This repository is deployed as the product data server of TestLab Solar Facades of Fraunhofer ISE.
If you have a question for which you don't find the answer in this repository, please raise a new issue and add the tag `question`! All ways to contribute are presented in CONTRIBUTING.md. The basis for our collaboration is described in our Code of Conduct.
- Open your favorite shell, for example, the good old Bourne Again Shell, aka `bash`, the somewhat newer Z shell, aka `zsh`, or the shiny new `fish`.
- Install Git by running `sudo apt install git-all` on Debian-based distributions like Ubuntu, or `sudo dnf install git` on Fedora and closely related RPM-Package-Manager-based distributions like CentOS. For further information see Installing Git.
- Clone the source code by running `git clone git@github.com:building-envelope-data/database.git` and navigate into the new directory `database` by running `cd ./database`.
- Initialize, fetch, and checkout possibly-nested submodules by running `git submodule update --init --recursive`. An alternative would have been passing `--recurse-submodules` to `git clone` above.
- Prepare your environment by running `cp ./.env.sample ./.env`, `cp ./frontend/.env.local.sample ./frontend/.env.local`, and adding the line `127.0.0.1 local.solarbuildingenvelopes.com` to your `/etc/hosts` file.
- Install Docker Desktop and GNU Make.
- List all GNU Make targets by running `make help`.
- Generate and trust a self-signed certificate authority and SSL certificates by running `make ssl`. If you are working locally on the metabase and the database and you need them to communicate over HTTPS, then instead of running `make ssl`, make the `CERTIFICATE_AUTHORITY_*` variable values in the `.env` file match the ones from the metabase (these variables match in the `.env.sample` files), copy the certificate authority files from the directories `./ssl`, `./backend/ssl`, and `./frontend/ssl` of the metabase project into the respective directories of the database project (if the repositories reside alongside each other, by running `mkdir ./ssl ./backend/ssl ./frontend/ssl && cp ../metabase/ssl/ca.* ./ssl && cp ../metabase/backend/ssl/ca.* ./backend/ssl && cp ../metabase/frontend/ssl/ca.* ./frontend/ssl`), and run the command `make generate-ssl-certificate`.
- Generate JSON Web Token (JWT) encryption and signing certificates by running `make jwt-certificates`.
- Generate and export a GnuPG key with the passphrase `${GNUPG_SECRET_SIGNING_KEY_PASSPHRASE}` set in the `./.env` file to the file `./backend/src/gpg-keys/<KEY_FINGERPRINT>.gpg` by running `make NAME=${name} COMMENT=${comment} EMAIL=${email} gpg` with your information filled in, for example, `make NAME="Anna Smith" COMMENT=first EMAIL=anna.smith@fraunhofer.de gpg`. Then copy the key's fingerprint, which is output by the command, and set it as the value of the `GNUPG_SECRET_SIGNING_KEY_FINGERPRINT` variable in the `./.env` file. Instead of using the GNU Make target `gpg`, you may
  - install GnuPG as described on Download GnuPG or GnuPG Package Repositories,
  - generate a GnuPG key by running `gpg --full-generate-key` and answering the prompts,
  - list the keys for which you have both a public and a secret key by running `gpg --list-secret-keys --keyid-format=long`,
  - identify and copy the long form of the fingerprint of the key you have just created,
  - remember the key in the variable `fingerprint` by running `fingerprint=<KEY_FINGERPRINT>` with `<KEY_FINGERPRINT>` replaced by the copied fingerprint,
  - create the directory to save the secret key to, if it does not exist yet, by running `mkdir --parents ./backend/src/gpg-keys`,
  - export the armored secret key to the file `./backend/src/gpg-keys/${fingerprint}.gpg` by running `gpg --armor --export-secret-keys ${fingerprint} > ./backend/src/gpg-keys/${fingerprint}.gpg`, and
  - set the value of the variable `GNUPG_SECRET_SIGNING_KEY_FINGERPRINT` in the `./.env` file to the remembered fingerprint in your favorite editor.
- Create the PostgreSQL database and schema by running `make createdb migrate`.
- Start all services and follow their logs by running `make up logs`.
- To see the web frontend navigate to `https://local.solarbuildingenvelopes.com:5051` in your web browser, to see the GraphQL API navigate to `https://local.solarbuildingenvelopes.com:5051/graphql/`, and to see sent emails navigate to `https://local.solarbuildingenvelopes.com:5051/email/`. Note that the port is `5051` by default. If you set the variable `HTTPS_PORT` within the `./.env` file to some other value, you need to use that value instead within the URLs. A consolidated sketch of the set-up commands above follows this list.
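The following is a minimal sketch that strings the set-up commands above together into one session. The GnuPG name, comment, and email address are the example values from above, and the printed key fingerprint still has to be copied into `./.env` by hand.

```sh
# Minimal sketch of the local set-up described above; adjust values to your needs.
git clone git@github.com:building-envelope-data/database.git
cd ./database
git submodule update --init --recursive

# Prepare the environment files and the hosts entry.
cp ./.env.sample ./.env
cp ./frontend/.env.local.sample ./frontend/.env.local
echo "127.0.0.1 local.solarbuildingenvelopes.com" | sudo tee --append /etc/hosts

# Generate SSL and JWT certificates and a GnuPG signing key.
make ssl
make jwt-certificates
make NAME="Anna Smith" COMMENT=first EMAIL=anna.smith@fraunhofer.de gpg
# Afterwards, set GNUPG_SECRET_SIGNING_KEY_FINGERPRINT in ./.env to the printed fingerprint.

# Create the database and its schema, then start all services and follow their logs.
make createdb migrate
make up logs
```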
In another shell:

- Drop into `bash` with the working directory `/app`, which is mounted to the host's `./backend` directory, inside a fresh Docker container based on `./backend/Dockerfile` by running `make shellb`. If necessary, the Docker image is (re)built automatically, which takes a while the first time. Note that the Docker image and containers try to use the same user and group IDs as the ones on the host machine. This has the upside that files created within containers in mounted directories are owned by the host user. It has the downside that the Docker image may fail to build because the IDs may already be taken by other users and groups in the base image. This happens, for example, if you are `root` on the host machine with user and group ID 0. If there is an ID collision, then you can either change the user and group ID on the host machine (for example by logging in as another user) or replace all occurrences of `shell id --group` and `shell id --user` in `Makefile` and `Makefile.production` by fixed non-colliding IDs like 1000. If you know a better way, please let us know on GitHub.
- List all backend GNU Make targets by running `make help`.
- For example, update packages and tools by running `make update`.
- Drop out of the container by running `exit` or pressing `Ctrl-D`.
The same works for frontend containers by running `make shellf`.
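As a quick reference, a typical interactive backend-container session based on the steps above looks like this; the commands after `make shellb` are typed inside the container shell, not on the host.

```sh
make shellb    # on the host: drop into bash inside a fresh backend container

# The following commands are entered inside the container shell:
make help      # list backend GNU Make targets
make update    # for example, update packages and tools
exit           # drop out of the container (or press Ctrl-D)
```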
After changing the domain model in `./backend/src/data`, you need to migrate the database by dropping into a backend shell with `make shellb`, adding a migration with `make NAME=${MIGRATION_NAME} migration`, verifying and, if necessary, adapting the new migration's C# code and SQL scripts, exiting the container with `exit`, and applying the new migration to the PostgreSQL database with `make migrate`. See Migrations Overview and the following pages for details.
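For example, a migration with the hypothetical name `AddComponentVersion` would be added roughly like this:

```sh
make shellb                               # on the host: enter the backend container

# Inside the container:
make NAME=AddComponentVersion migration   # add the migration; the name is a hypothetical example
# Review and, if necessary, adapt the generated C# migration code and SQL scripts.
exit                                      # leave the container

# Back on the host:
make migrate                              # apply the new migration to the PostgreSQL database
```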
On the very first usage:
- Install Visual Studio Code and open it. Navigate to the Extensions pane (`Ctrl+Shift+X`). Add the extension Remote Development.
- Navigate to the Remote Explorer pane. Hover over the running `database-backend-*` container (if it is not running, then run `make up` in a shell inside the project directory) and click on the "Attach in Current Window" icon. In the Explorer pane, open the directory `/app`, which is mounted to the host's `./backend` directory. Navigate to the Extensions pane. Add the extensions C# Dev Kit, IntelliCode for C# Dev Kit, GraphQL: Language Feature Support, and GitLens — Git supercharged.
- Navigate to the Remote Explorer pane. Hover over the running `database-frontend-*` container and click on the "Attach in New Window" icon. In the Explorer pane, open the directory `/app`, which is mounted to the host's `./frontend` directory. Navigate to the Extensions pane. Add the extensions GraphQL: Language Feature Support and GitLens — Git supercharged.
Note that the Docker containers are configured in `./docker-compose.yml` in such a way that Visual Studio Code extensions installed within containers are retained in Docker volumes and thus remain installed across `make down` and `make up` cycles.
On subsequent usages: Open Visual Studio Code, navigate to the "Remote Explorer" pane, and attach to the container(s) you want to work in.
The following Visual Studio Code docs may be of interest for productivity and debugging.
To debug the ASP.NET Core web application, attach Visual Studio Code to the `database-backend-*` container, press `Ctrl+Shift+P`, select "Debug: Attach to a .NET 5+ or .NET Core process", and choose the process `/app/src/bin/Debug/net9.0/Database run` titled `Database`; or, alternatively, navigate to the "Run and Debug" pane (`Ctrl+Shift+D`), select the launch profile ".NET Core Attach", press the "Start Debugging" icon (`F5`), and select the same process as above. Then, for example, open some source files to set breakpoints, navigate through the website `https://local.solarbuildingenvelopes.com:5051`, whose requests will stop at the breakpoints, and inspect the information provided by the debugger at the breakpoints. For details on debugging C# in Visual Studio Code, see Debugging.
Note that the debugger detaches after the polling file watcher restarts the process, which happens, for example, after editing a source file, because `dotnet watch` is configured in `./docker-compose.yml` with `DOTNET_USE_POLLING_FILE_WATCHER` set to `true`. As of this writing, there is an open feature request to reattach the debugger automatically. There are also multiple extensions like .NET Watch Attach and .NET Stalker Debugger that attempt to solve that. Those extensions don't work in our case though, as they try to restart `dotnet watch` themselves instead of waiting for the polling file watcher of `dotnet watch` to restart `/app/src/bin/Debug/net9.0/Database run` and attaching to that process.
After migrating the PostgreSQL database, changing the database schema manually, or upgrading Npgsql, the `backend` service may throw exceptions regarding the object-relational mapping (Npgsql or EF Core). In that case it may be necessary to restart the `backend` service, for example, by running `make down up`, and it may even be necessary to recreate the database from scratch by running `make down remove-data up`. Note that the latter will remove all data from PostgreSQL, recreate the database and its schema, and seed it freshly.
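In shell form, the two recovery options read:

```sh
# Restart all services (keeps the data).
make down up

# Recreate the database from scratch (removes ALL data from PostgreSQL and reseeds it).
make down remove-data up
```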
When your hard disk starts to fill up, it may be the case that Docker does not clean up anonymous volumes properly. You can do so manually by running `docker system prune`, potentially with the arguments `--volumes` and/or `--all`. Note that this may result in loss of data. It may also be the case that the log files have grown huge. You can delete them by running `rm ./backend/src/logs/*`.
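For example, a cautious clean-up session could look like the following; note again that pruning volumes may delete data you still need.

```sh
# Show what is using disk space.
docker system df

# Remove stopped containers, dangling images, and unused networks.
docker system prune

# Additionally remove anonymous volumes and all unused images (may delete data!).
docker system prune --volumes --all

# Delete the backend log files.
rm ./backend/src/logs/*
```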
When the frontend Docker image does not build in production because of an unused import in an automatically generated file, for example, one in the directory `./frontend/__generated__`, then temporarily ignore TypeScript build errors by adding the following lines to `./frontend/next.config.js`, for example with `vi` or `nano` in a shell on the deployment machine:
typescript: {
ignoreBuildErrors: true,
},
The same can happen in development when running `make build` (or `yarn run build`) in the shell entered by `make shellf`. In that case, remove the offending import manually in the file and try again, for example using `tail` like so: `tail -n +5 ./__generated__/queries/... > x.tmp && mv x.tmp ...`. Do not disable build errors in development, because when you do so, build errors in non-generated files may leak into the code base.
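A concrete sketch, using a hypothetical generated file name inside the frontend container shell entered by `make shellf`:

```sh
# Hypothetical file name; replace it with the generated file that fails to compile.
file=./__generated__/queries/someQuery.ts

# Drop the leading lines containing the unused import (here: the first 4 lines),
# then replace the original file with the trimmed copy.
tail -n +5 "${file}" > x.tmp && mv x.tmp "${file}"
```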
For information on using Docker in production see Configure and troubleshoot the Docker daemon and the pages following it.
- Use the sibling project machine and its instructions for the first stage of the set-up.
- Enter a shell on the production machine using `ssh`.
- Change into the directory `/app` by running `cd /app`.
- Clone the repository twice by running `for environment in staging production ; do git clone git@github.com:building-envelope-data/database.git ./${environment} ; done`.
- For each of the two environments staging and production, referred to by `${environment}` below:
  - Change into the clone `${environment}` by running `cd /app/${environment}`.
  - Open `https://www.buildingenvelopedata.org` in your favorite web browser, log into your account, navigate to the institution operating this database (which you should be a representative of), and add an OpenId Connect Application with
    - client ID and display name of your choice;
    - consent type: explicit;
    - endpoints: authorization, pushed authorization, introspection, end session, revocation, and token;
    - grant types: authorization code and refresh token;
    - response types: code;
    - scopes: address, email, phone, profile, roles, read:api, and write:api;
    - requirements: proof key for code exchange and pushed authorization requests;
    - post logout redirect URI: `https://${HOST}/connect/callback/logout/metabase`;
    - redirect URI: `https://${HOST}/connect/callback/login/metabase`, where `${HOST}` is the domain name with sub-domain of the deployment, for example, `staging.solarbuildingenvelopes.com` or `www.solarbuildingenvelopes.com` for the product-data database of the TestLab Solar Facades.
  - Prepare the environment by running `cp ./.env.${environment}.sample ./.env` and `cp ./frontend/.env.local.sample ./frontend/.env.local`, and by adjusting variable values in the copies to your needs, in particular, by setting passwords to newly generated ones, where random passwords may be generated by running `openssl rand -base64 32`. Here is some information on what the variables mean:
    - `NAME` is the Docker project name, in particular, it is the prefix of the Docker container names listed by `docker ps --all`;
    - `HOST` is the domain name with sub-domain of the deployment, in particular, it is used to make resource locators absolute;
    - `HTTP_PORT` is the HTTP port to which the reverse proxy NGINX forwards HTTPS requests (see `PRODUCTION_HTTP_PORT` and `STAGING_HTTP_PORT` in the `.env` file of your clone of machine);
    - `METABASE_HOST` is the domain name with sub-domain of the metabase, in particular, to use it as OpenId Connect provider and to ask it for information about logged-in users needed for authorization;
    - `DATABASE_ID` is the UUID that was assigned to this product-data database upon registering it at the metabase;
    - `OPERATOR_ID` is the UUID of the institution that operates this product-data database;
    - `VERIFICATION_CODE` is the verification code that was generated for this product-data database upon registering it at the metabase;
    - `OPEN_ID_CONNECT_CLIENT_ID` and `OPEN_ID_CONNECT_CLIENT_SECRET` are the OpenId Connect client identifier and secret of this product-data database as a client of the metabase acting as identity provider (the client secret is given when registering an OpenId Connect client at the metabase);
    - `GNUPG_SECRET_SIGNING_KEY_FINGERPRINT` and `GNUPG_SECRET_SIGNING_KEY_PASSPHRASE` are the fingerprint and passphrase of the exported GnuPG secret signing key in the file `./backend/src/gpg-keys/${GNUPG_SECRET_SIGNING_KEY_FINGERPRINT}.gpg`;
    - `JSON_WEB_TOKEN_ENCRYPTION_CERTIFICATE_PASSWORD` and `JSON_WEB_TOKEN_SIGNING_CERTIFICATE_PASSWORD` are passwords used to encrypt and sign JSON web tokens (JWT) used by OpenId Connect;
    - `SMTP_HOST` and `SMTP_PORT` are the host and port of the message transfer agent to be used to send emails through the Simple Mail Transfer Protocol (SMTP);
    - `RELAY_SMTP_HOST`, `RELAY_SMTP_PORT`, and `RELAY_ALLOWED_EMAILS` are the host and port of the message transfer agent and a list of allowed email addresses to send emails to even in the staging environment.
  - Generate JSON Web Token (JWT) encryption and signing certificates by running `make --file=Makefile.production jwt-certificates`.
  - Generate and export a GnuPG key with the passphrase `${GNUPG_SECRET_SIGNING_KEY_PASSPHRASE}` set in the `./.env` file to the file `./backend/src/gpg-keys/<KEY_FINGERPRINT>.gpg` by running `make NAME=${name} COMMENT=${comment} EMAIL=${email} gpg` with your information filled in, for example, `make NAME="Anna Smith" COMMENT=first EMAIL=anna.smith@fraunhofer.de gpg`. Then copy the key's fingerprint, which is output by the command, and set it as the value of the `GNUPG_SECRET_SIGNING_KEY_FINGERPRINT` variable in the `./.env` file. Instead of using the GNU Make target `gpg`, you may
    - install GnuPG as described on Download GnuPG or GnuPG Package Repositories,
    - generate a GnuPG key by running `gpg --full-generate-key` and answering the prompts,
    - list the keys for which you have both a public and a secret key by running `gpg --list-secret-keys --keyid-format=long`,
    - identify and copy the long form of the fingerprint of the key you have just created,
    - remember the key in the variable `fingerprint` by running `fingerprint=<KEY_FINGERPRINT>` with `<KEY_FINGERPRINT>` replaced by the copied fingerprint,
    - create the directory to save the secret key to, if it does not exist yet, by running `mkdir --parents ./backend/src/gpg-keys`,
    - export the armored secret key to the file `./backend/src/gpg-keys/${fingerprint}.gpg` by running `gpg --armor --export-secret-keys ${fingerprint} > ./backend/src/gpg-keys/${fingerprint}.gpg`, and
    - set the value of the variable `GNUPG_SECRET_SIGNING_KEY_FINGERPRINT` in the `./.env` file to the remembered fingerprint in your favorite editor.
  - Prepare PostgreSQL by generating new password files by running `make --file=Makefile.production postgres_passwords` and creating the database by running `make --file=Makefile.production createdb`. A condensed sketch of these preparation steps for one environment follows this list.
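The following is an illustrative sketch of the preparation steps above for the production environment (for staging, adjust the directory and sample file accordingly). The NAME, COMMENT, and EMAIL values are the example values from above, and the generated password still has to be pasted into `./.env` by hand.

```sh
# Sketch of preparing one environment; shown here for production.
cd /app/production

# Copy the sample environment files and adjust the variable values to your needs.
cp ./.env.production.sample ./.env
cp ./frontend/.env.local.sample ./frontend/.env.local
openssl rand -base64 32        # generate a random password to paste into ./.env

# Generate JWT certificates, the GnuPG signing key, PostgreSQL passwords, and the database.
make --file=Makefile.production jwt-certificates
make NAME="Anna Smith" COMMENT=first EMAIL=anna.smith@fraunhofer.de gpg
make --file=Makefile.production postgres_passwords
make --file=Makefile.production createdb
```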
- Draft a new release with a new version according to Semantic Versioning by running the GitHub action Draft a new release, which creates a new branch named `release/v*.*.*`, creates a corresponding pull request, updates the changelog, and bumps the version in `package.json`, where `*.*.*` is the version. Note that this is not the same as "Draft a new release" on Releases.
- Fetch the release branch by running `git fetch` and check it out by running `git checkout release/v*.*.*`, where `*.*.*` is the version.
- Apply pending migrations with `make --file=Makefile.production migrate`.
- Make sure that all tests succeed and try out any new features manually.
- Publish the new release by merging the release branch into `main`, whereby a new pull request from `main` into `develop` is created that you need to merge to finish off.
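On the command line, the middle steps of this workflow boil down to the following, where the version 1.2.3 is a hypothetical example:

```sh
# Fetch and check out the release branch drafted by the GitHub action.
git fetch
git checkout release/v1.2.3

# Apply pending migrations, then run the tests and try out new features manually.
make --file=Makefile.production migrate
```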
- Enter a shell on the production machine using `ssh`.
- Back up the production database by running `make --directory /app/production --file=Makefile.production BACKUP_DIRECTORY=/app/production/backup backup`.
- Change to the staging environment by running `cd /app/staging`.
- Restore the staging database from the production backup by running `make --file=Makefile.production BACKUP_DIRECTORY=/app/production/backup restore`.
- Adapt the environment file `./.env` if necessary by comparing it with the `./.env.staging.sample` file of the release to be deployed.
- Deploy the new release in the staging environment by running `make --file=Makefile.production TARGET=${TAG} deploy`, where `${TAG}` is the release tag to be deployed, for example, `v1.0.0`.
- If it fails after the database backup was made, roll back to the previous state by running `make --file=Makefile.production rollback`, figure out what went wrong, apply the necessary fixes to the codebase, create a new release, and try to deploy that release instead.
- If it succeeds, deploy the new reverse proxy that handles sub-domains by running `cd /app/machine && make deploy` and test whether everything works as expected; if that is the case, continue. Note that in the staging environment sent emails can be viewed in the web browser under `https://staging.solarbuildingenvelopes.com/email/` and emails to addresses in the variable `RELAY_ALLOWED_EMAILS` in the `.env` file are delivered to the respective inboxes (the variable's value is a comma-separated list of email addresses).
- Change to the production environment by running `cd /app/production`.
- Adapt the environment file `./.env` if necessary by comparing it with the `./.env.production.sample` file of the release to be deployed.
- Deploy the new release in the production environment by running `make --file=Makefile.production TARGET=${TAG} deploy`, where `${TAG}` is the release tag to be deployed, for example, `v1.0.0`.
- If it fails after the database backup was made, roll back to the previous state by running `make --file=Makefile.production rollback`, figure out what went wrong, apply the necessary fixes to the codebase, create a new release, and try to deploy that release instead.
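Assuming everything succeeds, the happy path of this procedure can be summarized by the following sketch, with `v1.0.0` as the example tag and the commands run on the production machine after logging in via `ssh`:

```sh
# Back up production and rehearse the deployment in staging first.
make --directory /app/production --file=Makefile.production BACKUP_DIRECTORY=/app/production/backup backup
cd /app/staging
make --file=Makefile.production BACKUP_DIRECTORY=/app/production/backup restore
make --file=Makefile.production TARGET=v1.0.0 deploy

# Deploy the reverse proxy that handles sub-domains and test staging manually.
cd /app/machine && make deploy

# If staging works as expected, deploy to production.
cd /app/production
make --file=Makefile.production TARGET=v1.0.0 deploy

# If a deployment fails after the backup was made, roll back:
# make --file=Makefile.production rollback
```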
The file `Makefile.production` contains GNU Make targets to manage Docker containers, like `up` and `down`, to follow Docker container logs with `logs`, to drop into shells inside running Docker containers, like `shellb` for the backend service, `shellf` for the frontend service, and `psql` for the database service, and to list information about Docker, like `list` and `list-services`.
The file also contains GNU Make targets to deploy a new release or roll it back, as mentioned above. These targets depend on several smaller targets like `begin-maintenance` and `end-maintenance` to begin or end displaying maintenance information to end users that try to interact with the website, `backup` to back up all data before deploying a new version, `migrate` to migrate the database, and `run-tests` to run tests.
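For example, the day-to-day targets named above could be invoked like this from an environment directory:

```sh
# Manage and inspect the services of the current environment.
make --file=Makefile.production up
make --file=Makefile.production logs
make --file=Makefile.production list-services
make --file=Makefile.production down

# Drop into shells inside running containers.
make --file=Makefile.production shellb   # backend service
make --file=Makefile.production shellf   # frontend service
make --file=Makefile.production psql     # database service
```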
If, for some reason, the website displays the maintenance page although no maintenance is happening at the moment, then drop into a shell on the production machine, check all logs for information on what happened, fix issues if necessary, and end maintenance. It could, for example, happen that a cron job set up by machine begins maintenance, fails to do its actual job, and does not end maintenance afterwards. Whether failing to do its job is a problem for the inner workings of the website needs to be decided by a developer. If, for example, backing up the database fails because the machine is out of memory at the time of the backup, the website itself should still work.
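A sketch of that recovery, relying on the targets named above:

```sh
# On the production machine, in the affected environment:
cd /app/production
make --file=Makefile.production logs             # inspect what happened
# Fix issues if necessary, then stop displaying the maintenance page:
make --file=Makefile.production end-maintenance
```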
If the database container restarts indefinitely and its logs say `PANIC: could not locate a valid checkpoint record`, for example preceded by `LOG: invalid resource manager ID in primary checkpoint record` or `LOG: invalid primary checkpoint record`, then the database is corrupt. For example, the write-ahead log (WAL) may be corrupt because the database was not shut down cleanly. One solution is to restore the database from a backup by running
`make --file=Makefile.production BACKUP_DIRECTORY=/app/data/backups/20XX-XX-XX_XX_XX_XX/ restore`
where the Xs need to be replaced by proper values. Another solution is to reset the transaction log by entering the database container with
`docker compose --file=docker-compose.production.yml --project-name database_production run database bash`
and dry-running
`gosu postgres pg_resetwal --dry-run /var/lib/postgresql/data`
and, depending on the output, also running
`gosu postgres pg_resetwal /var/lib/postgresql/data`
Note that both solutions may cause data to be lost.
If one field in the SQL database needs to be updated and there is no GraphQL mutation available, then you may update it in PostgreSQL directly as illustrated in the following example. Test it in the staging environment under `/app/staging` before doing it in production under `/app/production`.
- Drop into a shell on the server as user `cloud` by running `ssh -CvX -A cloud@IpAddressOfCloudServer`.
- Navigate to the production environment by running `cd /app/production`.
- Make a database backup by running `DATE=$(date +"%Y-%m-%d_%H_%M_%S")` and `make --file=Makefile.production BACKUP_DIRECTORY=/app/data/backups/${DATE} backup`.
- Navigate to the staging environment by running `cd /app/staging`.
- Load the backup into the staging database by running `make --file=Makefile.production BACKUP_DIRECTORY=/app/data/backups/${DATE} restore`.
- Drop into `psql` by running `make --file=Makefile.production psql`.
- List all tables in the schema `database` by running `\dt database.*`.
- List all optical data records by running `select * from database.optical_data;` and remember, for example, one identifier of a record that you want to update.
- Update a single field by running `update database.optical_data set "Description" = '...' where "Id" = 'f07499ab-f119-471f-8aad-d3c016676bce';`.
- Delete a faulty record by running `delete from database.optical_data where "Id" = 'f07499ab-f119-471f-8aad-d3c016676bce';`.
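Condensed into one shell session, the rehearsal in staging from the list above looks like this; the SQL statements themselves are then run interactively inside `psql`.

```sh
# Back up production and load the backup into staging for a safe rehearsal.
cd /app/production
DATE=$(date +"%Y-%m-%d_%H_%M_%S")
make --file=Makefile.production BACKUP_DIRECTORY=/app/data/backups/${DATE} backup
cd /app/staging
make --file=Makefile.production BACKUP_DIRECTORY=/app/data/backups/${DATE} restore

# Drop into psql and run the SQL statements from the list above interactively.
make --file=Makefile.production psql
```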