Implement a new REST resource plugin by extending
Drupal\rest\Plugin\ResourceBase and annotating it with @RestResource.

Describe uri_paths, route_parameters and responses in the annotation in as
much detail as possible to create a strong specification (see the annotation
sketch after this list).
+
Install the REST UI module drush pm-enable restui
+
Enable and configure the new REST resource. It is important to use the
dpl_login_user_token authentication provider for all resources which will be
used by the frontend. This will provide a library or user token by default.
+
Inspect the updated OpenAPI specification at /openapi/rest?_format=json to
ensure it looks as intended.
+
Run task ci:openapi:validate to validate the updated OpenAPI specification
+
Run task ci:openapi:download to download the updated OpenAPI specification
+
Uninstall the REST UI module drush pm-uninstall restui
+
Export the updated configuration drush config-export
+
Commit your changes including the updated configuration and openapi.json
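
As a point of reference for the first two steps above, a resource plugin could
be sketched roughly as follows. This is a hypothetical example: the module
name, plugin id, paths and schema details are illustrative, and the exact
annotation keys used for documentation depend on the project's OpenAPI setup.

```php
<?php

namespace Drupal\dpl_example\Plugin\rest\resource;

use Drupal\rest\Plugin\ResourceBase;
use Drupal\rest\ResourceResponse;

/**
 * REST resource exposing an example entry.
 *
 * @RestResource(
 *   id = "dpl_example_entry",
 *   label = @Translation("DPL example entry"),
 *   uri_paths = {
 *     "canonical" = "/dpl-example/entries/{id}",
 *   },
 *   route_parameters = {
 *     "GET" = {
 *       "id" = {
 *         "name" = "id",
 *         "type" = "string",
 *         "description" = "Identifier of the entry.",
 *         "in" = "path",
 *         "required" = TRUE,
 *       },
 *     },
 *   },
 *   responses = {
 *     200 = {
 *       "description" = "The requested entry.",
 *       "schema" = {
 *         "type" = "object",
 *         "properties" = {
 *           "id" = {
 *             "type" = "string",
 *           },
 *         },
 *       },
 *     },
 *   },
 * )
 */
class ExampleEntryResource extends ResourceBase {

  /**
   * Responds to GET requests for a single entry.
   */
  public function get(string $id): ResourceResponse {
    // A real implementation would load data from configuration or services.
    return new ResourceResponse(['id' => $id]);
  }

}
```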
Run drush config-import -y and see that the change is rolled back and the
+ configuration value is back to default. This shows that Core configuration
+ will remain managed by the configuration system.
Run drush config-import -y to see that no configuration is imported. This
+ shows that local configuration which can be managed by Editor libraries will
+ be left unchanged.
+
Enable and configure the Shortcut module and add a new Shortcut set.
+
Run drush config-import -y to see that the module is not disabled and the
+ configuration remains unchanged. This shows that local configuration in the
+ form of new modules added by Webmaster libraries will be left unchanged.
The module maintains a list of patterns for configuration which will be ignored
+during the configuration import process. This allows us to avoid updating local
+configuration.
+
By adding the wildcard * at the top of this list we choose an approach where
+all configuration is considered local by default.
+
Core configuration which should not be ignored can then be added to subsequent
lines with the ~ prefix. On a site these configuration entries will be
updated to match what is in the core configuration.
+
Config Ignore also has the option of ignoring specific values within settings.
+This is relevant for settings such as system.site where we consider the site
+name local configuration but 404 page paths core configuration.
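
As a hedged illustration, a pattern list following this approach could look
like the example below. The concrete entries besides the wildcard are
illustrative only, and the exact syntax for key-level entries should be checked
against the Config Ignore documentation.

```yaml
# Illustrative Config Ignore pattern list (not the project's actual list).
ignored_config_entities:
  # Treat all configuration as local by default.
  - '*'
  # Entries prefixed with ~ are excluded from ignoring and therefore remain
  # core configuration which is updated on deployment.
  - '~views.view.frontpage'
  # Specific keys within a configuration object can also be targeted, e.g.
  # treating the 404 page path as core while the rest of system.site is local.
  - '~system.site:page.404'
```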
The Deconfig module allows developers
+to mark configuration entries as exempt from import/export. This would allow us
+to exempt configuration which can be managed by the library.
+
This does not handle configuration coming from new modules uploaded on webmaster
+sites. Since we cannot know which configuration entities such modules will
+provide and Deconfig has no concept of wildcards we cannot exempt the
+configuration from these modules. Their configuration will be removed again at
+deployment.
+
We could use partial imports through drush config-import --partial to not
+remove configuration which is not present in the configuration filesystem.
+
We prefer Config Ignore as it provides a single solution to handle the entire
+problem space.
The Config Ignore Auto module
+extends the Config Ignore module. Config Ignore Auto registers configuration
+changes and adds them to an ignore list. This way they are not overridden on
+future deployments.
+
The module is based on the assumption that if a user has access to a
configuration form they should also be allowed to modify that configuration for
their site.
+
This turns the approach from Config Ignore on its head. All configuration is now
+considered core until it is changed on the individual site.
+
We prefer Config Ignore as it only has local configuration which may vary
+between sites. With Config Ignore Auto we would have local configuration and
+the configuration of Config Ignore Auto.
+
Config Ignore Auto also has special handling of the core.extension
configuration which manages the set of installed modules. Since webmaster sites
can have additional modules installed we would need workarounds to handle these.
Core developers will have to explicitly select new configuration to not ignore
during the development process. One cannot simply run drush config-export and
have the appropriate configuration not ignored.
+
Because core.extension is ignored Core developers will have to explicitly
+ enable and uninstall modules through code as a part of the development
+ process.
We ended up with decision no. 2 mentioned above: we create a Drupal user upon
login if it does not exist already.

We use the OpenID Connect / OAuth client module to manage patron authentication
and authorization, and we have developed a plugin for the module called
Adgangsplatformen which connects the external OAuth service with dpl-cms.
+
Editors and administrators, a.k.a. normal Drupal users, do not require
additional handling.
By having a Drupal user tied to the external user we can use that context and
+ make the server side rendering show different content according to the
+ authenticated user.
+
Adgangsplatform settings have to be configured in the plugin in order for it to
work.
Instead of creating a new user for every single user logging in via
Adgangsplatformen you could consider having just one Drupal user for all the
external users. That would get rid of the UUID -> Drupal user id mapping that
has been implemented as it is now, and it would prevent the creation of a lot
of users. The decision depends on whether it is necessary to distinguish between
the different users on a server side level.
The DPL React components need to be integrated and available for rendering in
Drupal. The components depend on a library token and an access token being set
in JavaScript.
We decided to download the components with composer and integrate them as Drupal
+libraries.
+
As described in adr-002-user-handling we are setting an access token in the
user session when a user has been through a successful login at
Adgangsplatformen.
+
We decided that the library token is fetched by a cron job on a regular basis
+and saved in a KeyValueExpirable store which automatically expires the token
+when it is outdated.
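
A minimal sketch of how such a cron-driven fetch could look (the module name,
service and token value object are hypothetical, not the actual DPL CMS
implementation):

```php
<?php

/**
 * Implements hook_cron().
 */
function dpl_example_cron(): void {
  // Hypothetical service which fetches a library token from Adgangsplatformen.
  $token = \Drupal::service('dpl_example.library_token_client')->fetch();

  // The KeyValueExpirable store drops the entry automatically once the
  // expiry period has passed, so an outdated token is never served.
  \Drupal::keyValueExpirable('dpl_example')
    ->setWithExpire('library_token', $token->value, $token->expiresIn);
}
```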
+
The library token and the access token are set in JavaScript on the endpoint:
/dpl-react/user.js. By loading the script asynchronously when mounting the
components in JavaScript we are able to complete the rendering.
We have created a purger in the Drupal Varnish/Purge setup that is able to purge
+everything. The purger is being used in the deploy routine by the command:
+drush cache:rebuild-external -y
Everything will be invalidated on every deploy. Note: Although we are sending
a PURGE request we found out, by studying the VCL of Lagoon, that the PURGE
request is actually translated into a BAN on req.url.
DPL CMS integrates with a range of other business systems through APIs. These
+APIs are called both clientside (from Browsers) and serverside (from
+Drupal/PHP).
+
Historically these systems have provided setups accessible from automated
+testing environments. Two factors make this approach problematic going forward:
+
+
In the future not all systems are guaranteed to provide such environments
+ with useful data.
+
Test systems have not been as stable as is necessary for automated testing.
Systems may be down or data may be updated, which causes problems.
+
+
To address these problems and help achieve a goal of a high degree of test
+coverage the project needs a way to decouple from these external APIs during
+testing.
We use WireMock to mock API calls. WireMock provides
the following features relevant to the project:
+
+
WireMock is free open source software which can be deployed in development and
test environments using Docker
+
Wiremock can run in HTTP(S) proxy mode. This allows us to run a single
+ instance and mock requests to all external APIs
+
We can use the wiremock-php
+ client library to instrument WireMock from PHP code. We modernized the
+ behat-wiremock-extension
+ to instrument with Behat tests which we use for integration testing.
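
As a sketch of this kind of instrumentation with wiremock-php, a test could
stub an external endpoint roughly like this (host, port and path are
assumptions for illustration):

```php
<?php

use WireMock\Client\WireMock;

// Connect to the WireMock instance running next to the test environment.
$wireMock = WireMock::create('wiremock', 8080);

// Stub an external API endpoint so the test does not depend on the real system.
$wireMock->stubFor(
  WireMock::get(WireMock::urlEqualTo('/external/v1/loans'))
    ->willReturn(WireMock::aResponse()
      ->withHeader('Content-Type', 'application/json')
      ->withBody('{"loans": []}'))
);
```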
Software for mocking API requests generally provides two approaches:
+
+
Instrumentation where an API can be used to define which responses will be
+ returned for what requests programmatically.
+
Record/replay where requests passing through are persisted (typically to the
+ filesystem) and can be modified and restored at a later point in time.
+
+
Generally record/replay makes it easy to set up a lot of mock data quickly.
+However, it can be hard to maintain these records as it is not obvious what part
+of the data is important for the test and the relationship between the
+individual tests and the corresponding data is hard to determine.
+
Consequently, this project prefers instrumentation.
DPL CMS provides HTTP end points which are consumed by the React components. We
+want to document these in an established structured format.
+
Documenting endpoints in an established structured format allows us to use tools
+to generate client code for these end points. This makes consumption easier and
+is a practice which is already used with other
+services
+in the React components.
+
Currently these end points expose business logic tied to configuration in the
+CMS. There might be a future where we also need to expose editorial content
+through APIs.
This is a technically manageable, standards compliant and performant solution
+which supports our initial use cases and can be expanded to support future
+needs.
GraphQL:
GraphQL is an approach which does not work well with Drupal's HTTP-based
caching layer. This is important for endpoints which are called many times
for each client.
Also, from version 4.x and beyond the GraphQL Drupal module provides no easy
way for us to expose editorial content at a later point in time.
OpenAPI and OpenAPI REST are Drupal modules which have not seen updates for a
+ while. We have to apply patches to get them to work for us. Also they do not
+ support the latest version of the OpenAPI specification, 3.0. We risk
+ increased maintenance because of this.
DPL CMS employs functional testing to ensure the functional integrity of the
+project.
+
This is currently implemented using Behat
+which allows developers to instrument a browser navigating through different use
+cases using Gherkin, a
+business readable, domain specific language. Behat is used within the project
+based on experience using it from the previous generation of DPL CMS.
+
Several factors have caused us to reevaluate that decision:
Although Cypress supports intercepting requests to external systems
+ this only works for clientside requests. To maintain a consistent approach to
+ mocking both serverside and clientside requests to external systems we
+ integrate Cypress with Wiremock using a similar approach to what we have done
+ with Behat.
We will not be able to test on mobile browsers as this is not supported by
Cypress. We prefer consistency across projects and the expected improvement in
developer efficiency over the added complexity of introducing a tool which
supports this natively or of expanding the Cypress setup to support mobile
testing.
+
We opt not to use Gherkin to describe our test cases. The business has
decided that this has not provided sufficient value for the existing project
to justify the additional complexity. Cypress community plugins
+ support writing tests in Gherkin.
+ These could be used in the future.
DPL CMS is only intended to integrate with one external system:
+Adgangsplatformen. This integration is necessary to obtain patron and library
+tokens needed for authentication with other business systems. All these
+integrations should occur in the browser through React components.
+
The purpose of this is to avoid having data passing through the CMS as an
+intermediary. This way the CMS avoids storing or transmitting sensitive data.
+It may also improve performance.
+
In some situations it may be beneficial to let the CMS access external systems
+to provide a better experience for business users e.g. by displaying options
+with understandable names instead of technical ids or validating data before it
+reaches end users.
Implementing React components to provide administrative controls in the CMS.
+ This would increase the complexity of implementing such controls and cause
+ implementors to not consider improvements to the business user experience.
We allow PHP client code generation for external services. These should not
+ only include APIs to be used with library tokens. This signals what APIs are
+ OK to be accessed server-side.
The current translation system for UI strings in DPL CMS is based solely on code
+deployment of .po files.
+
However DPL CMS is expected to be deployed in about 100 instances just to cover
+the Danish Public Library institutions. Making small changes to the UI texts in
+the codebase would require a new deployment for each of the instances.
+
Requiring code changes to update translations also makes it difficult for
+non-technical participants to manage the process themselves. They have to find a
+suitable tool to edit .po files and then pass the updated files to a
+developer.
+
This process could be optimized if:
+
+
Translations were provided by a central source
+
Translations could be managed directly by non-technical users
+
Distribution of translations is decoupled from deployment
We use POEditor to perform translations. POEditor is a
+translation management tool that supports .po files and integrates with
+GitHub. To detect new UI strings a GitHub Actions workflow scans the codebase
+for new strings and notifies POEditor. Here they can be translated by
+non-technical users. POEditor supports committing translations back to GitHub
+where they can be consumed by DPL CMS instances.
This approach has a number of benefits apart from addressing the original
+issue:
+
+
POEditor is a specialized tool to manage translations. It supports features
+ such as translation memory, glossaries and machine translation.
+
POEditor is web-based. Translators avoid having to find and install a suitable
+ tool to edit .po files.
+
POEditor is software-as-a-service. We do not need to maintain the translation
+ interface ourselves.
+
POEditor is free for open source projects. This means that we can use it
+ without having to pay for a license.
+
Code scanning means that new UI strings are automatically detected and
+ available for translation. We do not have to manually synchronize translation
+ files or ensure that UI strings are rendered by the system before they can be
+ translated. This can be complex when working with special cases, error
+ messages etc.
+
Translations are stored in version control. Managing state is complex and this
+ means that we have easy visibility into changes.
+
Translations are stored on GitHub. We can move away from POEditor at any time
+ and still have access to all translations.
+
We reuse existing systems instead of building our own.
+
+
A consequence of this approach is that developers have to write code that
+supports scanning. This is partly supported by the Drupal Code Standards. To
+support contexts developers also have to include these as a part of the t()
+function call e.g.
+
// Good
+$this->t('A string to be translated', [], ['context' => 'The context']);
+$this->t('Another string', [], ['context' => 'The context']);
+// Bad
$c = ['context' => 'The context'];
+$this->t('A string to be translated', [], $c);
+$this->t('Another string', [], $c);
+
+
We could consider writing a custom sniff or PHPStan rule to enforce this.
Both projects can scan the codebase and generate a .po or .pot file with the
+translation strings and context.
+
At first it made most sense to go for Potx since it is used by
localize.drupal.org and it has a long history. But Potx extracts strings to a
.pot file without the possibility of filling in the existing translations. So
we ended up using Potion which can fill in the existing strings.

A flip side of using Potion is that it is not maintained anymore. But it seems
quite stable and a lot of work has been put into it. We could consider backing
it ourselves.
Establishing our own localization server. This proved to be very complex.
+ Available solutions are either technically outdated
+ or still under heavy development.
+ None of them have integration with GitHub where our project is located.
+
Using a separate instance of DPL CMS in Lagoon as a central translation hub.
+ Such an instance would require maintenance and we would have to implement a
+ method for exposing translations to other instances.
The site is configured with the Varnish Purge module and a cache-tags-based
purger that ensures that changes made to the site are purged from Varnish
instantly.
The following guidelines describe best practices for developing code for the DPL
+CMS project. The guidelines should help achieve:
+
+
A stable, secure and high quality foundation for building and maintaining
+ library websites
+
Consistency across multiple developers participating in the project
+
The best possible conditions for sharing modules between DPL CMS websites
+
The best possible conditions for the individual DPL CMS website to customize
+ configuration and appearance
+
+
Contributions to the core DPL CMS project will be reviewed by members of the
+Core team. These guidelines should inform contributors about what to expect in
+such a review. If a review comment cannot be traced back to one of these
+guidelines it indicates that the guidelines should be updated to ensure
+transparency.
The project follows the Drupal Coding Standards
+and best practices for all parts of the project: PHP, JavaScript and CSS. This
+makes the project recognizable for developers with experience from other Drupal
+projects. All developers are expected to make themselves familiar with these
+standards.
+
The following lists significant areas where the project either intentionally
+expands or deviates from the official standards or areas which developers should
+be especially aware of.
Code must be compatible with all currently available minor and major versions
+ of PHP from 8.0 and onwards. This is important when trying to ensure smooth
+ updates going forward. Note that this only applies to custom code.
+
Code must be compatible with Drupal Best Practices as defined by the
+ Drupal Coder module
+
Code must use types
+ to define function arguments, return values and class properties.
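
As a brief illustration of the typing requirement (the interface and class
are hypothetical, not part of the project):

```php
<?php

declare(strict_types=1);

interface LoanClientInterface {

  /**
   * @return mixed[]
   */
  public function getLoans(int $patronId): array;

}

class LoanCounter {

  public function __construct(
    protected LoanClientInterface $client,
  ) {}

  public function countActiveLoans(int $patronId): int {
    return count($this->client->getLoans($patronId));
  }

}
```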
All functionality exposed through JavaScript should use the
+ Drupal JavaScript API
+ and must be attached to the page using Drupal behaviors.
+
All classes used for selectors in Javascript must be prefixed with js-.
+ Example: <div class="gallery js-gallery"> - .gallery must only be used in
+ CSS, js-gallery must only be used in JS.
+
Javascript should not affect classes that are not state-classes. State classes
+ such as is-active, has-child or similar are classes that can be used as
+ an interlink between JS and CSS.
Modules and themes should use SCSS (The project uses PostCSS
+ and PostCSS-SCSS). The Core system
+ will ensure that these are compiled to CSS files automatically as a part of
+ the development process.
Components (blocks) should be isolated from each other. We aim for an
+ atomic frontend
+ where components should be able to stand alone. In practice, there will be
+ times where this is impossible, or where components can technically stand
+ alone, but will not make sense from a design perspective (e.g. putting a
+ gallery in a sidebar).
+
Components should be technically isolated by having 1 component per scss file.
As a general rule, you can have a file called gallery.scss which contains
+ .gallery, .gallery__container, .gallery__* and so on. Avoid referencing
+ other components when possible.
+
All components/mixins/similar must be documented with a code comment. When you
+ create a new component.scss, there must be a comment at the top, describing
+ the purpose of the component.
+
Avoid using auto-generated Drupal selectors such as .pane-content. Use
+ the Drupal theme system to write custom HTML and use precise, descriptive
+ class names. It is better to have several class names on the same element,
+ rather than reuse the same class name for several components.
+
All "magic" numbers must be documented. If you need to make something e.g.
350 px, you should preferably derive the number by calculating from the
context ($layout-width * 0.60) or give it a descriptive variable name
($side-bar-width // 350px works well with the current $layout-width)
+
Avoid using the parent selector (.class &). The use of parent selector
+ results in complex deeply nested code which is very hard to maintain. There
+ are times where it makes sense, but for the most part it can and should be
+ avoided.
All modules written specifically for Ding3 must be prefixed with dpl.
+
The dpl prefix is not required for modules which provide functionality deemed
+ relevant outside the DPL community and are intended for publication on
+ Drupal.org.
Modules, themes etc. must be placed within the corresponding folder in this
+repository. If a module developed in relation to this project is of general
+purpose to the Drupal community it should be placed on Drupal.org and included
+as an external dependency.
+
A module must provide all required code and resources for it to work on its own
+or through dependencies. This includes all configuration, theming, CSS, images
+and JavaScript libraries.
+
All default configuration required for a module to function should be
+implemented using the Drupal configuration system and stored in the version
+control with the rest of the project source code.
If an existing module is expanded with updates to current functionality the
+default behavior must be the same as previous versions or as close to this as
possible. This also includes new modules which replace current modules.
+
If an update does not provide a way to reuse existing content and/or
+configuration then the decision on whether to include the update resides with
+the business.
Modules which alter or extend functionality provided by other modules should use
+appropriate methods for overriding these e.g. by implementing alter hooks or
+overriding dependencies.
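
A short, hypothetical example of the alter-hook approach (module name and the
chosen form are assumptions for illustration):

```php
<?php

use Drupal\Core\Form\FormStateInterface;

/**
 * Implements hook_form_FORM_ID_alter().
 *
 * Adjusts a form provided by another module instead of copying or patching it.
 */
function dpl_example_form_system_site_information_settings_alter(array &$form, FormStateInterface $form_state): void {
  // Hide the slogan field as it is not used on library sites.
  $form['site_information']['site_slogan']['#access'] = FALSE;
}
```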
All interface text in modules must be in English. Localization of such texts
+must be handled using the Drupal translation API.
+
All interface texts must be provided with a context. This supports separation
+between the same text used in different contexts. Unless explicitly stated
+otherwise the module machine name should be used as the context.
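
For example, assuming a hypothetical module with the machine name dpl_loans:

```php
// The module machine name is used as the translation context.
$this->t('Your loans', [], ['context' => 'dpl_loans']);
```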
The project uses package managers to handle code which is developed outside of
+the Core project repository. Such code must not be committed to the Core project
+repository.
+
The project uses two package managers for this:
+
+
Composer - primarily for managing PHP packages,
+ Drupal modules and other code libraries which are executed at runtime in the
+ production environment.
+
Yarn - primarily for managing code needed to establish
+ the pipeline for managing frontend assets like linting, preprocessing and
+ optimization of JavaScript, CSS and images.
+
+
When specifying third party package versions the project follows these
+guidelines:
The project uses patches rather than forks to modify third party packages. This
+makes maintenance of modified packages easier and avoids a collection of forked
+repositories within the project.
+
+
Use an appropriate method for the corresponding package manager for managing
+ the patch.
+
Patches should be external by default. In rare cases it may be needed to
+ commit them as a part of the project.
+
When providing a patch you must document the origin of the patch e.g. through
+ an url in a commit comment or preferably in the package manager configuration
+ for the project.
Code may return null or an empty array for empty results but must throw
+exceptions for signalling errors.
+
When throwing an exception the exception must include a meaningful error message
to make debugging easier. When rethrowing an exception, the original exception
must be included to expose the full stack trace.
+
When handling an exception code must either log the exception and continue
+execution or (re)throw the exception - not both. This avoids duplicate log
+content.
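
A small sketch of these rules in practice (the client and exception classes are
hypothetical):

```php
try {
  $loans = $client->getLoans($patronId);
}
catch (ApiRequestException $e) {
  // Rethrow with a meaningful message and include the original exception so
  // the full stack trace is preserved. The exception is not logged here; the
  // caller must either log it or rethrow it - never both.
  throw new LoanRetrievalException(
    sprintf('Unable to retrieve loans for patron %s.', $patronId),
    0,
    $e
  );
}
```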
Modules integrating with third party services must implement a Drupal setting
+for logging requests and responses and provide a way to enable and disable this
+at runtime using the administration interface. Sensitive information (such as
+passwords, CPR-numbers or the like) must be stripped or obfuscated in the logged
+data.
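
An illustrative sketch of such a runtime toggle (configuration name, key and
logger channel are assumptions):

```php
$settings = \Drupal::config('dpl_example.settings');
if ($settings->get('log_api_calls')) {
  // Sensitive values must be stripped or obfuscated before logging.
  $sanitized_body = preg_replace('/"cpr"\s*:\s*"[^"]*"/', '"cpr":"***"', $response_body);
  \Drupal::logger('dpl_example')->debug('Response from @uri: @body', [
    '@uri' => $uri,
    '@body' => $sanitized_body,
  ]);
}
```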
Code comments which describe what an implementation does should only be used
+for complex implementations usually consisting of multiple loops, conditional
+statements etc.
+
Inline code comments should focus on why an unusual implementation has been
+implemented the way it is. This may include references to such things as
+business requirements, odd system behavior or browser inconsistencies.
Commit messages in the version control system help all developers understand the
+current state of the code base, how it has evolved and the context of each
+change. This is especially important for a project which is expected to have a
+long lifetime.
+
Commit messages must follow these guidelines:
+
+
Each line must not be more than 72 characters long
+
The first line of your commit message (the subject) must contain a short
+ summary of the change. The subject should be kept around 50 characters long.
+
The subject must be followed by a blank line
+
Subsequent lines (the body) should explain what you have changed and why the
+ change is necessary. This provides context for other developers who have not
+ been part of the development process. The larger the change the more
+ description in the body is expected.
+
If the commit is a result of an issue in a public issue tracker,
+ platform.dandigbib.dk, then the subject must start with the issue number
+ followed by a colon (:). If the commit is a result of a private issue tracker
+ then the issue id must be kept in the commit body.
+
+
When creating a pull request the pull request description should not contain any
+information that is not already available in the commit messages.
The project aims to automate compliance checks as much as possible using static
+code analysis tools. This should make it easier for developers to check
+contributions before submitting them for review and thus make the review process
+easier.
In general all tools must be able to run locally. This allows developers to get
+quick feedback on their work.
+
Tools which provide automated fixes are preferred. This reduces the burden of
+keeping code compliant for developers.
+
Code which is to be exempt from these standards must be marked accordingly in
+the codebase - usually through inline comments (Eslint,
+PHP Codesniffer).
+This must also include a human readable reasoning. This ensures that deviations
+do not affect future analysis and the Core project should always pass through
+static analysis.
+
If there are discrepancies between the automated checks and the standards
+defined here then developers are encouraged to point this out so the automated
+checks or these standards can be updated accordingly.
Setting up a new site for testing certain scenarios can be repetitive. To avoid
+this the project provides a module: DPL Config Import. This module can be used
+to import configuration changes into the site and install/uninstall modules in a
+single step.
+
The configuration changes are described in a YAML file with configuration entry
+keys and values as well as module ids to install or uninstall.
The YAML file has two root elements: configuration and modules.

A basic file looks like this:

configuration:
  # Add keys for configuration entries to set.
  # Values will be merged with existing values.
  system.site:
    # Configuration values can be set directly
    slogan: 'Imported by DPL config import'
    # Nested configuration is also supported
    page:
      # All values in nested configuration must have a key. This is required to
      # support numeric configuration keys.
      403: '/user/login'

modules:
  # Add module ids to install or uninstall
  install:
    - menu_ui
  uninstall:
    - redis
+
There are multiple types of DPL CMS sites all using the same code base:
+
+
Developer (In Danish: Programmør) sites where the library is entirely free
+ to work with the codebase for DPL CMS as they please for their site
+
Webmaster sites where the library can install and
+ manage additional modules for their DPL CMS site
+
Editor (In Danish: Redaktør) sites where the library can configure their
+ site based on predefined configuration options provided by DPL CMS
+
Core sites which are default versions of DPL CMS used for development and
+ testing purposes
+
+
All these site types must support the following properties:
+
+
It must be possible for system administrators to deploy new versions of
+ DPL CMS which may include changes to the site configuration
+
It must be possible for libraries to configure their site based on the
options provided by their site type. This configuration must not be
+ overridden by new versions of DPL CMS.
This can be split into different types of configuration:
+
+
Core configuration: This is the configuration for the base installation of
+ DPL CMS which is shared across all sites. The configuration will be imported
+ on deployment to support central development of the system.
+
Local configuration: This is the local configuration for the individual
+ site. The level of configuration depends on the site type but no matter the
+ type this configuration must not be overridden on deployment of new versions
+ of DPL CMS.
NB: It is important that the official Drupal deployment procedure is followed.
+Database updates must be executed before configuration is imported. Otherwise
+we risk ending up in a situation where the configuration contains references
+to modules which are not enabled.
+
+
+
+
+
Creating update hooks for modules is only necessary once we have sites
running in production which will not be reinstalled. Until then it is OK to
enable/uninstall modules as normal and commit changes to core.extension.
\ No newline at end of file
diff --git a/dpl-cms/diagrams/add-or-update-translation.puml b/dpl-cms/diagrams/add-or-update-translation.puml
new file mode 100644
index 00000000..e574e9af
--- /dev/null
+++ b/dpl-cms/diagrams/add-or-update-translation.puml
@@ -0,0 +1,6 @@
+@startuml
+Translator -> Poeditor: Translates strings
+Translator -> Poeditor: Pushes button to export strings to GitHub
+Poeditor -> GitHub: Commits translations to GitHub (develop)
+DplCms -> GitHub: By manually requesting or by a cron job translations are imported to DPL CMS.
+@enduml
diff --git a/dpl-cms/diagrams/adgangsplatform-login.puml b/dpl-cms/diagrams/adgangsplatform-login.puml
new file mode 100644
index 00000000..0dfa5ce1
--- /dev/null
+++ b/dpl-cms/diagrams/adgangsplatform-login.puml
@@ -0,0 +1,56 @@
+@startuml
+actor User as user
+participant DPL_CMS as cms
+participant OpenidConnect as oc
+participant Adgangsplatformen as ap
+user -> cms: Click login into Adgangsplatformen
+cms -> oc: Get authorization url and return url
+oc -> oc: Gets urls and creates oauth state hash
+
+oc -> user: Tell browser to redirect to authorization url
+activate user
+note left
+The authorization url contains:
+* returnurl
+* state hash
+* agency id
+end note
+user -> ap: Redirect to external site using the full authorization url
+ap -> ap: Internal authentication
+
+ap -> user: Redirect to the return url
+deactivate user
+user -> cms: Send Adgangsplatform response
+cms -> oc: Validate values from the Adgangsplatform response
+
+oc -> ap: Request access token
+activate oc
+ap -> oc: Returning access token with expire time stamp
+oc -> ap: Requesting user info
+ap -> oc: Returning user info (UUID)
+deactivate oc
+
+alt First time login - the user is not in Drupal yet
+
+oc -> cms: Create user
+note left
+* Random unique email/username set on user.
+* The UUID from Adgangsplatformen is encrypted
+and used for mapping external user to the Drupal user.
+* The Drupal user gets the Drupal role: Patron.
+end note
+
+else Recurrent login - the user exists in Drupal
+
+oc -> cms: Update user
+note left
+Nothing is updated on the user
+end note
+
+end
+
+
+oc -> cms: Begin Drupal user session.
+oc -> cms: Saving access token in active user session
+oc -> user: Redirecting logged-in user to the front page
+@enduml
diff --git a/dpl-cms/diagrams/adgangsplatform-logout.puml b/dpl-cms/diagrams/adgangsplatform-logout.puml
new file mode 100644
index 00000000..8293592f
--- /dev/null
+++ b/dpl-cms/diagrams/adgangsplatform-logout.puml
@@ -0,0 +1,30 @@
+@startuml
+actor User as user
+participant DPL_CMS as cms
+participant Adgangsplatformen as ap
+user -> cms: Clicks logout
+group The user has an access token
+cms -> ap: Requests the single logout service at Adgangsplatformen\nThe access token is used in the request
+ap -> cms: Response to the cms
+
+cms -> cms: Logs user out by ending session
+note right
+The access token is a part of the user session
+and gets flushed in the procedure.
+end note
+cms -> user: Redirects to front page
+end
+
+group The user has no access token
+cms -> cms: Logs user out by ending session
+cms -> user: Redirects to front page
+end
+note left
+There can be two reasons why the user
+does not have an access token:
+* The user is a "non-adgangsplatformen" user, eg. an editor.
+* Something failed in the access token retrieval
+and the access token is missing.
+end note
+
+@enduml
diff --git a/dpl-cms/diagrams/render-png/adgangsplatform-login.png b/dpl-cms/diagrams/render-png/adgangsplatform-login.png
new file mode 100644
index 00000000..b045fe3b
Binary files /dev/null and b/dpl-cms/diagrams/render-png/adgangsplatform-login.png differ
diff --git a/dpl-cms/diagrams/render-png/adgangsplatform-logout.png b/dpl-cms/diagrams/render-png/adgangsplatform-logout.png
new file mode 100644
index 00000000..4cece783
Binary files /dev/null and b/dpl-cms/diagrams/render-png/adgangsplatform-logout.png differ
diff --git a/dpl-cms/diagrams/render-svg/adgangsplatform-login.svg b/dpl-cms/diagrams/render-svg/adgangsplatform-login.svg
new file mode 100644
index 00000000..8eb693be
--- /dev/null
+++ b/dpl-cms/diagrams/render-svg/adgangsplatform-login.svg
@@ -0,0 +1,66 @@
+
\ No newline at end of file
diff --git a/dpl-cms/diagrams/render-svg/adgangsplatform-logout.svg b/dpl-cms/diagrams/render-svg/adgangsplatform-logout.svg
new file mode 100644
index 00000000..084ba9b4
--- /dev/null
+++ b/dpl-cms/diagrams/render-svg/adgangsplatform-logout.svg
@@ -0,0 +1,40 @@
+
\ No newline at end of file
diff --git a/dpl-cms/example-content/index.html b/dpl-cms/example-content/index.html
new file mode 100644
index 00000000..0fa1964c
--- /dev/null
+++ b/dpl-cms/example-content/index.html
@@ -0,0 +1,2092 @@
Example content - DDF/DPL Documentation
Determine the UUIDs for the entities which should be exported as default
+ content. The easiest way to do this is to enable the Devel module, view
+ the entity and go to the Devel tab.
+
Add the UUID(s) (and if necessary entity types) to the
+ dpl_example_content.info.yml file
+
Export the entities by running drush default-content:export-module dpl_example_content
+
Commit the new files under web/modules/custom/dpl_example_content
The Markdown files in this directory document the system as it is. E.g. you can
read about how to add a new entry to the core configuration.
+
The ./architecture folder contains our
+Architectural Decision Records
that describe the reasoning behind key architecture decisions. Consult these
+records if you need background on some part of the CMS, or plan on making any
+modifications to the architecture.
+
As for the remaining files and directories
+
+
./diagrams contains diagram files like draw.io or PlantUML and
+ rendered diagrams in png/svg format. See the section for details.
+
./images this is just plain images used by documentation files.
We strive to keep the diagrams and illustrations used in the documentation as
+maintainable as possible. A big part of this is our use of programmatic
+diagramming via PlantUML and Open Source based
+manual diagramming via diagrams.net (formerly
+known as draw.io).
+
When a change has been made to a *.puml or *.drawio file, you should
+re-render the diagrams using the command task render and commit the result.
One such type of environment is pull request environments.
+These environments are automatically created when a developer creates a pull
+request with a change against the project and allows developers and project
+owners to test the result before the change is accepted.
Create a pull request for the change on GitHub. The pull request must be
+ created from a branch in the same repository as the target branch.
+
Wait for GitHub Actions related to Lagoon deployment to complete. Note: This
+ deployment process can take a while. Be patient.
+
A link to the deployed environment is available in the section between pull
+ request activity and Actions
+
The environment is deleted when the pull request is closed
+
+
Access the administration interface for a pull request environment
+
Accessing the administration interface for a pull request environment may be
+needed to test certain functionalities. This can be achieved in two ways:
The .lagoon.yml has an environments section where it is possible to control
various settings.
On the root level you specify the environment you want to address (e.g. main),
and on the sub level of that you can define the cron settings.
The cron settings for the main branch look (at the moment of writing)
like this:
If you want to have cron running on a pull request environment, you have to
make a similar block under the environment name of the PR.
Example: In case you had a PR with the number #135 it would look
like this:
This way of making sure cron is running in the PR environments is
a bit tedious, but it follows the way Lagoon handles it.
A suggested workflow could be:
+
+
Create a PR with code changes as normal
+
Write the .lagoon.yml configuration block connected to the current PR #
+
When the PR has been approved you delete the configuration block again
Copy database from Lagoon environment to local setup
+
Prerequisites:
+
+
Login credentials to the Lagoon UI, or an existing database dump
+
+
The following describes how to first fetch a database-dump and then import the
+dump into a running local environment. Be aware that this only gives you the
+database, not any files from the site.
+
+
To retrieve a database-dump from a running site, consult the
+ "How do I download a database dump?"
guide in the official Lagoon documentation. Skip this step if you already have a
+ database-dump.
+
Place the dump in the database-dump directory, be aware
+ that the directory is only allowed to contain a single .sql file.
+
Start a local environment using task dev:reset
+
Import the database by running task dev:restore:database
+
+
Copy files from Lagoon environment to local setup
+
Prerequisites:
+
+
Login credentials to the Lagoon UI, or an existing nginx files dump
+
+
The following describes how to first fetch a files backup package
+and then replace the files in a local environment.
+
If you need to get new backup files from the remote site:
+
+
+
Login to the lagoon administration and navigate to the project/environment.
+
Select the backup tab:
+
+
+
+
Retrieve the files backup you need:
+
+
+4. Due to a UI bug you need to RELOAD the window and then it should be possible
+ to download the nginx package.
+
+
+
Replace files locally:
+
+
Place the files dump in the files-backup directory, be aware
+ that the directory is only allowed to contain a single .tar.gz file.
+
Start a local environment using task dev:reset
+
Restore the files by running task dev:restore:files
+
+
Get a specific release of dpl-react - without using composer install
+
In a development context it is not very handy to only be able to get the
latest version of the main branch of dpl-react.

So a command has been implemented that downloads a specific version of the
assets and overwrites the existing library.

You need to specify which branch to get the assets from. The latest HEAD of
the given branch is automatically built by GitHub Actions, so you just need to
specify the branch you want.
We manage translations as a part of the codebase using .po translation files.
+Consequently translations must be part of either official or local translations
+to take effect on the individual site.
+
DPL CMS is configured to use English as the master language but is configured
+to use Danish for all users through language negotiation. This allows us to
+follow a process where English is the default for the codebase but actual usage
+of the system is in Danish.
The following diagram shows how these systems interact to support the flow from
introducing a new translatable string in the codebase to DPL CMS consuming an
updated translation with said string.
+
sequenceDiagram
+ Actor Translator
+ Actor Developer
+ Developer ->> Developer: Open pull request with new translatable string
+ Developer ->> GitHubActions: Merge pull request into develop
+ GitHubActions ->> GitHubActions: Scan codebase and write strings to .po file
+ GitHubActions ->> GitHubActions: Fill .po file with existing translations
+ GitHubActions ->> GitHub: Commit .po file with updated strings
+ GitHubActions ->> Poeditor: Call webhook
+ Poeditor ->> GitHub: Fetch updated .po file
+ Poeditor ->> Poeditor: Synchronize translations with latest strings and translations
+ Translator ->> Poeditor: Translate strings
+ Translator ->> Poeditor: Export strings to GitHub
+ Poeditor ->> GitHub: Commit .po file with updated translations to develop
+ DplCms ->> GitHub: Fetch .po file with latest translations
+ DplCms ->> DplCms: Import updated translations
We decided to implement skeleton screens when loading data. The skeleton screens
are rendered in pure CSS.
The CSS classes come from the library skeleton-screen-css.
The library is very small and based on simple CSS rules, so we could have
considered replicating it in our own design system or making something similar.
But by using the open source library we are ensured, to a certain extent,
that the code is maintained, corrected and evolves as time goes by.

We could also have chosen to use images or GIFs to render the screens.
But by using the simple toolbox of skeleton-screen-css we should be able
to make screens for all the different use cases in the different apps.
It is now possible, with a limited amount of work, to construct skeleton screens
+in the loading state of the various user interfaces.
+
Because we use a library where skeletons are implemented purely in CSS
+we also provide a solution which can be consumed in any technology
+already using the design system without any additional dependencies,
+client side or server side.
+
BEM rules when using Skeleton Screen Classes in dpl-design-system
+
Because we want to use existing styling setup in conjunction
+with the Skeleton Screen Classes we sometimes need to ignore the existing
BEM rules that we normally comply with.
See e.g. the search result styling.
DPL Design System is a library of UI components that should be used as a common
+base system for "Danmarks Biblioteker" / "Det Digitale Folkebibliotek". The
+design is implemented
+with Storybook
+/ React and is output with HTML markup and css-classes
+through an addon in Storybook.
+
The codebase follows the naming that designers have used in Figma closely
+to ensure consistency.
Consequently, if you want to use the design system as an NPM package or if you
+use a project that depends on the design system as an NPM package you must
+authenticate:
The project is automatically built and deployed
+on pushes to every branch and every tag and the result is available as releases
+which support both types of usage. This applies for the original
+repository on GitHub and all GitHub forks.
+
You can follow the status of deployments in the Actions list for the repository
+on GitHub.
+The action logs also contain additional details regarding the contents and
+publication of each release. If using a fork then deployment actions can be
+seen on the corresponding list.
+
In general consuming projects should prefer tagged releases as they are stable
+proper releases.
+
During development where the design system is being updated in parallel with
+the implementation of a consuming project it may be advantageous to use a
+release tagging a branch.
The project automatically creates a release for each branch.
+
Example: Pushing a commit to a new branch feature/reservation-modal will
+create the following parts:
+
+
A git tag for the commit release-feature/reservation-modal. A tag is needed
+ to create a GitHub release.
+
A GitHub release for the branch called feature/reservation-modal. The build is
+ attached here.
+
A package in the local npm repository tagged feature-reservation-modal.
+ Special characters like / are not supported by npm tags and are converted
+ to -.
+
+
Updating the branch will update all parts accordingly.
This will update your package.json and lock files accordingly. Note that
+branch releases use temporary versions in the format 0.0.0-[GIT-SHA] and you
+may in practice see these referenced in both files.
+
If you push new code to the branch you have to update the version used in the
+consuming project:
Find the release for the branch on the releases page on GitHub
+and download the dist.zip file from there and use it as needed in the
+consuming project.
+
If your branch belongs to a fork then you can find the release on the releases
+page for the fork.
+
Repeat the process if you push new code to the branch.
We are using Chromatic for visual testing. You can access the dashboard
+under the danskernesdigitalebibliotek (organisation) dpl-design-system
+(project).
Storybook is an
+open source tool for building UI components and pages in isolation from your
+app's business logic, data, and context. Storybook helps you document components
+for reuse and automatically visually test your components to prevent bugs. It
+promotes the component-driven process and agile
+development.
+
It is possible to extend Storybook with an ecosystem of addons that help you do
+things like fine-tune responsive layouts or verify accessibility.
The Storybook interface is simple and intuitive to use. Browse the project's
+stories now by navigating to them in the sidebar.
+
The stories are placed in a flat structure, where developers should not spend
time thinking of structure, since we want to keep all parts of the system under
a heading called Library. This Library is then divided into folders where
common parts are kept together.

To expose to the user how we think these parts stitch together, for example
for the new website, we have a heading called Blocks, to resemble the CMS
blocks a user can expect to find when building pages in the chosen CMS.
+
This could replicate in to mobile applications, newsletters etc. all pulling
+parts from the Library.
+
Each story has a corresponding .stories file. View their code in the
+src/stories directory to learn how they work.
+The stories file is used to add the component to the Storybook interface via
+the title. Start the title with "Library" or "Blocks" and use / to
divide into folders, e.g. Library / Buttons / Button.
There are many other helpful addons to customise the usage and experience.
+Additional addons used for this project:
+
+
+
HTML / storybook-addon-html:
+ This addon is used to display compiled HTML markup for each story and make it
+ easier for developers to grab the code. Because we are developing with React,
+ it is necessary to be able to show the HTML markup with the css-classes to
+ make it easier for other developers that will implement it in the future.
+ If a story has controls the HTML markup changes according to the controls that
+ are set.
+
+
+
Designs / storybook-addon-designs:
+ This addon is used to embed Figma in the addon panel for a better
+ design-development workflow.
+
+
+
A11Y:
+ This addon is used to check the accessibility of the components.
+
+
+
All the addons can be found in the storybook/main.js file.
To display some components (e.g. Colors, Spacing) in a more presentable way, we
+are using some "internal" css-classes which can be found in the
+styles/internal.scss file. All css-classes with "internal" in the front should
+therefore be ignored in the HTML markup.
The Danish Libraries needed a platform for hosting a large number of Drupal
+installations. As it was unclear exactly how to build such a platform and how
+best to fulfill a number of requirements, a Proof Of Concept project was
+initiated to determine whether to use an existing solution or build a platform
+from scratch.
When using and integrating with Lagoon we should strive to
+
+
Make as little modifications to Lagoon as possible
+
Whenever possible, use the defaults, recommendations and best practices
+ documented on eg. docs.lagoon.sh
+
+
We do this to keep true to the initial thought behind choosing Lagoon as a
+platform that gives us a lot of functionality for a (comparatively) small
+investment.
The main alternative that was evaluated was to build a platform from scratch.
While this might have led to a more customized solution that more closely
matched the requirements of the libraries, it would also have required a very
large initial investment and a large ongoing investment to keep the platform
maintained and updated.
+
We could also choose to fork Lagoon, and start making heavy modifications to the
+platform to end up with a solution customized for our needs. The downsides of
this approach have already been outlined.
+The platform is required to be able to handle an estimated 275.000 page-views
+per day spread out over 100 websites. A visit to a website that causes the
+browser to request a single html-document followed by a number of assets is only
+counted as a single page-view.
+
On a given day we expect about half of the page-views to be made by an
authenticated user. We furthermore expect the busiest site to receive about 12%
of the traffic.

Given these numbers, we can make some estimates of the expected average load.
To stay fairly conservative we will still assume that about 50% of the traffic
is anonymous and can thus be cached by Varnish, but we will assume that all
sites get traffic as if they were the busiest site on the platform (12%).

12% of 275.000 requests gives us an average of 33.000 requests. To keep to the
conservative side, we concentrate the load in a period of 8 hours. We then end
up with roughly 1 page-view pr. second (33.000 requests / 8 hours ≈ 1.15
requests pr. second).
The platform is hosted on a Kubernetes cluster on which Lagoon is installed.
+As of late 2021, Lagoons approach to handling rightsizing of PHP- and in
+particular Drupal-applications is based on a number factors:
+
+
Web workloads are extremely spiky. While a site may look to have a sustained
load of 5 rps when viewed from afar, it will in fact have anything from
(e.g.) 0 to 20 simultaneous users in a given second.
+
Resource requirements are very ephemeral. Even though a request has a peak
memory usage of 128MB, it only requires that amount of memory for a very
short period of time.
+
Kubernetes nodes have a limit on how many pods will fit on a given node.
This will constrain the scheduler from scheduling too many pods to a node
even if the workloads have declared a very low resource request.
+
With metrics-server enabled, Kubernetes will keep track of the actual
+ available resources on a given node. So, a node with eg. 4GB ram, hosting
+ workloads with a requested resource allocation of 1GB, but actually taking
+ up 3.8GB of ram, will not get scheduled another pod as long as there are
other nodes in the cluster that have more resources available.
+
+
The consequence of the above is that we can pack a lot more workload onto a single
+node than what would be expected if you only look at the theoretical maximum
+resource requirements.
Combining these numbers we can see that a site that is scheduled as if it only
+uses 20 Megabytes of memory, can in fact take up to 20 Gigabytes. The main thing
that keeps this from happening in practice is a combination of the above
+assumptions. No node will have more than a limited number of pods, and on a
+given second, no site will have nearly as many inbound requests as it could have.
Lagoon is a very large and complex solution. Any modification to Lagoon will
+need to be well tested, and maintained going forward. With this in mind, we
+should always strive to use Lagoon as it is, unless the alternative is too
+costly or problematic.
+
Based on real-life operations feedback from Amazee (creators of Lagoon) and the
+context outline above we will
+
+
Leave the Lagoon defaults as they are, meaning most pods will request 10Mi of
+ memory.
+
Let the scheduler be informed by runtime metrics instead of up front pod
+ resource requests.
+
Rely on the node maximum pods to provide some horizontal spread of pods.
We've inspected the process Lagoon uses to deploy workloads, and determined that
+it would be possible to alter the defaults used without too many modifications.
+
The build-deploy-docker-compose.sh script that renders the manifests that
describe a site's workloads via Helm includes a
+service-specific values-file. This file can be used to
modify the defaults for the Helm chart. By creating a custom container image for
+the build-process based on the upstream Lagoon build image, we can deliver our
+own version of this image.
+
As an example, the following Dockerfile will add a custom values file for the
+redis service.
Building upon the modification described in the previous chapter, we could go
+even further and modify the build-script itself. By inspecting project variables
+we could have the build-script pass in eg. a configurable value for
+replicaCount for a pod. This would allow us to introduce a
+small/medium/large concept for sites. This could be taken even further to eg.
+introduce whole new services into Lagoon.
This could lead to problems for sites that require a lot of resources, but
+given the expected average load, we do not expect this to be a problem even if
+a site receives an order of magnitude more traffic than the average.
+
The approach to rightsizing may also be a bad fit if we see a high concentration
+of "non-spiky" workloads. We know for instance that Redis and in particular
+Varnish is likely to use a close to constant amount of memory. Should a lot of
+Redis and Varnish pods end up on the same node, evictions are very likely to
+occur.
+
The best way to handle these potential situations is to be knowledgeable about
+how to operate Kubernetes and Lagoon, and to monitor the workloads as they are
+in use.
We have tried installing Alertmanager and testing it.
It works, and given the various possibilities of defining
+ alert rules we consider
+ the demands to be fulfilled.
+
We will be able to get alerts regarding thresholds on both container and
+ cluster level which is what we need.
+
Alertmanager fits in the general focus of being cloud agnostic. It is
+ CNCF approved
+ and does not have any external infrastructure dependencies.
Lagoon requires a site to be deployed from a Git repository containing a
.lagoon.yml and a docker-compose.yml.
A potential logical consequence of this is that we require a Git repository per
site we want to deploy, and that we require that repository to maintain those
two files.
+
Administering the creation and maintenance of 100+ Git repositories cannot be
done manually without risk of inconsistency and errors. The industry best
practice for administering large-scale infrastructure is to follow a declarative
Infrastructure as Code (IaC)
pattern. By keeping the approach declarative it is much easier for automation
to reason about the intended state of the system.
+
Furthermore, in a standard Lagoon setup, Lagoon is connected to the "live"
application repository that contains the source code you wish to deploy. In this
approach Lagoon will just deploy whatever the HEAD of a given branch points to.
In our case, we perform the build of a site's source release separately from
deploying it, which means the site's repository needs to be updated with a
release version whenever we wish it to be updated. This is not a problem for a
small number of
+sites - you can just update the repository directly - but for a large set of
+sites that you may wish to administer in bulk - keeping track of which version
+is used where becomes a challenge. This is yet another good case for declarative
+configuration: Instead of modifying individual repositories by hand to deploy,
+we would much rather just declare that a set of sites should be on a specific
+release and then let automation take over.
+
While there is no authoritative discussion of imperative vs. declarative IaC,
+the following quote from an OVH Tech Blog
+summarizes the current consensus in the industry pretty well:
+
+
In summary declarative infrastructure tools like Terraform and CloudFormation
+offer a much lower overhead to create powerful infrastructure definitions that
+can grow to a massive scale with minimal overheads. The complexities of hierarchy,
+timing, and resource updates are handled by the underlying implementation so
+you can focus on defining what you want rather than how to do it.
+
The additional power and control offered by imperative style languages can be
+a big draw but they also move a lot of the responsibility and effort onto the
+developer, be careful when choosing to take this approach.
We administer the deployment to and the Lagoon configuration of a library site
+in a repository per library. The repositories are provisioned via Terraform, which
+reads in a central sites.yaml file. The same file is used as input for the
+automated deployment process which renders the various files contained in the
+repository, including a reference to which release of DPL-CMS Lagoon should use.
+
It is still possible to create and maintain sites on Lagoon independent of this
+approach. We can for instance create a separate project for the dpl-cms
+repository to support core development.
We could have run each site as a branch off a single large repository.
+This was rejected as a possibility as it would have made the administration of
+access to a given library's deployed revision hard to control. By using individual
+repositories we have the option of granting an outside developer access to a
+full repository without affecting any others.
A record should attempt to capture the situation that led to the need for a
+discrete choice to be made, and then proceed to describe the core of the
+decision, its status and the consequences of the decision.
+
To summarize, an ADR could contain the following sections (quoted from the above
+article):
+
+
+
Title: These documents have names that are short noun phrases. For example,
+ "ADR 1: Deployment on Ruby on Rails 3.0.10" or "ADR 9: LDAP for Multitenant Integration"
+
+
+
Context: This section describes the forces at play, including technological,
+ political, social, and project local. These forces are probably in tension,
+ and should be called out as such. The language in this section is value-neutral.
+ It is simply describing facts.
+
+
+
Decision: This section describes our response to these forces. It is stated
+ in full sentences, with active voice. "We will …"
+
+
+
Status: A decision may be "proposed" if the project stakeholders haven't
+ agreed with it yet, or "accepted" once it is agreed. If a later ADR changes
+ or reverses a decision, it may be marked as "deprecated" or "superseded" with
+ a reference to its replacement.
+
+
+
Consequences: This section describes the resulting context, after applying
+ the decision. All consequences should be listed here, not just the "positive"
+ ones. A particular decision may have positive, negative, and neutral consequences,
+ but all of them affect the team and project in the future.
The Alertmanager we use automatically
+ties into the metrics of Prometheus, but in order to make it work the configuration
+and alert rules need to be set up.
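+
As an illustration of how such rules can be defined via the Prometheus Operator,
+the following is a minimal sketch of a PrometheusRule manifest that alerts when a
+container uses more than 90% of its memory limit. The name, namespace and label
+selector are assumptions and must match the actual Prometheus installation:
+
+apiVersion: monitoring.coreos.com/v1
+kind: PrometheusRule
+metadata:
+  name: container-memory-alerts   # hypothetical name
+  namespace: prometheus           # assumed namespace
+  labels:
+    release: promstack            # assumed label the operator selects rules by
+spec:
+  groups:
+    - name: container-resources
+      rules:
+        - alert: ContainerMemoryNearLimit
+          expr: |
+            sum(container_memory_working_set_bytes{container!=""}) by (namespace, pod)
+              / sum(kube_pod_container_resource_limits{resource="memory"}) by (namespace, pod) > 0.9
+          for: 10m
+          labels:
+            severity: warning
+          annotations:
+            summary: "Pod {{ $labels.pod }} is using more than 90% of its memory limit"
+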
Architecture Decision Records (ADR) describe the reasoning behind key
+ decisions made during the design and implementation of the platform's
+ architecture. These documents stand apart from the remaining documentation in
+ that they keep a historical record, while the rest of the documentation is a
+ snapshot of the current system.
The DPL-CMS Drupal sites utilize a multi-tier caching strategy. HTTP responses
+are cached by Varnish and Drupal caches its various internal data-structures
+in a Redis key/value store.
All inbound requests are passed in to an Ingress Nginx controller which
+ forwards the traffic for the individual sites to their individual Varnish
+ instances.
+
Varnish serves up any static or anonymous responses it has cached from its
+ object-store.
+
If the request is a cache miss the request is passed further on to Nginx, which
+ serves any requests for static assets.
+
If the request is for a dynamic page the request is forwarded to the Drupal
+ installation hosted by PHP-FPM.
+
Drupal bootstraps, and produces the requested response.
+
During this process it will either populate or reuse its cache which is stored
+ in Redis.
+
Depending on the request Drupal will execute a number of queries against
+ MariaDB and a search index.
DPL-CMS is configured to use Redis as the backend for its core cache as an
+alternative to the default use of the SQL database as backend. This ensures that
+ a busy site does not overload the shared MariaDB server.
A DPL Platform environment consists of a range of infrastructure components
+on top of which we run a managed Kubernetes instance into which we install a
+number of software products. One of these is Lagoon
+which gives us a platform for hosting library sites.
+
An environment is created in two separate stages. First all required
+infrastructure resources are provisioned, then a semi-automated deployment
+process is carried out which configures all the various software components that
+make up an environment. Consult the relevant runbooks and the
+DPL Platform Infrastructure documents for the
+guides on how to perform the actual installation.
+
This document describes all the parts that make up a platform environment,
+ranging from the infrastructure to the sites.
All resources of a Platform environment are contained in a single Azure Resource
+Group. The resources are provisioned via a Terraform setup
+that keeps its resources in a separate resource group.
+
The overview of current platform environments along with the various urls and
+a summary of their primary configurations can be found in the
+Current Platform environments document.
+
A platform environment uses the following Azure infrastructure resources.
+
+
+
A virtual Network - with a subnet, configured with access to a number of services.
+
Separate storage accounts for
+
Monitoring data (logs)
+
Lagoon files (eg. results of running user-triggered administrative actions)
+
Backups
+
Drupal site files
+
A MariaDB used to host the sites' databases.
+
A Key Vault that holds administrative credentials to resources that Lagoon
+ needs administrative access to.
+
An Azure Kubernetes Service cluster that hosts the platform itself.
+
Two Public IPs: one for ingress, one for egress.
+
+
The Azure Kubernetes Service in turn creates its own resource group that
+contains a number of resources that are automatically managed by the AKS service.
+AKS also has a managed control-plane component that is mostly invisible to us.
+It has a separate managed identity which we need to grant access to any
+additional infrastructure-resources outside the "MC" resource-group that we
+need AKS to manage.
The Platform consists of a number of software components deployed into the
+AKS cluster. The components are generally installed via Helm,
+and their configuration controlled via values-files.
+
Essential configurations such as the urls for the sites can be found in the wiki.
+
The following sections will describe the overall role of the component and how
+it integrates with other components. For more details on how the component is
+configured, consult the corresponding values-file for the component found in
+the individual environment's configuration
+folder.
Lagoon is an open source Platform as a Service
+created by Amazee. The platform builds on top of a
+Kubernetes cluster, and provides features such as automated builds and the
+hosting of a large number of sites.
Kubernetes does not come with an Ingress Controller out of the box. An ingress
+controller's job is to accept traffic approaching the cluster, and route it via
+services to pods that have requested ingress traffic.
+
We use the widely used Ingress Nginx
+Ingress controller.
Cert Manager allows an administrator to specify
+a request for a TLS certificate, eg. as a part of an Ingress, and have the
+request automatically fulfilled.
+
The platform uses a cert-manager configured to handle certificate requests via
+Let's Encrypt.
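+
As a sketch of what such a request can look like, the following Ingress asks
+cert-manager for a certificate via an annotation. The issuer name, hostname and
+backend service are assumptions and will differ in the actual platform:
+
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: example-site
+  annotations:
+    cert-manager.io/cluster-issuer: letsencrypt   # assumed issuer name
+spec:
+  tls:
+    - hosts:
+        - example-site.example.com
+      secretName: example-site-tls
+  rules:
+    - host: example-site.example.com
+      http:
+        paths:
+          - path: /
+            pathType: Prefix
+            backend:
+              service:
+                name: varnish   # assumed backend service
+                port:
+                  number: 8080
+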
Prometheus is a time series database used by the platform
+to store and index runtime metrics from both the platform itself and the sites
+running on the platform.
+
Prometheus is configured to scrape and ingest the following sources
Prometheus is installed via an Operator
+which amongst other things allows us to configure Prometheus and Alertmanager via
+ ServiceMonitor and AlertmanagerConfig.
+
Alertmanager handles
+the delivery of alerts produced by Prometheus.
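+
A minimal sketch of how alert delivery can be configured via an AlertmanagerConfig
+resource is shown below. The resource name, namespace, secret and channel are
+assumptions:
+
+apiVersion: monitoring.coreos.com/v1alpha1
+kind: AlertmanagerConfig
+metadata:
+  name: platform-alerts          # hypothetical name
+  namespace: prometheus          # assumed namespace
+spec:
+  route:
+    receiver: slack-notifications
+    groupBy: ["namespace"]
+  receivers:
+    - name: slack-notifications
+      slackConfigs:
+        - apiURL:
+            name: slack-webhook  # assumed secret holding the webhook URL
+            key: url
+          channel: "#platform-alerts"
+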
Grafana provides the graphical user-interface
+to Prometheus and Loki. It is configured with a number of data sources via its
+values-file, which connects it to Prometheus and Loki.
Loki stores and indexes logs produced by the pods
+ running in AKS. Promtail
+streams the logs to Loki, and Loki in turn makes the logs available to the
+administrator via Grafana.
Each individual library has a Github repository that describes which sites
+should exist on the platform for the library. The creation of the repository
+and its contents is automated, and controlled by an entry in a sites.yaml
+file shared by all sites on the platform.
+
Consult the following runbooks to see the procedures for:
sites.yaml is found in infrastructure/environments/<environment>/sites.yaml.
+The file contains a single map, where the configuration of the
+individual sites is contained under the property sites.<unique site key>, eg.
+
+sites:
+ # Site objects are indexed by a unique key that must be a valid lagoon, and
+ # github project name. That is, alphanumeric and dashes.
+ core-test1:
+ name: "Core test 1"
+ description: "Core test site no. 1"
+ # releaseImageRepository and releaseImageName describe where to pull the
+ # container image for a release from.
+ releaseImageRepository: ghcr.io/danskernesdigitalebibliotek
+ releaseImageName: dpl-cms-source
+ # Sites can optionally specify primary and secondary domains.
+ primary-domain: core-test.example.com
+ # Fully configured sites will have a deployment key generated by Lagoon.
+ deploy_key: "ssh-ed25519 <key here>"
+ bib-ros:
+ name: "Roskilde Bibliotek"
+ description: "Webmaster environment for Roskilde Bibliotek"
+ primary-domain: "www.roskildebib.dk"
+ # The secondary domain will redirect to the primary.
+ secondary-domains: ["roskildebib.dk", "www2.roskildebib.dk"]
+ # A series of sites that shares the same image source may choose to reuse
+ # properties via anchors
+ << : *default-release-image-source
Each platform-site is controlled via a GitHub repository. The repositories are
+provisioned via Terraform. The following depicts the authorization and control-
+flow in use:
+
+
The configuration of each repository is reconciled each time a site is created.
Releases of DPL CMS are deployed to sites via the dpladm
+tool. It consults the sites.yaml file for the environment and performs any
+needed deployment.
We configure all production backups with a backup schedule that ensures that the
+site is backed up at least once a day.
+
Backups executed by the k8up operator follow a backup
+schedule and then use Restic to perform the backup
+itself. The backups are stored in an Azure Blob Container, see the Environment infrastructure
+for a depiction of its place in the architecture.
+
The backup schedule and retention is configured via the individual site's
+.lagoon.yml. The file is re-rendered from a template every time a site is
+deployed. The templates for the different site types can be found as a part
+of dpladm.
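+
As a sketch of what the rendered configuration can contain, the .lagoon.yml could
+hold entries along the following lines. The key names follow Lagoon's backup
+documentation, but the exact schedule and retention values here are purely
+illustrative:
+
+backup-schedule:
+  production: "M H(22-2) * * *"
+backup-retention:
+  production:
+    hourly: 0
+    daily: 7
+    weekly: 6
+    monthly: 1
+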
The following guidelines describe best practices for developing code for the DPL
+Platform project. The guidelines should help achieve:
+
+
A stable, secure and high quality foundation for building and maintaining
+ the platform and its infrastructure.
+
Consistency across multiple developers participating in the project
+
+
Contributions to the core DPL Platform project will be reviewed by members of the
+Core team. These guidelines should inform contributors about what to expect in
+such a review. If a review comment cannot be traced back to one of these
+guidelines it indicates that the guidelines should be updated to ensure
+transparency.
The project follows the Drupal Coding Standards
+and best practices for all parts of the project: PHP, JavaScript and CSS. This
+makes the project recognizable for developers with experience from other Drupal
+projects. All developers are expected to make themselves familiar with these
+standards.
+
The following lists significant areas where the project either intentionally
+expands or deviates from the official standards or areas which developers should
+be especially aware of.
Code comments which describe what an implementation does should only be used
+for complex implementations usually consisting of multiple loops, conditional
+statements etc.
+
Inline code comments should focus on why an unusual implementation has been
+implemented the way it is. This may include references to such things as
+business requirements, odd system behavior or browser inconsistencies.
Commit messages in the version control system help all developers understand the
+current state of the code base, how it has evolved and the context of each
+change. This is especially important for a project which is expected to have a
+long lifetime.
+
Commit messages must follow these guidelines:
+
+
Each line must not be more than 72 characters long
+
The first line of your commit message (the subject) must contain a short
+ summary of the change. The subject should be kept around 50 characters long.
+
The subject must be followed by a blank line
+
Subsequent lines (the body) should explain what you have changed and why the
+ change is necessary. This provides context for other developers who have not
+ been part of the development process. The larger the change the more
+ description in the body is expected.
+
If the commit is a result of an issue in a public issue tracker,
+ platform.dandigbib.dk, then the subject must start with the issue number
+ followed by a colon (:). If the commit is a result of a private issue tracker
+ then the issue id must be kept in the commit body.
+
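A hypothetical commit message following these guidelines could look like this
+(the issue number and wording are made up for illustration):
+
+1234: Increase memory request for Varnish pods
+
+Varnish uses a close to constant amount of memory, so relying on the
+Lagoon default request led to evictions when several Varnish pods were
+scheduled on the same node. Raising the request gives the scheduler the
+information it needs to spread these pods.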
+
When creating a pull request the pull request description should not contain any
+information that is not already available in the commit messages.
The project aims to automate compliance checks as much as possible using static
+code analysis tools. This should make it easier for developers to check
+contributions before submitting them for review and thus make the review process
+easier.
In general all tools must be able to run locally. This allows developers to get
+quick feedback on their work.
+
Tools which provide automated fixes are preferred. This reduces the burden of
+keeping code compliant for developers.
+
Code which is to be exempt from these standards must be marked accordingly in
+the codebase - usually through inline comments (markdownlint,
+ShellCheck).
+This must also include a human readable reasoning. This ensures that deviations
+do not affect future analysis and the Core project should always pass through
+static analysis.
+
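For example, a ShellCheck exemption with a human readable reasoning could be
+marked like this (the command and variable are hypothetical):
+
+# EXTRA_ARGS is intentionally left unquoted so it word-splits into
+# separate arguments.
+# shellcheck disable=SC2086
+some-command $EXTRA_ARGS
+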
If there are discrepancies between the automated checks and the standards
+defined here then developers are encouraged to point this out so the automated
+checks or these standards can be updated accordingly.
This directory contains the Infrastructure as Code and scripts that are used
+for maintaining the infrastructure components that each platform environment
+consists of. A "platform environment" is an umbrella term for the Azure
+infrastructure, the Kubernetes cluster, the Lagoon installation and the set of
+GitHub environments that makes up a single DPL Platform installation.
dpladm/: a tool used for deploying individual sites. The tool can
+ be run manually, but the recommended way is via the common infrastructure Taskfile.
+
environments/: contains a directory for each platform environment.
+
terraform: terraform setup and tooling that is shared between
+ environments.
+
task/: Configuration and scripts used by our Taskfile-based automation.
+ The scripts included in this directory can be run by hand in an emergency,
+ but the recommended way to invoke them is via task.
+
Taskfile.yml: the common infrastructure task
+ configuration. Invoke task to get a list of targets. Must be run from within
+ an instance of DPL shell unless otherwise
+ noted.
The environments directory contains a subdirectory for each platform
+environment. You generally interact with the files and directories within the
+directory to configure the environment. When a modification has been made, it
+is put into effect by running the appropriate task:
+
+
configuration: contains the various configurations required by the
+ applications that are installed on top of the infrastructure. These
+ are used by the support:provision:* tasks.
+
env_repos contains the Terraform root-module for provisioning GitHub site-
+ environment repositories. The module is run via the env_repos:provision task.
+
infrastructure: contains the Terraform root-module used to provision the basic
+ Azure infrastructure components that the platform requires. The module is run
+ via the infra:provision task.
+
lagoon: contains Kubernetes manifests and Helm values-files used for installing
+ the Lagoon Core and Remote that are at the heart of a DPL Platform installation.
+ The module is run via the lagoon:provision:* tasks.
+
+
Basic usage of dplsh and an environment configuration
+
The remaining guides in this document assume that you work from an instance
+of the DPL shell. See the
+DPLSH Runbook for a basic introduction
+to how to use dplsh.
The following describes how to set up a whole new platform environment to host
+ platform sites.
+
The easiest way to set up a new environment is to create a new environments/<name>
+directory and copy the contents of an existing environment replacing any
+references to the previous environment with a new value corresponding to the new
+environment. Take note of the various URLs, and make sure to update the
+Current Platform environments
+documentation.
+
If this is the very first environment, remember to first initialize the Terraform
+setup, see the terraform README.md.
When you have prepared the environment directory, launch dplsh and go through
+the following steps to provision the infrastructure:
+
# We export the variable to simplify the example, you can also specify it inline.
+export DPLPLAT_ENV=dplplat01
+
+# Provision the Azure resources
+task infra:provision
+
+# Create DNS record
+# Create an A record in the administration area of your DNS provider.
+# Take the terraform output: "ingress_ip" of the former command and create an entry
+# like: "*.[DOMAIN_NAME].[TLD]": "[ingress_ip]"
+
+# Provision the support software that the Platform relies on
+task support:provision
+
The previous step has established the raw infrastructure and the Kubernetes support
+projects that Lagoon needs to function. You can proceed to follow the official
+Lagoon installation procedure.
+
The execution of the individual steps of the guide has been somewhat automated.
+The following describes how to use the automation; make sure to follow along
+in the official documentation to understand the steps and some of the
+additional actions you have to take.
+
# The following must be carried out from within dplsh, launched as described
+# in the previous step including the definition of DPLPLAT_ENV.
+
+# 1. Provision a lagoon core into the cluster.
+task lagoon:provision:core
+
+# 2. Skip the steps in the documentation that speaks about setting up email, as
+# we currently do not support sending emails.
+
+# 3. Setup ssh-keys for the lagoonadmin user
+# Access the Lagoon UI (consult the platform-environments.md for the url) and
+# log in with lagoonadmin + the admin password that can be extracted from a
+# Kubernetes secret:
+kubectl \
+  -o jsonpath="{.data.KEYCLOAK_LAGOON_ADMIN_PASSWORD}" \
+  -n lagoon-core \
+  get secret lagoon-core-keycloak \
+  | base64 --decode
+
+# Then go to settings and add the ssh-keys that should be able to access the
+# lagoon admin user. Consider keeping this list short, and instead add
+# additional users with fewer privileges later.
+
+# 4. If your ssh-key is passphrase-protected we'll need to setup an ssh-agent
+# instance:
+$ eval $(ssh-agent); ssh-add
+
+# 5. Configure the CLI and verify access (the cli itself has already been
+# installed in dplsh)
+task lagoon:cli:config
+
+# You can now add additional users, this step is currently skipped.
+
+# (6. Install Harbor.)
+# This step has already been performed as a part of the installation of
+# support software.
+
+# 7. Install a Lagoon Remote into the cluster
+task lagoon:provision:remote
+
+# 8. Register the cluster administered by the Remote with Lagoon Core
+# Notice that you must provide a bearer token via the USER_TOKEN environment-
+# variable. The token can be found in $HOME/.lagoon.yml after a successful
+# "lagoon login"
+USER_TOKEN=<token> task lagoon:add:cluster
+
+
The Lagoon core has now been installed, and the remote registered with it.
+
Setting up a GitHub organization and repositories for a new platform environment
+
Prerequisites:
+
+
A properly authenticated azure CLI (az). See the section on initial
+ Terraform setup for more details on the requirements.
+
+
First create a new administrative github user and create a new organization
+with the user. The administrative user should only be used for administering
+the organization via terraform and its credentials kept as safe as possible! The
+account's password can be used as a last resort for gaining access to the account
+and will not be stored in Key Vault. Thus, make sure to store the password
+somewhere safe, eg. in a password-manager or as a physical printout.
+
This requires the infrastructure to have been created as we're going to store
+credentials into the azure Key Vault.
+
# cd into the infrastructure folder and launch a shell
+(host)$ cd infrastructure
+(host)$ dplsh
+
+# Remaining commands are run from within dplsh
+
+# export the platform environment name.
+# export DPLPLAT_ENV=<name>, eg
+$ export DPLPLAT_ENV=dplplat01
+
+# 1. Create a ssh keypair for the user, eg by running
+# ssh-keygen -t ed25519 -C "<comment>" -f dplplatinfra01_id_ed25519
+# eg.
+$ ssh-keygen -t ed25519 -C "dplplatinfra@0120211014073225" -f dplplatinfra01_id_ed25519
+
+# 2. Then access github and add the public-part of the key to the account
+# 3. Add the key to keyvault under the key name "github-infra-admin-ssh-key"
+# eg.
+$ SECRET_KEY=github-infra-admin-ssh-key SECRET_VALUE=$(cat dplplatinfra01_id_ed25519) \
+  task infra:keyvault:secret:set
+
+# 4. Access GitHub again, and generate a Personal Access Token for the account.
+# The token should
+# - be named after the platform environment (eg. dplplat01-terraform-timestamp)
+# - Have a fairly long expiration - do remember to renew it
+# - Have the following permissions: admin:org, delete_repo, repo
+# 5. Add the access token to Key Vault under the name "github-infra-admin-pat"
+# eg.
+$ SECRET_KEY=github-infra-admin-pat SECRET_VALUE=githubtokengoeshere task infra:keyvault:secret:set
+
+# Our tooling can now administer the GitHub organization
+
+
Renewing the administrative GitHub Personal Access Token
+
The Personal Access Token we use for impersonating the administrative GitHub
+user needs to be recreated periodically:
+
# cd into the infrastructure folder and launch a shell
+(host)$ cd infrastructure
+(host)$ dplsh
+
+# Remaining commands are run from within dplsh
+
+# export the platform environment name.
+# export DPLPLAT_ENV=<name>, eg
+$ export DPLPLAT_ENV=dplplat01
+
+# 1. Access GitHub, and generate a Personal Access Token for the account.
+# The token should
+# - be named after the platform environment (eg. dplplat01-terraform)
+# - Have a fairly long expiration - do remember to renew it
+# - Have the following permissions: admin:org, delete_repo, repo
+# 2. Add the access token to Key Vault under the name "github-infra-admin-pat"
+# eg.
+$ SECRET_KEY=github-infra-admin-pat SECRET_VALUE=githubtokengoeshere \
+  task infra:keyvault:secret:set
+
+# 3. Delete the previous token
+
The setup keeps a single terraform-state per environment. Each state is kept as
+a separate blob in an Azure Storage Account.
+
+
Access to the storage account is granted via a Storage Account Key which is
+kept in an Azure Key Vault in the same resource-group. The key vault, storage account
+and the resource-group that contains these resources are the only resources
+that are not provisioned via Terraform.
The following procedure must be carried out before the first environment can be
+created.
+
Prerequisites:
+
+
An Azure subscription
+
An authenticated azure CLI that is allowed to create resources and grant
+ access to these resources under the subscription, including Key Vaults.
+ The easiest way to achieve this is to grant the user the Owner and
+ Key Vault Administrator roles on the subscription.
+
+
Use the scripts/bootstrap-tf.sh for bootstrapping. After the script has been
+run successfully it outputs instructions for how to set up a terraform module
+that uses the newly created storage-account for state-tracking.
+
As a final step you must grant any administrative users that are to use the setup
+permission to read from the created key vault.
The setup uses an integration with DNSimple to set a domain name when the
+environment's ingress ip has been provisioned. To use this integration first
+obtain an api-key for
+the DNSimple account. Then use scripts/add-dnsimple-apikey.sh to write it to
+the setup's Key Vault and finally add the following section to .dplsh.profile
+(get the subscription id and key vault name from the existing export for ARM_ACCESS_KEY).
Inspect the individual module files for documentation of the resources.
+
The following diagram depicts (amongst other things) the provisioned resources.
+Consult the platform environment documentation
+for more details on the role the various resources play.
+
The dpl-platform-env-repos Terraform module provisions
+the GitHub Git repositories that the platform uses to integrate with Lagoon. Each
+site hosted on the platform has a repository.
+
Inspect variables.tf for a description
+of the required module-variables.
An authenticated az cli (from the host). This likely means az login
+ --tenant TENANT_ID, where the tenant id is that of "DPL Platform". See Azure
+ Portal > Tenant Properties. The logged in user must have permissions to list
+ cluster credentials.
When you want to add a "generic" site to the platform. By Generic we mean a site
+stored in a repository that is Prepared for Lagoon
+and contains a .lagoon.yml at its root.
+
The current main example of such a site is dpl-cms
+which is used to develop the shared DPL install profile.
# From within dplsh:
+
+# Set an environment,
+# export DPLPLAT_ENV=<platform environment name>
+# eg.
+$ export DPLPLAT_ENV=dplplat01
+
+# If your ssh-key is passphrase-protected we'll need to setup an ssh-agent
+# instance:
+$ eval $(ssh-agent); ssh-add
+
+# 1. Authenticate against the cluster and lagoon
+$ task cluster:auth
+$ task lagoon:cli:config
+
+# 2. Add a project
+# PROJECT_NAME=<project name> GIT_URL=<url> task lagoon:project:add
+$ PROJECT_NAME=dpl-cms GIT_URL=git@github.com:danskernesdigitalebibliotek/dpl-cms.git \
+  task lagoon:project:add
+
+# 2.b You can also run lagoon add project manually, consult the documentation linked
+# in the beginning of this section for details.
+
+# 3. Deployment key
+# The project is added, and a deployment key is printed. Copy it and configure
+# the GitHub repository. See the official documentation for examples.
+
+# 4. Webhook
+# Configure Github to post events to Lagoons webhook url.
+# The webhook url for the environment will be
+# https://webhookhandler.lagoon.<environment>.dpl.reload.dk
+# eg for the environment dplplat01
+# https://webhookhandler.lagoon.dplplat01.dpl.reload.dk
+#
+# Refer to the official documentation linked above for an example on how to
+# set up webhooks in github.
+
+# 5. Configure image registry credentials Lagoon should use for the project
+# IF your project references private images in repositories that requires
+# authentication
+# Refresh your Lagoon token.
+$ lagoon login
+
+# Then export a github personal access-token with pull access.
+# We could pass this to task directly like the rest of the variables but we
+# opt for the export to keep the execution of task a bit shorter.
+$ export VARIABLE_VALUE=<github pat>
+
+# Then get the project id by listing your projects
+$ lagoon list projects
+
+# Finally, add the credentials
+$ VARIABLE_TYPE_ID=<project id> \
+  VARIABLE_TYPE=PROJECT \
+  VARIABLE_SCOPE=CONTAINER_REGISTRY \
+  VARIABLE_NAME=GITHUB_REGISTRY_CREDENTIALS \
+  task lagoon:set:environment-variable
+
+# If you get a "Invalid Auth Token" your token has probably expired, generated a
+# new with "lagoon login" and try again.
+
+# 5. Trigger a deployment manually, this will fail as the repository is empty
+# but will serve to prepare Lagoon for future deployments.
+# lagoon deploy branch -p <project-name> -b <branch>
+$lagoondeploybranch-pdpl-cms-bmain
+
For now specify a unique site key (its key in the map of sites), name and
+description. Leave out the deployment key; you will add it in a later step.
+
Sample entry (beware that this example may be out of sync with the environment you
+are operating, so make sure to compare it with existing entries from the
+environment):
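+
The following sketch shows the shape of such an entry; the key, name and
+description are illustrative:
+
+sites:
+  core-test2:
+    name: "Core test 2"
+    description: "Core test site no. 2"
+    << : *default-release-image-source
+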
The last entry merges in a default set of properties for the source of
+release-images. If the site is on the "programmer" plan, specify a custom set of
+properties like so:
+
sites:
+  bib-rb:
+    name: "Roskilde Bibliotek"
+    description: "Roskilde Bibliotek"
+    primary-domain: "www.roskildebib.dk"
+    secondary-domains: ["roskildebib.dk"]
+    dpl-cms-release: "1.2.3"
+    # Github package registry used as an example here, but any registry will
+    # work.
+    releaseImageRepository: ghcr.io/some-github-org
+    releaseImageName: some-image-name
+
+
Be aware that the referenced images need to be publicly available as Lagoon
+currently only authenticates against ghcr.io.
+
Then continue to provision a Github repository for the site.
# From within dplsh:
+
+# Set an environment,
+# export DPLPLAT_ENV=<platform environment name>
+# eg.
+$ export DPLPLAT_ENV=dplplat01
+
+# If your ssh-key is passphrase-protected we'll need to setup an ssh-agent
+# instance:
+$ eval $(ssh-agent); ssh-add
+
+# 1. Authenticate against the cluster and lagoon
+$ task cluster:auth
+$ task lagoon:cli:config
+
+# 2. Add a project
+# PROJECT_NAME=<project name> GIT_URL=<url> task lagoon:project:add
+$ PROJECT_NAME=core-test1 GIT_URL=git@github.com:danishpubliclibraries/env-core-test1.git \
+  task lagoon:project:add
+
+# The project is added, and a deployment key is printed, use it for the next step.
+
+# 3. Add the deployment key to sites.yaml under the key "deploy_key".
+$ vi environments/${DPLPLAT_ENV}/sites.yaml
+# Then update the repositories using Terraform
+$ task env_repos:provision
+
+# 4. Configure image registry credentials Lagoon should use for the project:
+# Refresh your Lagoon token.
+$ lagoon login
+
+# Then export a github personal access-token with pull access.
+# We could pass this to task directly like the rest of the variables but we
+# opt for the export to keep the execution of task a bit shorter.
+$ export VARIABLE_VALUE=<github pat>
+
+# Then get the project id by listing your projects
+$ lagoon list projects
+
+# Finally, add the credentials
+$ VARIABLE_TYPE_ID=<project id> \
+  VARIABLE_TYPE=PROJECT \
+  VARIABLE_SCOPE=CONTAINER_REGISTRY \
+  VARIABLE_NAME=GITHUB_REGISTRY_CREDENTIALS \
+  task lagoon:set:environment-variable
+
+# If you get a "Invalid Auth Token" your token has probably expired, generated a
+# new with "lagoon login" and try again.
+
+# 5. Trigger a deployment manually, this will fail as the repository is empty
+# but will serve to prepare Lagoon for future deployments.
+# lagoon deploy branch -p <project-name> -b <branch>
+$ lagoon deploy branch -p core-test1 -b main
+
+
If you want to deploy a release to the site, continue to
+Deploying a release.
When you want to use the Lagoon API via the CLI. You can connect from the DPL
+Shell, or from a local installation of the CLI.
+
Using the DPL Shell requires administrative privileges to the infrastructure while
+a local cli may connect using only the ssh-key associated with a Lagoon user.
+
This runbook documents both cases, as well as how an administrator can extract the
+basic connection details a standard user needs to connect to the Lagoon installation.
Your ssh-key must be associated with a Lagoon user. This has to be done via the Lagoon
+ UI, either by you for your personal account, or by an administrator who has
+ access to edit your Lagoon account.
If it is missing, go through the steps below and update the document if you have
+access, or ask someone who has.
+
# Launch dplsh.
+$ cd infrastructure
+$ dplsh
+
+# 1. Set an environment,
+# export DPLPLAT_ENV=<platform environment name>
+# eg.
+$ export DPLPLAT_ENV=dplplat01
+
+# 2. Authenticate against AKS, needed by infrastructure and Lagoon tasks
+$ task cluster:auth
+
+# 3. Generate the Lagoon CLI configuration and authenticate
+# The Lagoon CLI is authenticated via ssh-keys. DPLSH will mount your .ssh
+# folder from your homedir, but if your keys are passphrase protected, we need
+# to unlock them.
+$ eval $(ssh-agent); ssh-add
+# Authorize the lagoon cli
+$ task lagoon:cli:config
+
+# List the connection details
+$ lagoon config list
+
# 1. Make any changes to the site's entry in sites.yaml you need.
+# 2. (optional) diff the deployment
+DIFF=1 SITE=<sitename> task site:sync
+# 3a. Synchronize the site, triggering a deployment if the current state differs
+# from the intended state in sites.yaml
+SITE=<sitename> task site:sync
+# 3b. If the state does not differ but you still want to trigger a deployment,
+# specify FORCE=1
+FORCE=1 SITE=<sitename> task site:sync
+
+
The following diagram outlines the full release-flow starting from dpl-cms (or
+a fork) until the release is deployed:
+
This directory contains our operational runbooks for standard procedures
+you may need to carry out while maintaining and operating a DPL Platform
+environment.
+
Most runbooks have the following layout.
+
+
Title - Short title that follows the name of the markdown file for quick
+ lookup.
+
When to use - Outlines when this runbook should be used.
+
Prerequisites - Any requirements that should be met before the procedure is
+ followed.
+
Procedure - Stepwise description of the procedure, sometimes these will
+ be whole subheadings, sometimes just a single section with lists.
+
+
The runbooks should focus on the "How", and avoid explaining the "Why".
Your first step should be to secure any backups you think might be relevant to
+archive. Whether this step is necessary depends on the site. Consult the
+Retrieve and Restore backups runbook for the
+operational steps.
+
You are now ready to perform the actual removal of the site.
+
# Launch dplsh.
+$ cd infrastructure
+$ dplsh
+
+# You are assumed to be inside dplsh from now on.
+
+# 1. Set an environment,
+# export DPLPLAT_ENV=<platform environment name>
+# eg.
+$ export DPLPLAT_ENV=dplplat01
+
+# 2. Setup access to ssh-keys so that the lagoon cli can authenticate.
+$ eval $(ssh-agent); ssh-add
+
+# 3. Authenticate against lagoon
+$ task lagoon:cli:config
+
+# 4. Delete the project from Lagoon
+# lagoon delete project --project <site machine-name>
+$ lagoon delete project --project core-test1
+
+# 5. Authenticate against kubernetes
+$ task cluster:auth
+
+# 6. List the namespaces
+# Identify all the project namespace with the syntax <sitename>-<branchname>
+# eg "core-test1-main" for the main branch for the "core-test1" site.
+$ kubectl get ns
+
+# 7. Delete each site namespace
+# kubectl delete ns <namespace>
+# eg.
+$ kubectl delete ns core-test1-main
+
+# 8. Edit sites.yaml, remove the entry for the site
+$ vi environments/${DPLPLAT_ENV}/sites.yaml
+# Then have Terraform delete the site's repository.
+$ task env_repos:provision
+
While most steps are different for file and database backups, step 1 is close to
+identical for the two guides.
+
Be aware that the guide instructs you to copy the backups to /tmp inside the
+cli pod. Depending on the resources available on the node /tmp may not have
+enough space in which case you may need to modify the cli deployment to add
+a temporary volume, or place the backup inside the existing sites/default/files
+folder.
Access the site's environment ("Main" for production)
+
Click on the "Backups" tab
+
Click on the "Retrieve" button for the backups you wish to download and/or
+ restore. Use to "Source" column to differentiate the types of backups.
+ "nginx" are backups of the sites files, while "mariadb" are backups of the
+ sites database.
+
The buttons change to "Downloading..." when pressed; wait for them to
+ change to "Download", then click them again to download the backup.
To restore the database we must first copy the backup into a running cli-pod
+for a site, and then import the database-dump on top of the running site.
+
+
Copy the uncompressed mariadb sql file you want to restore into the dpl-platform/infrastructure
+ folder from which we will launch dplsh
+
Launch dplsh from the infrastructure folder (instructions)
+ and follow the procedure below:
+
+
# 1. Authenticate against the cluster.
+$ task cluster:auth
+
+# 2. List the namespaces to identify the sites namespace
+# The namespace will be on the form <sitename>-<branchname>
+# eg "bib-rb-main" for the "main" branch for the "bib-rb" site.
+$ kubectl get ns
+
+# 3. Export the name of the namespace as SITE_NS
+# eg.
+$ export SITE_NS=bib-rb-main
+
+# 4. Copy the *mariadb.sql file to the CLI pod in the sites namespace
+# eg.
+kubectl cp \
+  -n $SITE_NS \
+  *mariadb.sql \
+  $(kubectl -n $SITE_NS get pod -l app.kubernetes.io/instance=cli -o jsonpath="{.items[0].metadata.name}"):/tmp/database-backup.sql
+
+# 5. Have drush inside the CLI-pod import the database and clear out the backup
+kubectl exec \
+  -n $SITE_NS \
+  deployment/cli \
+  -- \
+  bash -c " \
+ echo Verifying file \
+ && test -s /tmp/database-backup.sql \
+ || (echo database-backup.sql is missing or empty && exit 1) \
+ && echo Dropping database \
+ && drush sql-drop -y \
+ && echo Importing backup \
+ && drush sqlc < /tmp/database-backup.sql \
+ && echo Clearing cache \
+ && drush cr \
+ && rm /tmp/database-backup.sql
+ "
+
To restore backed up files into a site we must first copy the backup into a
+running cli-pod for a site, and then rsync the files on top of the running
+site.
+
+
Copy tar.gz file into the dpl-platform/infrastructure folder from which we
+ will launch dplsh
+
Launch dplsh from the infrastructure folder (instructions)
+ and follow the procedure below:
+
+
# 1. Authenticate against the cluster.
+$ task cluster:auth
+
+# 2. List the namespaces to identify the sites namespace
+# The namespace will be on the form <sitename>-<branchname>
+# eg "bib-rb-main" for the "main" branch for the "bib-rb" site.
+$ kubectl get ns
+
+# 3. Export the name of the namespace as SITE_NS
+# eg.
+$ export SITE_NS=bib-rb-main
+
+# 4. Copy the files tar-ball into the CLI pod in the sites namespace
+# eg.
+kubectl cp \
+  -n $SITE_NS \
+  backup*-nginx-*.tar.gz \
+  $(kubectl -n $SITE_NS get pod -l app.kubernetes.io/instance=cli -o jsonpath="{.items[0].metadata.name}"):/tmp/files-backup.tar.gz
+
+# 5. Replace the current files with the backup.
+# The following
+# - Verifies the backup exists
+# - Removes the existing sites/default/files
+# - Un-tars the backup into its new location
+# - Fixes permissions and clears the cache
+# - Removes the backup archive
+#
+# These steps can also be performed one by one if you want to.
+kubectl exec \
+  -n $SITE_NS \
+  deployment/cli \
+  -- \
+  bash -c " \
+ echo Verifying file \
+ && test -s /tmp/files-backup.tar.gz \
+ || (echo files-backup.tar.gz is missing or empty && exit 1) \
+ && tar ztf /tmp/files-backup.tar.gz data/nginx &> /dev/null \
+ || (echo could not verify the tar.gz file files-backup.tar && exit 1) \
+ && test -d /app/web/sites/default/files \
+ || (echo Could not find destination /app/web/sites/default/files \
+ && exit 1) \
+ && echo Removing existing sites/default/files \
+ && rm -fr /app/web/sites/default/files \
+ && echo Unpacking backup \
+ && mkdir -p /app/web/sites/default/files \
+ && tar --strip 2 --gzip --extract --file /tmp/files-backup.tar.gz \
+ --directory /app/web/sites/default/files data/nginx \
+ && echo Fixing permissions \
+ && chmod -R 777 /app/web/sites/default/files \
+ && echo Clearing cache \
+ && drush cr \
+ && echo Deleting backup archive \
+ && rm /tmp/files-backup.tar.gz
+ "
+
+# NOTE: In some situations some files in /app/web/sites/default/files might
+# be locked by running processes. In that situations delete all the files you
+# can from /app/web/sites/default/files manually, and then repeat the step
+# above skipping the removal of /app/web/sites/default/files
+
Access the environment's Grafana installation - consult the
+ platform-environments.md for the url.
+
Select "Explorer" in the left-most menu and select the "Loki" data source in
+ the top.
+
Query the logs for the environment by either:
+
Use the Log Browser to pick the namespace for the site. It will follow the
+ pattern <sitename>-<branchname>.
+
Do a custom LogQL query, eg. to
+ fetch all logs from the nginx container for the "main" branch of the
+ "rdb" site, use a query on the form
+ {app="nginx-php-persistent",container="nginx",namespace="rdb-main"}
+
Eg, for the main branch for the site "rdb": {namespace="rdb-main"}
+
Click "Inspector" -> "Data" -> "Download Logs" to download the log lines.
There are multiple approaches to scaling AKS. We run with the auto-scaler enabled
+which means that in most cases the thing you want to do is to adjust the max or
+minimum configuration for the autoscaler.
Edit the infrastructure configuration for your environment. Eg for dplplat01
+edit dpl-platform/dpl-platform/infrastructure/environments/dplplat01/infrastructure/main.tf.
+
Adjust the *_count_min / *_count_max corresponding to the node-pool you
+want to grow/shrink.
+
Then run task infra:provision to have Terraform effect the change.
When you wish to set an environment variable on a site. The variable can be
+available for either all sites in the project, or for a specific site in the
+project.
+
The variables are safe for holding secrets, and as such can be used both for
+"normal" configuration values, and secrets such as api-keys.
+
The variable will be available to all containers in the environment and can be
+picked up and parsed, eg. in Drupal's settings.php.
# From within a dplsh session authorized to use your ssh keys:
+
+# 1. Authenticate against the cluster and lagoon
+$ task cluster:auth
+$ task lagoon:cli:config
+
+# 2. Refresh your Lagoon token.
+$ lagoon login
+
+# 3. Then get the project id by listing your projects
+$ lagoon list projects
+
+# 4. Optionally, if you wish to set a variable for a single environment - eg a
+# pull-request site, list the environments in the project
+$ lagoon list environments -p <project name>
+
+# 5. Finally, set the variable
+# - Set the type_id to the id of the project or environment depending on
+#   whether you want all or a single environment to access the variable
+# - Set VARIABLE_TYPE to PROJECT or ENVIRONMENT depending on the choice above
+# - Set the variable name and value
+$ VARIABLE_TYPE_ID=<project or environment id> \
+  VARIABLE_TYPE=<PROJECT or ENVIRONMENT> \
+  VARIABLE_SCOPE=RUNTIME \
+  VARIABLE_NAME=<your variable name> \
+  VARIABLE_VALUE=<your variable value> \
+  task lagoon:set:environment-variable
+
+# If you get a "Invalid Auth Token" your token has probably expired, generated a
+# new with "lagoon login" and try again.
+
+
The variable will be available the next time the site is deployed by Lagoon.
+Use the deployment runbook to trigger a deployment, use
+a forced deployment if you do not have a new release to deploy.
We use Terraform to upgrade AKS. Should you need to do a manual upgrade, consult
+Azure's documentation on upgrading a cluster
+and on upgrading node pools.
+Be aware in both cases that the Terraform state needs to be brought into sync
+via some means, so this is not a recommended approach.
+
Find out which versions of kubernetes an environment can upgrade to
+
In order to find out which versions of kubernetes we can upgrade to, we need to
+use the following command:
+
task cluster:get-upgrades
+
+
This will output a table in which the column "Upgrades" lists the available
+upgrades for the highest available minor versions.
+
A Kubernetes cluster can at most be upgraded to the nearest minor version,
+which means you may be in a situation where you have several versions between
+you and the intended version.
+
Patch versions can be skipped, and AKS will accept a cluster being upgraded to
+a version that does not specify a patch version. So if you for instance want
+to go from 1.20.9 to 1.22.15, you can do 1.21, and then 1.22.15. When
+upgrading to 1.21 Azure will substitute the version for the highest available
+patch version, e.g. 1.21.14.
+
You should now know which version(s) you need to upgrade to, and can continue to
+the actual upgrade.
Update the control_plane_version reference in infrastructure/environments/<environment>/infrastructure/main.tf
+ and run task infra:provision to apply. You can skip patch-versions, but you
+ can only do one minor-version at a time.
+
+
+
Monitor the upgrade as it progresses. A control-plane upgrade is usually performed
+ in under 5 minutes.
+
+
+
Monitor via eg.
+
watch -n 5 kubectl version
+
+
Then upgrade the system, admin and application node-pools in that order one by
+one.
+
+
+
Update the pool_[name]_version reference in
+ infrastructure/environments/<environment>/infrastructure/main.tf.
+ The same rules apply for the version as with control_plane_version.
+
+
+
Monitor the upgrade as it progresses. Expect the provisioning of and workload
+ scheduling to a single node to take about 5-10 minutes. In particular be aware
+ that the admin node-pool where harbor runs has a tendency to take a long time
+ as the harbor pvcs are slow to migrate to the new node.
A running dplsh launched from ./infrastructure with
+ DPLPLAT_ENV set to the platform environment name.
+
Knowledge about the version of Lagoon you want to upgrade to.
+
You can extract version (= chart version) and appVersion (= lagoon release
+ version) for the lagoon-remote / lagoon-core charts via the following commands
+ (replace lagoon-core for lagoon-remote if necessary).
Knowledge of any breaking changes or necessary actions that may affect the
+ platform when upgrading. See chart release notes for all intermediate chart
+ releases.
Identify the version you want to bump in the environment/configuration directory
+ eg. for dplplat01 infrastructure/environments/dplplat01/configuration/versions.env.
+ The file contains links to the relevant Artifact Hub pages for the individual
+ projects and can often be used to determine both the latest version, but also
+ details about the chart such as how a specific manifest is used.
+ You can find the latest version of the support workload in the
+ Version status
+ sheet which itself is updated via the procedure described in the
+ Update Upgrade status runbook.
+
+
+
Consult any relevant changelog to determine if the upgrade will require any
+ extra work beside the upgrade itself.
+ To determine which version to look up in the changelog, be aware of the difference
+ between the chart version
+ and the app version.
+ We currently track the chart versions, and not the actual version of the
+ application inside the chart. In order to determine the change in appVersion
+ between chart releases you can do a diff between releases, and keep track of the
+ appVersion property in the chart's Chart.yaml. Using grafana as an example:
+ https://github.com/grafana/helm-charts/compare/grafana-6.55.1...grafana-6.56.0.
+ The exact way to do this differs from chart to chart, and is documented in the
+ Specific procedures and tests below.
+
+
+
Carry out any chart-specific preparations described in the charts update-procedure.
+ This could be upgrading a Custom Resource Definition that the chart does not
+ upgrade.
+
+
+
Identify the relevant task in the main Taskfile
+ for upgrading the workload. For example, for cert-manager, the task is called
+ support:provision:cert-manager. Run the task with DIFF=1, eg
+ DIFF=1 task support:provision:cert-manager.
+
+
+
If the diff looks good, run the task without DIFF=1, eg task support:provision:cert-manager.
+
+
+
Then proceeded to perform the verification test for the relevant workload. See
+ the following section for known verification tests.
The project versions its Helm chart together with the app itself. So,
+simply use the chart version in the following checks.
+
Cert Manager keeps Release notes
+for the individual minor releases of the project. Consult these for every
+upgrade past a minor version.
+
As both are versioned in the same repository, simply use the following link
+for looking up the release notes for a specific patch release, replacing the
+example tag with the version you wish to upgrade to.
The note will most likely be empty. Now diff the chart version with the current
+version, again replacing the version with the relevant one for your releases.
As the repository contains a lot of charts, you will need to do a bit of
+digging. Look for at least charts/grafana/Chart.yaml which can tell you the
+app version.
+
With the app-version in hand, you can now look at the release notes for the
+grafana app
+itself.
Verify that the Grafana pods are all running and healthy.
+
kubectl get pods --namespace grafana
+
+
Access the Grafana UI and see if you can log in. If you do not have a user
+but have access to read secrets in the grafana namespace, you can retrieve the
+admin password with the following command:
+
# Password for admin
+UI_NAME=grafana task ui-password
+
+# Hostname for grafana
+kubectl -n grafana get -o jsonpath="{.spec.rules[0].host}" ingress grafana; echo
+
Having identified the relevant appVersions, consult the list of
+Harbor releases to see a description
+of the changes included in the release in question. If this approach fails you
+can also use the diff-command described below to determine which image-tags are
+going to change and thus determine the version delta.
+
Harbor is a quite active project, so it may make sense mostly to pay attention
+to minor/major releases and ignore the changes made in patch-releases.
Harbor documents the general upgrade procedure for non-kubernetes upgrades for minor
+versions on their website.
+This documentation is of little use to our Kubernetes setup, but it can be useful
+to consult the page for minor/major version upgrades to see if there are any
+special considerations to be made.
+
The Harbor chart repository has upgrade instructions as well.
+The instructions ask you to do a snapshot of the database and back up the tls
+secret. Snapshotting the database is currently out of scope, but could be a thing
+that is considered in the future. The tls secret is handled by cert-manager, and
+as such does not need to be backed up.
+
With knowledge of the app version, you can now update versions.env as described
+in the General Procedure section, diff to see the changes
+that are going to be applied, and finally do the actual upgrade.
When Harbor seems to be working, you can verify that the UI is working by
+accessing https://harbor.lagoon.dplplat01.dpl.reload.dk/. The password for
+the user admin can be retrieved with the following command:
+
UI_NAME=harbor task ui-password
+
+
If everything looks good, you can consider deploying a site. One way to do this
+is to identify an existing site of low importance, and re-deploy it. A re-deploy
+will require Lagoon to both fetch and push images. Instructions for how to access
+the lagoon UI are out of scope of this document, but can be found in the runbook
+for running a lagoon task. In this case you are looking
+for the "Deploy" button on the sites "Deployments" tab.
When working with the ingress-nginx chart we have at least 3 versions to keep
+track of.
+
The chart version tracks the version of the chart itself. The chart's appVersion
+tracks a controller application which dynamically configures a bundled nginx.
+The version of nginx used is determined by configuration files in the controller,
+amongst others the
+ingress-nginx.yaml.
With knowledge of the app version, you can now update versions.env as described
+in the General Procedure section, diff to see the changes
+that are going to be applied, and finally do the actual upgrade.
The ingress-controller is very central to the operation of all publicly accessible
+parts of the platform. Its area of responsibility is on the other hand quite
+narrow, so it is easy to verify that it is working as expected.
+
First verify that pods are coming up
+
kubectl -n ingress-nginx get pods
+
+
Then verify that the ingress-controller is able to serve traffic. This can be
+done by accessing the UI of one of the apps that are deployed in the platform.
The Loki chart is versioned separately from Loki. The version of Loki installed
+by the chart is tracked by its appVersion. So when upgrading, you should always
+look at the diff between both the chart and app version.
+
The general upgrade procedure will give you the chart version, access the
+following link to get the release note for the chart. Remember to insert your
+version:
Notice that the Loki helm-chart is maintained in the same repository as Loki
+itself. You can find the diff between the chart versions by comparing two
+chart release tags.
As the repository contains changes to Loki itself as well, you should seek out
+the file production/helm/loki/Chart.yaml which contains the appVersion that
+defines which version of Loki a given chart release installs.
List pods in the loki namespace to see if the upgrade has completed
+successfully.
+
kubectl --namespace loki get pods
+
+
Next verify that Loki is still accessible from Grafana and collects logs by
+logging in to Grafana. Then verify the Loki datasource, and search out some
+logs for a site. See the validation steps for Grafana
+for instructions on how to access the Grafana UI.
This means that the following comparison between two releases of the chart
+will also contain changes to a number of other charts. You will have to look
+for changes in the charts/kube-prometheus-stack/ directory.
The Readme for the chart contains a good Upgrading Chart
+section that describes things to be aware of when upgrading between specific
+minor and major versions. The same documentation can also be found on
+artifact hub.
+
Consult the section that matches the version you are upgrading from and to. Be
+aware that upgrades past a minor version often requires a CRD update. The
+CRDs may have to be applied before you can do the diff and upgrade. Once the
+CRDs have been applied you are committed to the upgrade as there is no simple
+way to downgrade the CRDs.
List pods in the prometheus namespace to see if the upgrade has completed
+successfully. You should expect to see two types of workloads. First a single
+promstack-kube-prometheus-operator pod that runs Prometheus, and then
+a promstack-prometheus-node-exporter pod for each node in the cluster.
As the Prometheus UI is not directly exposed, the easiest way to verify that
+Prometheus is running is to access the Grafana UI and verify that the dashboards
+that uses Prometheus are working, or as a minimum that the prometheus datasource
+passes validation. See the validation steps for Grafana
+for instructions on how to access the Grafana UI.
The Promtail chart is versioned separately from Promtail which itself is a part of
+Loki. The version of Promtail installed by the chart is tracked by its appVersion.
+So when upgrading, you should always look at the diff between both the chart and
+app version.
+
The general upgrade procedure will give you the chart version. Access the
+following link to get the release notes for the chart. Remember to insert your
+version:
The note will most likely be empty. Now diff the chart version with the current
+version, again replacing the version with the one relevant for your releases.
As the repository contains a lot of charts, you will need to do a bit of
+digging. Look for at least charts/promtail/Chart.yaml which can tell you the
+app version.
+
With the app-version in hand, you can now look at the
+release notes for Loki
+(which promtail is part of). Look for notes in the Promtail sections of the
+release notes.
List pods in the promtail namespace to see if the upgrade has completed
+successfully.
+
kubectl --namespace promtail get pods
+
+
With the pods running, you can verify that logs are being collected by seeking
+out logs via Grafana. See the validation steps for Grafana
+for details on how to access the Grafana UI.
+
You can also inspect the logs of the individual pods via
An authorized Azure az cli. The version should match the version found in
+ FROM mcr.microsoft.com/azure-cli:version in the dplsh Dockerfile.
+ You can choose to authorize the az cli from within dplsh, but your session
+ will only last as long as the shell session. The user you authorize as must
+ have permission to read the Terraform state from the Terraform setup,
+ and Contributor permissions on the environment's
+ resource-group in order to provision infrastructure.
+
dplsh.sh symlinked into your path as dplsh, see Launching the Shell
+ (optional, but assumed below)
# Launch dplsh.
+$ cd infrastructure
+$ dplsh
+
+# 1. Set an environment,
+# export DPLPLAT_ENV=<platform environment name>
+# eg.
+$ export DPLPLAT_ENV=dplplat01
+
+# 2a. Authenticate against AKS, needed by infrastructure and Lagoon tasks
+$ task cluster:auth
+
+# 2b - if you want to use the Lagoon CLI
+# The Lagoon CLI is authenticated via ssh-keys. DPLSH will mount your .ssh
+# folder from your homedir, but if your keys are passphrase protected, we need
+# to unlock them.
+$ eval $(ssh-agent); ssh-add
+# Then authorize the lagoon cli
+$ task lagoon:cli:config
+
We are not able to persist and execute a user's intentions across page loads.
+This is expressed through a number of issues. The main problem is maintaining
+intent whenever a user tries to do anything that requires them to be
+authenticated. In these situations they get redirected off the page and after a
+successful login they get redirected back to the origin page but without the
+intended action fulfilled.
+
One example is the AddToChecklist functionality. Whenever a user wants to add
+a material to their checklist they click the "Tilføj til huskelist" button next
+to the material presentation.
+They then get redirected to Adgangsplatformen.
+After a successful login they get redirected back to the material page but the
+material has not been added to their checklist.
After an intent has been stated we want the intention to be executed even though
+a page reload comes in the way.
+
We move to implementing what we define as an explicit intention before the
+actual action is attempted.
+
+
User clicks the button.
+
Intent state is generated and committed.
+
Implementation checks if the intended action meets all the requirements. In
+ this case, being logged in and having the necessary payload.
+
If the intention meets all requirements we then fire the addToChecklist
+ action.
+
Material is added to the user's checklist.
+
+
The difference between the two might seem insignificant but the important
+distinction to make is that with our current implementation we are not able to
+serialize and persist the actions as the application state across page loads. By
+defining intent explicitly we are able to serialize it and persist it between
+page loads.
+
This results in the implementation being able to rehydrate the persisted state,
+look at the persisted intentions and have the individual application
+implementations decide what to do with the intention.
+
A mock implementation of the case by case business logic looks as follows.
+
import React, { useEffect } from "react";
+
+const initialStore = {
+ authenticated: false,
+ intent: {
+ status: '',
+ payload: {}
+ }
+}
+
+function AddToChecklist ({ materialId, store }) {
+ // Derive what to do from the current authentication status and intent.
+ const fulfillAction = store.authenticated &&
+   (store.intent.status === 'pending' || store.intent.status === 'tried')
+ const getRequirements = !store.authenticated && store.intent.status === 'pending'
+ const abandonIntention = !store.authenticated && store.intent.status === 'tried'
+
+ useEffect(() => {
+ if (fulfillAction) {
+ // We fire the actual functionality required to add a material to the
+ // checklist and we remove the intention as a result of it being
+ // fulfilled.
+ addToChecklistAction(store, materialId)
+ } else if (getRequirements) {
+ // Before we redirect we set the status to be "tried".
+ redirectToLogin(store)
+ } else if (abandonIntention) {
+ // We abandon the intent so that we won't have an infinite loop of retries
+ // at every page load.
+ abandonAddToChecklistIntention(store)
+ }
+ }, [materialId, store.intent.status])
+ return (
+ <button
+ onClick={() => {
+ // We do not fire the actual logic that is required to add a material to
+ // the checklist. Instead we add the intention of said action to the
+ // store. This is when we would set the status of the intent to pending
+ // and provide the payload.
+ addToChecklistIntention(store, materialId)
+ }}
+ >
+ Tilføj til huskeliste
+ </button>
+ )
+}
+
+
We utilize session storage to persist the state on the client due to its
+short-lived, tab-scoped nature.
+
We choose Redux as the framework to implement this. Redux is a blessed choice
+in this instance. It has widespread use, an approachable design and is
+well-documented. The recommended way to implement Redux as of
+now is @reduxjs/toolkit. Redux is a
+sufficiently advanced framework to support other uses of application state and
+even co-locating shared state between applications.
+
For our persistence concerns we want to use the most commonly used tool for
+that, redux-persist. There are some
+implementation details
+to take into consideration when integrating the two.
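+
+To make the integration concrete, here is a minimal sketch of wiring
+redux-persist with session storage into a store built with @reduxjs/toolkit.
+The intent slice and action names mirror the mock above and are illustrative
+assumptions, not the actual implementation:
+
+import { configureStore, createSlice } from "@reduxjs/toolkit";
+import {
+  persistReducer,
+  persistStore,
+  FLUSH,
+  REHYDRATE,
+  PAUSE,
+  PERSIST,
+  PURGE,
+  REGISTER
+} from "redux-persist";
+import storageSession from "redux-persist/lib/storage/session";
+
+const intentSlice = createSlice({
+  name: "intent",
+  initialState: { status: "", payload: {} },
+  reducers: {
+    addToChecklistIntention: (state, action) => {
+      state.status = "pending";
+      state.payload = action.payload;
+    },
+    abandonIntention: (state) => {
+      state.status = "";
+      state.payload = {};
+    }
+  }
+});
+
+// Persist the intent slice in session storage so it survives the redirect to
+// Adgangsplatformen and back.
+const persistedIntent = persistReducer(
+  { key: "intent", storage: storageSession },
+  intentSlice.reducer
+);
+
+export const store = configureStore({
+  reducer: { intent: persistedIntent },
+  // redux-persist dispatches non-serializable actions; ignore them in the
+  // serializability check as recommended in the redux-persist documentation.
+  middleware: (getDefaultMiddleware) =>
+    getDefaultMiddleware({
+      serializableCheck: {
+        ignoredActions: [FLUSH, REHYDRATE, PAUSE, PERSIST, PURGE, REGISTER]
+      }
+    })
+});
+
+export const persistor = persistStore(store);
+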
We could persist the intentions in the URL that is delivered back to the client
+after a page reload. This would still imply some of the architectural decisions
+described in Decision in regards to having an "intent" state, but some of the
+different status flags etc. would not be needed since state is effectively shared
+across page loads in the URL. However this simpler solution cannot handle
+situations more complex than what can feasibly be described in the URL.
React offers useContext()
+for state management as an alternative to Redux.
+
We prefer Redux as it provides a more complete environment when working with
+state management. There is already a community of established practices and
+libraries which integrate with Redux. One example of this is our need to persist
+actions. When using Redux we can handle this with redux-persist. With
+useContext() we would have to roll our own implementation.
+
Some of the disadvantages of using Redux e.g. the amount of required boilerplate
+code are addressed by using @reduxjs/toolkit.
It has been decided that app context/settings should be passed from the
+server side rendered mount points via data props.
+One type of settings is text strings that are defined by
+the system/host rendering the mount points.
+Since we are going to have quite some levels of nested components
+it would be nice to have a way to extract the strings
+without having to define them all the way down the tree.
A solution has been made that extracts the props holding the strings
+and puts them in the Redux store under the index: text at the app entry level.
+That is done with the help of the withText() High Order Component.
+The solution of having the strings in redux
+enables us to fetch the strings at any point in the tree.
+A hook called: useText() makes it simple to request a certain string
+inside a given component.
One major alternative would be not doing it and passing down the props.
+But that leaves us with text props all the way down the tree
+which we would like to avoid.
+Some translation libraries have been investigated
+but they would in most cases give us a lot of tools and complexity
+that are not needed in order to solve the relatively simple task.
Since we omit the text props down the tree
+it leaves us with fewer props and a cleaner component setup.
+Although some "magic" has been introduced
+with text prop matching and storage in redux,
+it is outweighed by the simplicity of the HOC wrapper and useText hook.
As a part of the project, we need to implement a dropdown autosuggest component.
+This component needs to be compliant with modern website accessibility rules:
+
+
The component dropdown items are accessible by keyboard by using arrow keys -
+down and up.
+
The items visibly change when they are in current focus.
+
Items are selectable using the keyboard.
+
+
Apart from these accessibility features, the component needs to follow a somewhat
+complex design described in
+this Figma file.
+As visible in the design this autosuggest dropdown doesn't consist only of single
+line text items, but also contains suggestions for specific works - utilizing
+more complex suggestion items with cover pictures, and release years.
Our research on the most popular and supported javascript libraries heavily leans
+on this specific article.
+In combination with our needs described above in the context section, but also
+considering what it would mean to build this component from scratch without any
+libraries, the decision taken favored a library called
+Downshift.
+
This library is the second most popular JS library used to handle autosuggest
+dropdowns, multiselects, and select dropdowns with a large following and
+continuous support. Out of the box, it follows the
+ARIA principles, and
+handles problems that we would normally have to solve ourselves (e.g. opening and
+closing of the dropdown/which item is currently in focus/etc.).
+
Another reason why we choose Downshift over its peer libraries is the amount of
+flexibility that it provides. In our eyes, this is a strong quality of the library
+that allows us to also implement more complex suggestion dropdown items.
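+
+A minimal sketch of how such a component could be wired up with Downshift's
+useCombobox hook. The Suggestion shape, class names and the local filtering are
+illustrative assumptions; the real component fetches suggestions from the
+search service and follows the Figma design:
+
+import * as React from "react";
+import { useCombobox } from "downshift";
+
+// Hypothetical suggestion shape covering both plain text suggestions and
+// work suggestions with a cover and release year.
+type Suggestion = { title: string; year?: string; coverUrl?: string };
+
+const Autosuggest: React.FC<{ suggestions: Suggestion[] }> = ({
+  suggestions
+}) => {
+  const [items, setItems] = React.useState(suggestions);
+  const {
+    isOpen,
+    highlightedIndex,
+    getInputProps,
+    getMenuProps,
+    getItemProps
+  } = useCombobox({
+    items,
+    itemToString: (item) => (item ? item.title : ""),
+    onInputValueChange: ({ inputValue }) => {
+      // Filtering locally for brevity. The real component would ask the
+      // search service for suggestions instead.
+      const query = (inputValue ?? "").toLowerCase();
+      setItems(suggestions.filter((s) => s.title.toLowerCase().includes(query)));
+    }
+  });
+
+  return (
+    <div>
+      <input {...getInputProps()} />
+      {/* Downshift keeps track of which item is highlighted via the arrow
+          keys and handles selection, keeping the markup ARIA compliant. */}
+      <ul {...getMenuProps()}>
+        {isOpen &&
+          items.map((item, index) => (
+            <li
+              key={item.title}
+              className={highlightedIndex === index ? "focused" : ""}
+              {...getItemProps({ item, index })}
+            >
+              {item.coverUrl && <img src={item.coverUrl} alt="" />}
+              {item.title} {item.year}
+            </li>
+          ))}
+      </ul>
+    </div>
+  );
+};
+
+export default Autosuggest;
+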
Staying informed about the size of the JavaScript we require browsers to
+download to use the project plays an important part in ensuring a performant
+solution.
+
We currently have no awareness of this in this project and the result surfaces
+down the line when the project is integrated with the CMS, which is
+tested with Lighthouse.
+
To address this we want a solution that will help us monitor the changes to the
+size of the bundle we ship for each PR.
Bundlewatch and its ancestor, bundlesize
+combine a CLI tool and a web app to provide bundle analysis and feedback on
+GitHub as status checks.
+
These solutions no longer seem to be actively maintained. There are several
+bugs that would affect
+us and fixes remain unmerged. The project relies on a custom secret instead of
+GITHUB_TOKEN. This makes supporting our fork-based development workflow
+harder.
This is a GitHub Action which can be used in configurations where statistics
+for two bundles are compared e.g. for the base and head of a pull request. This
+results in a table of changes displayed as a comment in the pull request.
+This is managed using GITHUB_TOKEN.
We could have written our own implementation.
+But since this is a common problem we might as well use a community-backed solution.
+And react-use gives us a wealth of other tools.
We can now use useDeepCompareEffect instead of useEffect
+in cases where we have arrays or objects among the dependencies.
+And we can make use of all the other utility hooks that the package provides.
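+
+A minimal sketch of the usage, assuming a hypothetical useSearch hook and
+filter shape:
+
+import { useState } from "react";
+import { useDeepCompareEffect } from "react-use";
+
+// Hypothetical filter object. With a plain useEffect the effect would re-run
+// on every render because the object identity changes even when its values
+// do not. useDeepCompareEffect compares the values instead.
+function useSearch(filters: { branch: string; onShelf: boolean }) {
+  const [results, setResults] = useState<string[]>([]);
+
+  useDeepCompareEffect(() => {
+    // Perform the search with the given filters and store the results here.
+    setResults([]);
+  }, [filters]);
+
+  return results;
+}
+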
The code base is growing and so is the number of functions and custom hooks.
+
While we have good coverage in our UI tests from Cypress we are lacking
+something to test the inner workings of the applications.
+
With unit tests added we can test bits of functionality that are shared
+between different areas of the application and make sure that we get the
+expected output given different variations of input.
We decided to go for Vitest which is an easy to use and
+very fast unit testing tool.
+
It has more or less the same capabilities as Jest,
+another popular and similar testing framework.
+
Vitest is framework agnostic so in order to make it possible to test hooks
+we found @testing-library/react-hooks
+which works in conjunction with Vitest.
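+
+A minimal sketch of a unit test written with Vitest and renderHook; the
+useCounter hook is a hypothetical example:
+
+import { describe, expect, it } from "vitest";
+import { act, renderHook } from "@testing-library/react-hooks";
+// Hypothetical hook under test.
+import { useCounter } from "./use-counter";
+
+describe("useCounter", () => {
+  it("increments the counter", () => {
+    const { result } = renderHook(() => useCounter());
+
+    act(() => {
+      result.current.increment();
+    });
+
+    expect(result.current.count).toBe(1);
+  });
+});
+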
We could have used Jest. But trying that we experienced major problems
+with having both Jest and Cypress in the same codebase.
+They have colliding test function names and TypeScript could not tell them apart.
+
There is probably a solution, but at the same time Vitest was recommended to us.
+It seemed very fast and just as capable as Jest. And we did not have the
+colliding issues of shared function/object names.
Dpl-cms is a CMS based on Drupal, where the system administrators can set
+up campaigns they want to show to their users. Drupal also allows the CMS
+to act as an API endpoint that we can then contact from our apps.
+
The CMS administrators can specify the content (image, text) and the visibility
+criteria for each campaign they create. The visibility criteria are based on
+search filter facets.
+Does that sound familiar? Yes, we use another API to get that very data
+in THIS project - in the search result app.
+The facets differ based on the search string the user uses for their search.
+
As an example, the dpl-cms admin could wish to show a Harry Potter related
+campaign to all the users whose search string retrieves search facets which
+have "Harry Potter" as one of the most relevant subjects.
+Campaigns in dpl-cms can use triggers such as subject, main language, etc.
An example code snippet for retrieving a campaign from our react apps would then
+look something like this:
+
// Creating a state to store the campaign data in.
+const [campaignData, setCampaignData] = useState<CampaignMatchPOST200 | null>(
+  null
+);
+
+// Retrieving facets for the campaign based on the search query and existing
+// filters.
+const { facets: campaignFacets } = useGetFacets(q, filters);
+
+// Using the campaign hook generated by Orval from src/core/dpl-cms/dpl-cms.ts
+// in order to get the mutate function that lets us retrieve campaigns.
+const { mutate } = useCampaignMatchPOST();
+
+// Only fire the campaign data call if campaign facets or the mutate function
+// change their value.
+useDeepCompareEffect(() => {
+  if (campaignFacets) {
+    mutate(
+      {
+        data: campaignFacets as CampaignMatchPOSTBodyItem[]
+      },
+      {
+        onSuccess: (campaign) => {
+          setCampaignData(campaign);
+        },
+        onError: () => {
+          // Handle error.
+        }
+      }
+    );
+  }
+}, [campaignFacets, mutate]);
+
+
Showing campaigns in dpl-react when in development mode
+
You first need to make sure to have a campaign set up in your locally running
+dpl-cms (
+run this repo locally)
+Then, in order to see campaigns locally in dpl-react in development mode, you
+will most likely need a browser plugin such as Google Chrome's
+"Allow CORS: Access-Control-Allow-Origin"
+in order to bypass CORS policy for API data calls.
The following guidelines describe best practices for developing code for React
+components for the Danish Public Libraries CMS project. The guidelines should
+help achieve:
+
+
A stable, secure and high quality foundation for building and maintaining
+ client-side JavaScript components for library websites
+
Consistency across multiple developers participating in the project
+
The best possible conditions for sharing components between library websites
+
The best possible conditions for the individual library website to customize
+ configuration and appearance
+
+
Contributions to the DDB React project will be reviewed by members of the Core
+team. These guidelines should inform contributors about what to expect in such a
+review. If a review comment cannot be traced back to one of these guidelines it
+indicates that the guidelines should be updated to ensure transparency.
Airbnb's standard is one of the best known and most used in the JavaScript
+ coding standard landscape.
+
Airbnb’s standard is both comprehensive and well documented.
+
Airbnb’s standards cover both JavaScript in general and React/JSX specifically.
+ This avoids potential conflicts between multiple standards.
+
+
The following lists significant areas where the project either intentionally
+expands or deviates from the official standards or areas which developers should
+be especially aware of.
This project sticks to the above guideline as well. If we need to pass a
+function as part of a callback or in a promise chain and we on top of that need
+to pass some contextual variables that are not passed implicitly from either the
+callback or the previous link in the promise chain we want to make use of an
+anonymous arrow function as our default.
+
This comes with the built-in disclaimer that if an anonymous function isn't
+required the implementer should heavily consider moving the logic out into its
+own named function expression.
+
The named function is primarily desired because it is easier to debug in
+stacktraces.
Configuration must be passed as props for components. This allows the host
+ system to modify how a component works when it is inserted.
+
All components should be provided with skeleton screens.
+ This ensures that the user interface reflects the final state even when data
+ is loaded asynchronously. This reduces load time frustration.
+
Components should be optimistic.
+ Unless we have reason to believe that an operation may fail we should provide
+ fast response to users.
+
All interface text must be implemented as props for components. This allows
+ the host system to provide a suitable translation/version when using the
+ component.
Components must use and/or provide a default style sheet which at least
+ provides a minimum of styling showing the purpose of the component.
+
Elements must be provided with meaningful classes even though they are not
+ targeted by the default style sheet. This helps host systems provide
+ additional styling of the components. Consider how the component consists of
+ blocks and elements with modifiers and how these can be nested within each
+ other.
+
Components must use SCSS for styling. The project uses PostCSS
+ and PostCSS-SCSS within Webpack for
+ processing.
Components must provide configuration to set a top headline level for the
+ component. This helps provide a proper document outline to ensure the
+ accessibility of the system.
The project uses Yarn as a package
+manager to handle code which is developed outside the project repository. Such
+code must not be committed to the Core project repository.
+
When specifying third party package versions the project follows these
+guidelines:
Components must reuse existing dependencies in the project before adding new
+ones which provide similar functionality. This ensures consistency and avoids
+unnecessary increases in the package size of the project.
+
The reasoning behind the choice of key dependencies has been documented in
+the architecture directory.
The project uses patches rather than forks to modify third party packages. This
+makes maintenance of modified packages easier and avoids a collection of forked
+repositories within the project.
+
+
Use an appropriate method for the corresponding package manager for managing
+ the patch.
+
Patches should be external by default. In rare cases it may be needed to
+ commit them as a part of the project.
+
When providing a patch you must document the origin of the patch e.g. through
+ an url in a commit comment or preferably in the package manager configuration
+ for the project.
Code comments which describe what an implementation does should only be used
+for complex implementations usually consisting of multiple loops, conditional
+statements etc.
+
Inline code comments should focus on why an unusual implementation has been
+implemented the way it is. This may include references to such things as
+business requirements, odd system behavior or browser inconsistencies.
Commit messages in the version control system help all developers understand the
+current state of the code base, how it has evolved and the context of each
+change. This is especially important for a project which is expected to have a
+long lifetime.
+
Commit messages must follow these guidelines:
+
+
Each line must not be more than 72 characters long
+
The first line of your commit message (the subject) must contain a short
+ summary of the change. The subject should be kept around 50 characters long.
+
The subject must be followed by a blank line
+
Subsequent lines (the body) should explain what you have changed and why the
+ change is necessary. This provides context for other developers who have not
+ been part of the development process. The larger the change the more
+ description in the body is expected.
+
If the commit is a result of an issue in a public issue tracker,
+ platform.dandigbib.dk, then the subject must start with the issue number
+ followed by a colon (:). If the commit is a result of a private issue tracker
+ then the issue id must be kept in the commit body.
+
+
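+As an illustration, a hypothetical commit message following these guidelines
+could look like this:
+
+1234: Add skeleton screens to the search result app
+
+The search result previously rendered as an empty area while data was
+loading. Skeleton screens reflect the final layout immediately which
+reduces perceived load time.
+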
When creating a pull request the pull request description should not contain any
+information that is not already available in the commit messages.
The project aims to automate compliance checks as much as possible using static
+code analysis tools. This should make it easier for developers to check
+contributions before submitting them for review and thus make the review process
+easier.
In general all tools must be able to run locally. This allows developers to get
+quick feedback on their work.
+
Tools which provide automated fixes are preferred. This reduces the burden of
+keeping code compliant for developers.
+
Code which is to be exempt from these standards must be marked accordingly in
+the codebase - usually through inline comments (Eslint,
+Stylelint). This must also
+include a human readable reasoning. This ensures that deviations do not affect
+future analysis and the project should always pass through static analysis.
+
If there are discrepancies between the automated checks and the standards
+defined here then developers are encouraged to point this out so the automated
+checks or these standards can be updated accordingly.
After quite a lot of bad experiences with unstable tests
+and reading both the official documentation
+and articles about the best practices we have ended up with a recommendation of
+how to write the tests.
+
According to this article
+it is important to distinguish between commands and assertions.
+Commands are used in the beginning of a statement and yield a chainable element
+that can be followed by one or more assertions in the end.
+
So first we target an element.
+Next we can make one or more assertions on the element.
+
We have created some helper commands for targeting an element:
+getBySel, getBySelLike and getBySelStartEnd.
+They look for elements as advised by the
+Selecting Elements
+section from the Cypress documentation about best practices.
+
Example of a statement:
+
// Targeting.
+cy.getBySel("reservation-success-title-text")
+// Assertion.
+.should("be.visible")
+// Another assertion.
+.and("contain","Material is available and reserved for you!");
+
Error handling is something that is done on multiple levels:
+E.g. form validation, network/fetch error handling, runtime errors.
+You could also argue that the fact that the codebase is making use of TypeScript
+and code generation from the various http services (graphql/REST) belongs to
+the same idiom of error handling in order to make the applications more robust.
Error boundary was introduced
+in React 16 and makes it possible to implement a "catch all" feature,
+catching uncaught errors and replacing the application with a component
+telling the users that something went wrong.
+It is meant to be a way of having a safety net and always being able to tell
+the end user that something went wrong.
+The apps are being wrapped in the error boundary handling which makes it
+possible to catch thrown errors at runtime.
Async operations and thereby also fetch are not being handled out of the box
+by the React Error Boundary. But fortunately react-query, which is being used
+both by the REST services (Orval) and graphql (Graphql Code Generator), has a
+way of addressing the problem. The QueryClient can be configured to trigger
+the Error Boundary system
+if an error is thrown.
+So that is what we are doing.
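+
+A minimal sketch of that configuration, assuming react-query v3 (newer
+@tanstack/react-query releases rename the option to throwOnError):
+
+import { QueryClient } from "react-query";
+
+const queryClient = new QueryClient({
+  defaultOptions: {
+    queries: {
+      // Let errors thrown by queries propagate to the React Error Boundary.
+      useErrorBoundary: true
+    },
+    mutations: {
+      useErrorBoundary: true
+    }
+  }
+});
+
+export default queryClient;
+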
The reason why *.name is set in the errors is to make it clear which error
+was thrown. If we don't do that the name of the parent class is used in the
+error log, and then it is more difficult to trace where the error originated
+from.
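+
+A minimal sketch of the pattern; the class name is hypothetical:
+
+class FetchFailedCriticalError extends Error {
+  constructor(message: string) {
+    super(message);
+    // Without this the error would be logged with the name of the parent
+    // class, making it harder to trace where it originated.
+    this.name = "FetchFailedCriticalError";
+  }
+}
+
+export default FetchFailedCriticalError;
+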
The initial implementation is handling http errors on a coarse level:
+If response.ok is false then an error is thrown (see the sketch after the
+list below). If the error is critical the error boundary is triggered.
+In future versions you could take more things into consideration
+regarding the error:
+
+
Should all status codes trigger an error?
+
Should we have different types of error levels depending on the request
+and/or http method?
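+
+A minimal sketch of the coarse handling described above, reusing the
+hypothetical error class from the previous sketch:
+
+async function fetcher(input: RequestInfo, init?: RequestInit) {
+  const response = await fetch(input, init);
+
+  if (!response.ok) {
+    // Throwing here lets react-query propagate the error to the Error
+    // Boundary as configured earlier.
+    throw new FetchFailedCriticalError(
+      `Request failed with status ${response.status}: ${response.statusText}`
+    );
+  }
+
+  return response.json();
+}
+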
The FBS client adapter is autogenerated based on the Swagger 1.2 JSON files from
+FBS. But Orval requires that we use Swagger version 2.0 and the adapter has some
+changes in paths and parameters. So some conversion is needed at the time of this
+writing.
A repository, dpl-fbs-adapter-tool,
+a tool built around PHP and NodeJS, can translate the FBS specification into one
+usable for the Orval client generator. It also filters out all the FBS calls not
+needed by the DPL project.
+
The tool uses go-task to simplify the execution of the
+command.
Add 127.0.0.1 dpl-react.docker to your /etc/hosts file
+
Ensure that your node version matches what is specified in package.json.
+
Install dependencies: yarn install
+
Start storybook sudo yarn start:storybook:dev
+
If you need to log in through Adgangsplatformen, you need to change
+ your url to http://dpl-react.docker/ instead of http://localhost. This
+ avoids getting login errors.
Access tokens must be retrieved from Adgangsplatformen,
+a single sign-on solution for public libraries in Denmark, and OpenPlatform,
+an API for Danish libraries.
+
Usage of these systems requires a valid client id and secret which must be
+obtained from your library partner or directly from DBC, the company responsible
+for running Adgangsplatformen and OpenPlatform.
+
This project includes a client id that matches the storybook setup which can be
+used for development purposes. You can use the /auth story to sign in to
+Adgangsplatformen for the storybook context.
+
+(Note: if you enter Adgangsplatformen again after signing in, you will get
+signed out and need to log in again. This is not a bug; you stay logged
+in otherwise.)
To test the apps that are indifferent to whether the user is authenticated or not
+it is possible to set a library token via the library component in Storybook.
+Workflow:
For static code analysis we make use of the Airbnb JavaScript Style Guide
+and for formatting we make use of Prettier
+with the default configuration. The above choices have been influenced by a
+multitude of factors:
+
+
Historically Drupal core have been making use of the Airbnb JavaScript Style
+ Guide.
+
Airbnb's standard is comparatively the best known
+ and one of the most used
+ in the JavaScript coding standard landscape.
+
+
This makes future adoption easier for onboarding contributors and support is to
+be expected for a long time.
This project sticks to the above guideline as well. If we need to pass a function
+as part of a callback or in a promise chain and we on top of that need to pass
+some contextual variables that are not passed implicitly from either the callback
+or the previous link in the promise chain we want to make use of an anonymous
+arrow function as our default.
+
This comes with the built-in disclaimer that if an anonymous function isn't
+required the implementer should heavily consider moving the logic out into its
+own named function expression.
+
The named function is primarily desired because it is easier to debug in
+stacktraces.
// ./src/apps/my-new-application/my-new-application.entry.jsx
+import React from "react";
+import PropTypes from "prop-types";
+import MyNewApplication from "./my-new-application";
+
+// The props of an entry is all of the data attributes that were
+// set on the DOM element. See the section on "Naive app mount." for
+// an example.
+export function MyNewApplicationEntry(props) {
+  return <MyNewApplication text="Might be from a server?" />;
+}
+
+export default MyNewApplicationEntry;
+
+
+
+
+
+ 4. Add a story for local development
+
+
// ./src/apps/my-new-application/my-new-application.dev.jsx
+import React from "react";
+import MyNewApplicationEntry from "./my-new-application.entry";
+import MyNewApplication from "./my-new-application";
+
+export default { title: "Apps|My new application" };
+
+export function Entry() {
+  // Testing the version that will be shipped.
+  return <MyNewApplicationEntry />;
+}
+
+export function WithoutData() {
+  // Play around with the application itself without server side data.
+  return <MyNewApplication />;
+}
+
+
+
+
+
+ 5. Run the development environment
+
+
yarn dev
+
+
+OR depending on your dev environment (docker or not)
+
+
sudo yarn dev
+
+
+
+
+
Voila! Your browser should have opened and a storybook environment is ready
+for you to tinker around in.
initial: Initial state for applications that require some sort of
+initialization, such as making a request to see if a material can be ordered,
+before rendering the order button. Errors in initialization can go directly to
+the failed state, or add custom states for communicating different error
+conditions to the user. Should render either nothing or as a
+skeleton/spinner/message.
+
ready: The general "ready state". Applications that don't need
+initialization (a generic button for instance) can use ready as the initial
+state set in the useState call. This is basically the main waiting state.
+
processing: The application is taking some action. For buttons this will be
+the state used when the user has clicked the button and the application is
+waiting for a reply from the back end. More advanced applications may use it while
+doing backend requests, if reflecting the processing in the UI is desired.
+Applications using optimistic feedback will render this state the same as the
+finished state.
+
failed: Processing failed. The application renders an error message.
+
finished: End state for one-shot actions. Communicates success to the user.
+
Applications can use additional states if desired, but prefer the above if
+appropriate.
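+
+A minimal sketch of how an application could model these states; the
+ReserveButton component and its texts are hypothetical:
+
+import React, { useState } from "react";
+
+type AppStatus = "initial" | "ready" | "processing" | "failed" | "finished";
+
+function ReserveButton() {
+  // A one-shot action without initialization starts in the ready state.
+  const [status, setStatus] = useState<AppStatus>("ready");
+
+  if (status === "failed") {
+    return <p>Something went wrong.</p>;
+  }
+  if (status === "finished") {
+    return <p>The material has been reserved.</p>;
+  }
+
+  // A real implementation would fire the backend request in the click handler
+  // and move to "finished" or "failed" depending on the outcome.
+  return (
+    <button
+      disabled={status === "processing"}
+      onClick={() => setStatus("processing")}
+    >
+      Reserve
+    </button>
+  );
+}
+
+export default ReserveButton;
+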
// ./src/apps/my-new-application/my-new-application.dev.jsx
+import React from "react";
+import MyNewApplicationEntry from "./my-new-application.entry";
+import MyNewApplication from "./my-new-application";
+
+import "./my-new-application.scss";
+
+export default { title: "Apps|My new application" };
+
+export function Entry() {
+  // Testing the version that will be shipped.
+  return <MyNewApplicationEntry />;
+}
+
+export function WithoutData() {
+  // Play around with the application itself without server side data.
+  return <MyNewApplication />;
+}
+
+
+
+
+
Cowabunga! You've now got styling in your application.
This command installs the latest released version of the package. Whenever a
+new version of the design system package is released, it is necessary
+to reinstall the package in this project using the same command to get the
+newest styling, because yarn adds a specific version number to the package name
+in package.json.
If you need to work with published but unreleased code from a specific branch
+of the design system, you can also use the branch name as the tag for the npm
+package, replacing all special characters with dashes (-).
+
Example: To use the latest styling from a branch in the design system called
+feature/availability-label, run:
If the component is simple enough to be a primitive you would use on multiple
+occasions it's called an 'atom', such as a button or a link. If it's more
+specific than that and to be used across apps we just call it a component. An
+example would be some type of media presented alongside a header and some text.
+
The process when creating an atom or a component is more or less similar, but
+some structural differences might be needed.
If you use Code we provide some easy to
+use and nice defaults for this project. They are located in .vscode.example.
+Simply rename the directory from .vscode.example to .vscode and you are good
+to go. This overwrites your global user settings for this workspace and suggests
+some extensions you might want.
So let's say you wanted to make use of an application within an existing HTML
+page such as what might be generated serverside by platforms like Drupal,
+WordPress etc.
+
For this use case you should download the dist.zip package from
+the latest release of the project
+and unzip somewhere within the web root of your project. The package contains a
+set of artifacts needed to use one or more applications within an HTML page.
+
+ HTML Example
+
+A simple example of the required artifacts and how they are used looks like
+this:
+
+
<!DOCTYPE html>
+<html lang="en">
+<head>
+<meta charset="UTF-8">
+<meta name="viewport" content="width=device-width, initial-scale=1.0">
+<meta http-equiv="X-UA-Compatible" content="ie=edge">
+<title>Naive mount</title>
+<!-- Include CSS files to provide default styling -->
+<link rel="stylesheet" href="/dist/components.css">
+</head>
+<body>
+<b>Here be dragons!</b>
+<!-- Data attributes will be camelCased on the react side aka.
+     props.errorText and props.text -->
+<div data-dpl-app='add-to-checklist' data-text="Chromatic dragon"
+  data-error-text="Minor mistake"></div>
+<div data-dpl-app='a-none-existing-app'></div>
+
+<!-- Load order of scripts is of importance here -->
+<script src="/dist/runtime.js"></script>
+<script src="/dist/bundle.js"></script>
+<script src="/dist/mount.js"></script>
+<!-- After the necessary scripts you can start loading applications -->
+<script src="/dist/add-to-checklist.js"></script>
+<script>
+  // For making successful requests to the different services we need one or
+  // more valid tokens.
+  window.dplReact.setToken("user", "XXXXXXXXXXXXXXXXXXXXXX");
+  window.dplReact.setToken("library", "YYYYYYYYYYYYYYYYYYYYYY");
+
+  // If this function isn't called no apps will display.
+  // An app will only be displayed if there is a container for it
+  // and a corresponding application loaded.
+  window.dplReact.mount(document);
+</script>
+</body>
+</html>
+
+
+
+
+
As a minimum you will need the runtime.js and bundle.js. For styling
+of atoms and components you will need to import components.css.
+
Each application also has its own JavaScript artifact and it might have a CSS
+artifact as well. Such as add-to-checklist.js and add-to-checklist.css.
+
To mount the application you need an HTML element with the correct data
+attribute.
+
<div data-dpl-app='add-to-checklist'></div>
+
+
The name of the data attribute should be data-dpl-app and the value should be
+the name of the application - the value of the appName parameter assigned in
+the application .mount.js file.
As stated above, every application needs the corresponding data-dpl-app
+attribute to even be mounted and shown on the page. Additional data attributes
+can be passed if necessary. Examples would be contextual ids etc. Normally these
+would be passed in by the serverside platform e.g. Drupal, Wordpress etc.
+
<div data-dpl-app='add-to-checklist' data-id="870970-basis:54172613"
+  data-error-text="A mistake was made"></div>
+
+
The above data-id would be accessed as props.id and data-error-text as
+props.errorText in the entrypoint of an application.
To fake this in our development environment we need to pass these same data
+attributes into our entrypoint.
+
+ Example
+
+
// ./src/apps/my-new-application/my-new-application.dev.jsx
+import React from "react";
+import MyNewApplicationEntry from "./my-new-application.entry";
+import MyNewApplication from "./my-new-application";
+
+export default { title: "Apps|My new application" };
+
+export function Entry() {
+  // Testing the version that will be shipped.
+  return <MyNewApplicationEntry id="870970-basis:54172613" />;
+}
+
+export function WithoutData() {
+  // Play around with the application itself without server side data.
+  return <MyNewApplication />;
+}
+
If you want to extend this project - either by introducing new components or
+expand the functionality of the existing ones - and your changes can be
+implemented in a way that is valuable to users in general, please submit pull
+requests.
+
Even if that is not the case and you have special needs the infrastructure of
+the project should also be helpful to you.
+
In such a situation you should fork this project and extend it to your own needs
+by implementing new applications. New applications
+can reuse various levels of infrastructure provided by the project such as:
In order to improve both UX and the performance score you can choose to use
+skeleton screens in situations where you need to fill the interface
+with data from requests to an external service.
The skeleton screens are shown instantly in order to deliver
+some content to the end user fast while loading data.
+When the data arrives the skeleton screens are replaced
+with the real data.
The skeleton screens are rendered with help from the
+skeleton-screen-css library.
+ By using ssc classes
+ you can easily compose screens
+ that simulate the look of a "real" rendering with real data.
In this example we are showing a search result item as a skeleton screen.
+The skeleton screen consists of a cover, a headline and two lines of text.
+In this case we wanted to maintain the styling of the .card-list-item
+wrapper. And show the skeleton screen elements by using ssc classes.
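+
+A minimal sketch of such a skeleton screen. The exact ssc class names are
+assumptions and should be checked against the skeleton-screen-css
+documentation:
+
+import * as React from "react";
+
+const SearchResultListItemSkeleton: React.FC = () => (
+  <li className="card-list-item">
+    {/* Cover */}
+    <div className="ssc-square" />
+    <div>
+      {/* Headline */}
+      <div className="ssc-head-line" />
+      {/* Two lines of text */}
+      <div className="ssc-line" />
+      <div className="ssc-line" />
+    </div>
+  </li>
+);
+
+export default SearchResultListItemSkeleton;
+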
The main purpose of the functionality is to be able to access strings defined
+at app level inside of sub components
+without passing them all the way down via props.
+You can read more about the decision
+and considerations here.
In order to use the system the component that has the text props
+needs to be wrapped with the withText higher order component.
+The texts can hereafter be accessed by using the useText hook.
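+
+A minimal sketch of the wiring, assuming withText is exported from the same
+text utility module as useText; the component names are hypothetical:
+
+import * as React from "react";
+import { useText, withText } from "../../core/utils/text";
+
+// A nested component can request strings without receiving them as props.
+const Greeting: React.FC = () => {
+  const t = useText();
+  return <p>{t("welcomeMessageText")}</p>;
+};
+
+// The app entry receives the text props (e.g. data-welcome-message-text) and
+// is wrapped with withText so the strings are stored in Redux.
+const MyAppEntry: React.FC<{ welcomeMessageText: string }> = () => <Greeting />;
+
+export default withText(MyAppEntry);
+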
It is also possible to use placeholders in the text strings.
+They can be handy when you want dynamic values embedded in the text.
+
A classic example is the welcome message to the authenticated user.
+Let's say you have a text with the key: welcomeMessageText.
+The value from the data prop is: Welcome @username, today is @date.
+You would then need to reference it like this:
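+
+A minimal sketch mirroring the inbox example below; getUser() and formatDate()
+are hypothetical helpers:
+
+import * as React from "react";
+import { useText } from "../../core/utils/text";
+
+const WelcomeMessage: React.FC = () => {
+  const t = useText();
+  // getUser() and formatDate() are hypothetical helpers, used here in the
+  // same style as the inbox example below.
+  const user = getUser();
+
+  return (
+    <p>
+      {t("welcomeMessageText", {
+        placeholders: {
+          "@username": user.name,
+          "@date": formatDate(new Date())
+        }
+      })}
+    </p>
+  );
+};
+
+export default WelcomeMessage;
+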
Sometimes you want two versions of a text to be shown
+depending on whether you have one or multiple items being referenced in the text.
+
That can be accommodated by using the plural text definition.
+
Let's say that an authenticated user has a list of unread messages in an inbox.
+You could have a text key called: inboxStatusText.
+The value from the data prop is:
+
{"type":"plural","text":["You have 1 message in the inbox",
+"You have @count messages in the inbox"]}.
+
+
You would then need to reference it like this:
+
import * as React from "react";
+import { useText } from "../../core/utils/text";
+
+const InboxStatus: React.FC = () => {
+ const t = useText();
+ const user = getUser();
+ const inboxMessageCount = getUserInboxMessageCount(user);
+
+ const status = t("inboxStatusText", {
+ count: inboxMessageCount,
+ placeholders: {
+ "@count": inboxMessageCount
+ }
+ });
+
+ return (
+ <div>{status}</div>
+ // If count == 1 the texts will be:
+ // "You have 1 message in the inbox"
+
+ // If count == 5 the texts will be:
+ // "You have 5 messages in the inbox"
+ );
+};
+export default InboxStatus;
+
This project exists to consolidate and standardize documentation across DPL
+(Danish Public Libraries) repositories. It's based on mkdocs
+material and uses a handy
+multirepo-plugin to import
+markdown and build documentation from other DPL repositories dynamically.
Any repository you want to add to the documentation site must have a docs/
+folder in the project root on GitHub and relevant markdown must be moved inside.
+
If you wish to further divide your documentation into subfolders, mkdocs will
+accommodate this and build the site navigation accordingly. Here is an example:
+
docs/
+| README.md          # Primary docs/index file for your navigation header
+└───folder1          # Left pane navigation category
+│   │   file01.md
+│   └───subfolder1   # Expandable sub-navigation of folder1
+│   │   file02.md
+│   │   file03.md
+└───folder2          # Additional left pane navigation category
+│   file04.md
+
It uses a Github workflow to build and publish to Github pages. Since
+documentation can be updated independently from other repos, a cron has been
+established to publish the site every 6 hours. While this covers most use cases,
+it's also possible to trigger it ad hoc through Github
+actions
+with the workflow_dispatch event trigger.
+
Customizations (colors, font, social links) and general site configuration can
+be configured in
+mkdocs.yml
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/search/search_index.json b/search/search_index.json
new file mode 100644
index 00000000..965d4d0a
--- /dev/null
+++ b/search/search_index.json
@@ -0,0 +1 @@
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Documentation Site","text":"
This project exists to consolidate and standardize documentation across DPL (Danish Public Libraries) repositories. It's based on mkdocs material and uses a handy multirepo-plugin to import markdown and build documentation from other DPL repositories dynamically.
Any repository you want to add to the documentation site, must have a docs/ folder in the project root on github and relevant markdown must be moved inside.
If you wish to further divide your documentation into subfolders, mkdocs will accomodate this and build the site navigation accordingly. Here is an example:
docs/\n| README.md # Primary docs/index file for your navigation header\n\u2514\u2500\u2500\u2500folder1 # Left pane navigation category\n\u2502 \u2502 file01.md\n\u2502 \u2514\u2500\u2500\u2500subfolder1 # Expandable sub-navigation of folder1\n\u2502 \u2502 file02.md\n\u2502 \u2502 file03.md\n\u2514\u2500\u2500\u2500folder2 # Additional left pane navigation category\n\u2502 file04.md\n
The docs/ folder is self-contained, meaning that any relative links e.g. \"../../\" linking outside the docs/ directory, should use an absolute URL like: https://github.com/danskernesdigitalebibliotek/dpl-docs/blob/main/README.md
Otherwise, following the link from the documentation site will cause a 404.
It uses a Github workflow to build and publish to github pages. Since documentation can be updated independently from other repos, a cron has been established to publish the site every 6 hours. While this covers most use cases, it's also possible to trigger it adhoc through Github actions with the workflow_dispatch event trigger.
Customizations (colors, font, social links) and general site configuration can be configured in mkdocs.yml
The Markdown-files in this directory document the system as it. Eg. you can read about how to add a new entry to the core configuration.
The ./architecture folder contains our Architectural Decision Records that describes the reasoning behind key architecture decisions. Consult these records if you need background on some part of the CMS, or plan on making any modifications to the architecture.
As for the remaining files and directories
./diagrams contains diagram files like draw.io or PlantUML and rendered diagrams in png/svg format. See the section for details.
./images this is just plain images used by documentation files.
./Taskfile.yml a go-task Taskfile, run task to list available tasks.
We strive to keep the diagrams and illustrations used in the documentation as maintainable as possible. A big part of this is our use of programmatic diagramming via PlantUML and Open Source based manual diagramming via diagrams.net (formerly known as draw.io).
When a change has been made to a *.puml or *.drawio file, you should re-render the diagrams using the command task render and commit the result.
We use the RESTful Web Services and OpenAPI REST Drupal modules to expose endpoints from Drupal as an API to be consumed by external parties.
"},{"location":"dpl-cms/api-development/#howtos","title":"Howtos","text":""},{"location":"dpl-cms/api-development/#create-a-new-endpoint","title":"Create a new endpoint","text":"
Implement a new REST resource plugin by extending Drupal\\rest\\Plugin\\ResourceBase and annotating it with @RestResource
Describe uri_paths, route_parameters and responses in the annotation as detailed as possible to create a strong specification.
Install the REST UI module drush pm-enable restui
Enable and configure the new REST resource. It is important to use the dpl_login_user_token authentication provider for all resources which will be used by the frontend this will provide a library or user token by default.
Inspect the updated OpenAPI specification at /openapi/rest?_format=json to ensure looks as intended
Run task ci:openapi:validate to validate the updated OpenAPI specification
Run task ci:openapi:download to download the updated OpenAPI specification
Uninstall the REST UI module drush pm-uninstall restui
Export the updated configuration drush config-export
Commit your changes including the updated configuration and openapi.json
The site is configured with the Varnish Purge module and configured with a cache-tags based purger that ensures that changes made to the site, is purged from Varnish instantly.
The configuration follows the Lagoon best practices - reference the Lagoon documentation on Varnish for further details.
The following guidelines describe best practices for developing code for the DPL CMS project. The guidelines should help achieve:
A stable, secure and high quality foundation for building and maintaining library websites
Consistency across multiple developers participating in the project
The best possible conditions for sharing modules between DPL CMS websites
The best possible conditions for the individual DPL CMS website to customize configuration and appearance
Contributions to the core DPL CMS project will be reviewed by members of the Core team. These guidelines should inform contributors about what to expect in such a review. If a review comment cannot be traced back to one of these guidelines it indicates that the guidelines should be updated to ensure transparency.
The project follows the Drupal Coding Standards and best practices for all parts of the project: PHP, JavaScript and CSS. This makes the project recognizable for developers with experience from other Drupal projects. All developers are expected to make themselves familiar with these standards.
The following lists significant areas where the project either intentionally expands or deviates from the official standards or areas which developers should be especially aware of.
Code must be compatible with all currently available minor and major versions of PHP from 8.0 and onwards. This is important when trying to ensure smooth updates going forward. Note that this only applies to custom code.
Code must be compatible with Drupal Best Practices as defined by the Drupal Coder module
Code must use types to define function arguments, return values and class properties.
All functionality exposed through JavaScript should use the Drupal JavaScript API and must be attached to the page using Drupal behaviors.
All classes used for selectors in Javascript must be prefixed with js-. Example: <div class=\"gallery js-gallery\"> - .gallery must only be used in CSS, js-gallery must only be used in JS.
Javascript should not affect classes that are not state-classes. State classes such as is-active, has-child or similar are classes that can be used as an interlink between JS and CSS.
Modules and themes should use SCSS (The project uses PostCSS and PostCSS-SCSS). The Core system will ensure that these are compiled to CSS files automatically as a part of the development process.
Class names should follow the Block-Element-Modifier architecture (BEM). This rule does not apply to state classes.
Components (blocks) should be isolated from each other. We aim for an atomic frontend where components should be able to stand alone. In practice, there will be times where this is impossible, or where components can technically stand alone, but will not make sense from a design perspective (e.g. putting a gallery in a sidebar).
Components should be technically isolated by having 1 component per scss file. **As a general rule, you can have a file called gallery.scss which contains .gallery, .gallery__container, .gallery__* and so on. Avoid referencing other components when possible.
All components/mixins/similar must be documented with a code comment. When you create a new component.scss, there must be a comment at the top, describing the purpose of the component.
Avoid using auto-generated Drupal selectors such as .pane-content. Use the Drupal theme system to write custom HTML and use precise, descriptive class names. It is better to have several class names on the same element, rather than reuse the same class name for several components.
All \"magic\" numbers must be documented. If you need to make something e.g. 350 px, you preferably need to find the number using calculating from the context ($layout-width * 0.60) or give it a descriptive variable name ($side-bar-width // 350px works well with the current $layout-width_)
Avoid using the parent selector (.class &). The use of parent selector results in complex deeply nested code which is very hard to maintain. There are times where it makes sense, but for the most part it can and should be avoided.
All modules written specifically for Ding3 must be prefixed with dpl.
The dpl prefix is not required for modules which provide functionality deemed relevant outside the DPL community and are intended for publication on Drupal.org.
The project follows the code structure suggested by the drupal/recommended-project Composer template.
Modules, themes etc. must be placed within the corresponding folder in this repository. If a module developed in relation to this project is of general purpose to the Drupal community it should be placed on Drupal.org and included as an external dependency.
A module must provide all required code and resources for it to work on its own or through dependencies. This includes all configuration, theming, CSS, images and JavaScript libraries.
All default configuration required for a module to function should be implemented using the Drupal configuration system and stored in the version control with the rest of the project source code.
If an existing module is expanded with updates to current functionality the default behavior must be the same as previous versions or as close to this as possible. This also includes new modules which replaces current modules.
If an update does not provide a way to reuse existing content and/or configuration then the decision on whether to include the update resides with the business.
Modules which alter or extend functionality provided by other modules should use appropriate methods for overriding these e.g. by implementing alter hooks or overriding dependencies.
All interface text in modules must be in English. Localization of such texts must be handled using the Drupal translation API.
All interface texts must be provided with a context. This supports separation between the same text used in different contexts. Unless explicitly stated otherwise the module machine name should be used as the context.
"},{"location":"dpl-cms/code-guidelines/#third-party-code","title":"Third party code","text":"
The project uses package managers to handle code which is developed outside of the Core project repository. Such code must not be committed to the Core project repository.
The project uses two package manages for this:
Composer - primarily for managing PHP packages, Drupal modules and other code libraries which are executed at runtime in the production environment.
Yarn - primarily for managing code needed to establish the pipeline for managing frontend assets like linting, preprocessing and optimization of JavaScript, CSS and images.
When specifying third party package versions the project follows these guidelines:
Use the ^ next significant release operator for packages which follow semantic versioning.
The version specified must be the latest known working and secure version. We do not want accidental downgrades.
We want to allow easy updates to all working releases within the same major version.
Packages which are not intended to be executed at runtime in the production environment should be marked as development dependencies.
"},{"location":"dpl-cms/code-guidelines/#altering-third-party-code","title":"Altering third party code","text":"
The project uses patches rather than forks to modify third party packages. This makes maintenance of modified packages easier and avoids a collection of forked repositories within the project.
Use an appropriate method for the corresponding package manager for managing the patch.
Patches should be external by default. In rare cases it may be needed to commit them as a part of the project.
When providing a patch you must document the origin of the patch e.g. through an url in a commit comment or preferably in the package manager configuration for the project.
"},{"location":"dpl-cms/code-guidelines/#error-handling-and-logging","title":"Error handling and logging","text":"
Code may return null or an empty array for empty results but must throw exceptions for signalling errors.
When throwing an exception the exception must include a meaningful error message to make debugging easier. When rethrowing an exception then the original exception must be included to expose the full stack trace.
When handling an exception code must either log the exception and continue execution or (re)throw the exception - not both. This avoids duplicate log content.
Drupal modules must use the Logging API. When logging data the module must use its name as the logging channel and an appropriate logging level.
Modules integrating with third party services must implement a Drupal setting for logging requests and responses and provide a way to enable and disable this at runtime using the administration interface. Sensitive information (such as passwords, CPR-numbers or the like) must be stripped or obfuscated in the logged data.
Code comments which describe what an implementation does should only be used for complex implementations usually consisting of multiple loops, conditional statements etc.
Inline code comments should focus on why an unusual implementation has been implemented the way it is. This may include references to such things as business requirements, odd system behavior or browser inconsistencies.
Commit messages in the version control system help all developers understand the current state of the code base, how it has evolved and the context of each change. This is especially important for a project which is expected to have a long lifetime.
Commit messages must follow these guidelines:
Each line must not be more than 72 characters long
The first line of your commit message (the subject) must contain a short summary of the change. The subject should be kept around 50 characters long.
The subject must be followed by a blank line
Subsequent lines (the body) should explain what you have changed and why the change is necessary. This provides context for other developers who have not been part of the development process. The larger the change the more description in the body is expected.
If the commit is a result of an issue in a public issue tracker, platform.dandigbib.dk, then the subject must start with the issue number followed by a colon (:). If the commit is a result of a private issue tracker then the issue id must be kept in the commit body.
When creating a pull request the pull request description should not contain any information that is not already available in the commit messages.
Developers are encouraged to read How to Write a Git Commit Message by Chris Beams.
The project aims to automate compliance checks as much as possible using static code analysis tools. This should make it easier for developers to check contributions before submitting them for review and thus make the review process easier.
The following tools pay a key part here:
PHP_Codesniffer with the following rulesets:
Drupal Coding Standards as defined the Drupal Coder module
RequireStrictTypesSniff as defined by PHP_Codesniffer
Eslint and Airbnb JavaScript coding standards as defined by Drupal Core
Prettier as defined by Drupal Core
Stylelint with the following rulesets:
As defined by Drupal Core
BEM as defined by the stylelint-bem project
Browser support as defined by the stylelint-no-unsupported-browser-features project
PHPStan with the following configuration:
Analysis level 8 to support detection of missing types
Drupal support as defined by the phpstan-drupal project
Detection of deprecated code as defined by the phpstan-deprecation-rules project
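As an illustration, a phpstan.neon matching this configuration could look roughly like the following; the include paths depend on the installed packages and the actual project file may differ:

includes:
  - vendor/mglaman/phpstan-drupal/extension.neon
  - vendor/phpstan/phpstan-deprecation-rules/rules.neon

parameters:
  level: 8
  paths:
    - web/modules/custom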
In general all tools must be able to run locally. This allows developers to get quick feedback on their work.
Tools which provide automated fixes are preferred. This reduces the burden of keeping code compliant for developers.
Code which is to be exempt from these standards must be marked accordingly in the codebase - usually through inline comments (Eslint, PHP Codesniffer). This must also include a human readable reasoning. This ensures that deviations do not affect future analysis and the Core project should always pass through static analysis.
If there are discrepancies between the automated checks and the standards defined here then developers are encouraged to point this out so the automated checks or these standards can be updated accordingly.
Setting up a new site for testing certain scenarios can be repetitive. To avoid this the project provides a module: DPL Config Import. This module can be used to import configuration changes into the site and install/uninstall modules in a single step.
The configuration changes are described in a YAML file with configuration entry keys and values as well as module ids to install or uninstall.
"},{"location":"dpl-cms/config-import/#how-to-use","title":"How to use","text":"
Download the example file that comes with the module.
Edit it to set the different configuration values.
Upload the file at /admin/config/configuration/import
Clear the cache.
"},{"location":"dpl-cms/config-import/#how-it-is-parsed","title":"How it is parsed","text":"
The YAML file has two root elements: configuration and modules.
A basic file looks like this:
configuration:
  # Add keys for configuration entries to set.
  # Values will be merged with existing values.
  system.site:
    # Configuration values can be set directly
    slogan: 'Imported by DPL config import'
    # Nested configuration is also supported
    page:
      # All values in nested configuration must have a key. This is required to
      # support numeric configuration keys.
      403: '/user/login'
modules:
  # Add module ids to install or uninstall
  install:
    - menu_ui
  uninstall:
    - redis
There are multiple types of DPL CMS sites all using the same code base:
Developer (In Danish: Programmør) sites where the library is entirely free to work with the codebase for DPL CMS as they please for their site
Webmaster sites where the library can install and manage additional modules for their DPL CMS site
Editor (In Danish: Redaktør) sites where the library can configure their site based on predefined configuration options provided by DPL CMS
Core sites which are default versions of DPL CMS used for development and testing purposes
All these site types must support the following properties:
It must be possible for system administrators to deploy new versions of DPL CMS which may include changes to the site configuration
It must be possible for libraries to configure their site based on the options provided by their site type. This configuration must not be overridden by new versions of DPL CMS.
This can be split into different types of configuration:
Core configuration: This is the configuration for the base installation of DPL CMS which is shared across all sites. The configuration will be imported on deployment to support central development of the system.
Local configuration: This is the local configuration for the individual site. The level of configuration depends on the site type but no matter the type this configuration must not be overridden on deployment of new versions of DPL CMS.
"},{"location":"dpl-cms/configuration-management/#howtos","title":"Howtos","text":""},{"location":"dpl-cms/configuration-management/#install-a-new-site-from-scratch","title":"Install a new site from scratch","text":"
Run drush site-install --existing-config -y
"},{"location":"dpl-cms/configuration-management/#add-new-core-configuration","title":"Add new core configuration","text":"
Create the relevant configuration through the administration interface
Run drush config-export -y
Append the key for the configuration to config_ignore.settings.ignored_config_entities with the ~ prefix
Commit the new configuration files and the updated config_ignore.settings file
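As an illustration, the resulting config_ignore.settings.yml could contain entries along these lines, assuming system.site should be treated as core configuration (the entry is an example, not a recommendation):

ignored_config_entities:
  - '*'
  - '~system.site'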
NB: It is important that the official Drupal deployment procedure is followed. Database updates must be executed before configuration is imported. Otherwise we risk ending up in a situation where the configuration contains references to modules which are not enabled.
Creating update hooks for modules is only necessary once we have sites running in production which will not be reinstalled. Until then it is OK to enable/uninstall modules as normal and commit changes to core.extension.
Determine the UUIDs for the entities which should be exported as default content. The easiest way to do this is to enable the Devel module, view the entity and go to the Devel tab.
Add the UUID(s) (and if necessary entity types) to the dpl_example_content.info.yml file
Export the entities by running drush default-content:export-module dpl_example_content
Commit the new files under web/modules/custom/dpl_example_content
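A sketch of the kind of entry this adds to dpl_example_content.info.yml, assuming a node entity and a made-up UUID; the actual file in the project will differ:

default_content:
  node:
    - 65c47f0a-0d8f-4b41-b27b-71b5c4e3a3a0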
We use the Lagoon application delivery platform to host environments for different stages of the DPL CMS project. Our Lagoon installation is managed by the DPL Platform project.
One such type of environment is pull request environments. These environments are automatically created when a developer creates a pull request with a change against the project and allows developers and project owners to test the result before the change is accepted.
"},{"location":"dpl-cms/lagoon-environments/#howtos","title":"Howtos","text":""},{"location":"dpl-cms/lagoon-environments/#create-an-environment-for-a-pull-request","title":"Create an environment for a pull request","text":"
Create a pull request for the change on GitHub. The pull request must be created from a branch in the same repository as the target branch.
Wait for GitHub Actions related to Lagoon deployment to complete. Note: This deployment process can take a while. Be patient.
A link to the deployed environment is available in the section between pull request activity and Actions
The environment is deleted when the pull request is closed
"},{"location":"dpl-cms/lagoon-environments/#access-the-administration-interface-for-a-pull-request-environment","title":"Access the administration interface for a pull request environment","text":"
Accessing the administration interface for a pull request environment may be needed to test certain functionalities. This can be achieved in two ways:
"},{"location":"dpl-cms/lagoon-environments/#through-the-lagoon-administration-ui","title":"Through the Lagoon administration UI","text":"
Access the administration UI (see below)
Go to the environment corresponding to the pull request number
Go to the Task section for the environment
Select the \"Generate login link [drush uli]\" task and click \"Run task\"
Refresh the page to see the task in the task list and wait a bit
Refresh the page to see the task complete
Go to the task page
The log output contains a one-time login link which can be used to access the administration UI
"},{"location":"dpl-cms/lagoon-environments/#through-the-lagoon-cli","title":"Through the Lagoon CLI","text":"
Run task lagoon:drush:uli
The log output contains a one-time login link which can be used to access the administration UI
"},{"location":"dpl-cms/lagoon-environments/#access-the-lagoon-administration-ui","title":"Access the Lagoon administration UI","text":"
Contact administrators of the DPL Platform Lagoon instance to apply for a user account.
Access the URL for the UI of the instance, e.g. https://ui.lagoon.dplplat01.dpl.reload.dk/
Log in with your user account (see above)
Go to the dpl-cms project
"},{"location":"dpl-cms/lagoon-environments/#setup-the-lagoon-cli","title":"Setup the Lagoon CLI","text":"
Locate information about the Lagoon instance to use in the DPL Platform documentation
Access the URL for the UI of the instance
Log in with your user account (see above)
Go to the Settings page
Add your SSH public key to your account
Install the Lagoon CLI
Configure the Lagoon CLI to use the instance:
lagoon config add \
  --lagoon [instance name e.g. "dpl-platform"] \
  --hostname [host to connect to with SSH] \
  --port [SSH port] \
  --graphql [url to GraphQL endpoint] \
  --ui [url to UI]
Use the DPL Platform as your default Lagoon instance:
lagoon config default --lagoon [instance name]
"},{"location":"dpl-cms/lagoon-environments/#using-cron-in-pull-request-environments","title":"Using cron in pull request environments","text":"
The .lagoon.yml has an environments section where it is possible to control various settings. On the root level you specify the environment you want to address (e.g. main). On the sub level of that you can define the cron settings. The cron settings for the main branch look (at the time of writing) like this:
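The exact values are not reproduced here; a minimal sketch of the shape such a definition takes in .lagoon.yml, with an illustrative schedule rather than the project's actual one:

environments:
  main:
    cronjobs:
      - name: drush cron
        schedule: "M/15 * * * *"
        command: drush cron
        service: cli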
If you want to have cron running on a pull request environment, you have to make a similar block under the environment name of the PR. Example: In case you would have a PR with the number #135 it would look like this:
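A sketch following the same illustrative values as above:

environments:
  pr-135:
    cronjobs:
      - name: drush cron
        schedule: "M/15 * * * *"
        command: drush cron
        service: cli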
"},{"location":"dpl-cms/lagoon-environments/#workflow-with-cron-in-pull-request-environments","title":"Workflow with cron in pull request environments","text":"
This way of making sure cron is running in the PR environments is a bit tedious but it follows the way Lagoon handles it. A suggested workflow could be:
Create a PR with code changes as normal
Write the .lagoon.yml configuration block connected to the current PR #
When the PR has been approved you delete the configuration block again
"},{"location":"dpl-cms/local-development/","title":"Local development","text":""},{"location":"dpl-cms/local-development/#copy-database-from-lagoon-environment-to-local-setup","title":"Copy database from Lagoon environment to local setup","text":"
Prerequisites:
Login credentials to the Lagoon UI, or an existing database dump
The following describes how to first fetch a database-dump and then import the dump into a running local environment. Be aware that this only gives you the database, not any files from the site.
To retrieve a database dump from a running site, consult the "How do I download a database dump?" guide in the official Lagoon documentation. Skip this step if you already have a database dump.
Place the dump in the database-dump directory, be aware that the directory is only allowed to contain a single .sql file.
Start a local environment using task dev:reset
Import the database by running task dev:restore:database
"},{"location":"dpl-cms/local-development/#copy-files-from-lagoon-environment-to-local-setup","title":"Copy files from Lagoon environment to local setup","text":"
Prerequisites:
Login credentials to the Lagoon UI, or an existing nginx files dump
The following describes how to first fetch a files backup package and then replace the files in a local environment.
If you need to get new backup files from the remote site:
Log in to the Lagoon administration and navigate to the project/environment.
Select the backup tab:
Retrieve the files backup you need:
Due to a UI bug you need to reload the window and then it should be possible to download the nginx package.
Replace files locally:
Place the files dump in the files-backup directory, be aware that the directory is only allowed to contain a single .tar.gz file.
Start a local environment using task dev:reset
Restore the files by running task dev:restore:files
"},{"location":"dpl-cms/local-development/#get-a-specific-release-of-dpl-react-without-using-composer-install","title":"Get a specific release of dpl-react - without using composer install","text":"
In a development context it is not very handy only to be able to get the latest version of the main branch of dpl-react.
So a command has been implemented that downloads the specific version of the assets and overwrites the existing library.
You need to specify which branch you need to get the assets from. The latest HEAD of the given branch is automatically built by GitHub Actions so you just need to specify the branch you want.
We manage translations as a part of the codebase using .po translation files. Consequently translations must be part of either official or local translations to take effect on the individual site.
DPL CMS uses English as the master language but is configured to use Danish for all users through language negotiation. This allows us to follow a process where English is the default for the codebase but actual usage of the system is in Danish.
To make the \"translation traffic\" work following components are being used:
GitHub
Stores .po files in git with translatable strings and translations
GitHub Actions
Scans codebase for new translatable strings and commits them to GitHub
Notifies POEditor that new translatable strings are available
Publishes .po files to GitHub Pages
POEditor
Provides an interface for translators
Links translations with .po files on GitHub
Provides webhooks where external systems can notify of new translations
DPL CMS
Drupal installation which is configured to use GitHub Pages as an interface translation server from which .po files can be consumed.
The following diagram shows how these systems interact to support the flow from introducing a new translatable string in the codebase to DPL CMS consuming an updated translation with said string.
sequenceDiagram
  Actor Translator
  Actor Developer
  Developer ->> Developer: Open pull request with new translatable string
  Developer ->> GitHubActions: Merge pull request into develop
  GitHubActions ->> GitHubActions: Scan codebase and write strings to .po file
  GitHubActions ->> GitHubActions: Fill .po file with existing translations
  GitHubActions ->> GitHub: Commit .po file with updated strings
  GitHubActions ->> Poeditor: Call webhook
  Poeditor ->> GitHub: Fetch updated .po file
  Poeditor ->> Poeditor: Synchronize translations with latest strings and translations
  Translator ->> Poeditor: Translate strings
  Translator ->> Poeditor: Export strings to GitHub
  Poeditor ->> GitHub: Commit .po file with updated translations to develop
  DplCms ->> GitHub: Fetch .po file with latest translations
  DplCms ->> DplCms: Import updated translations
"},{"location":"dpl-cms/translation/#howtos","title":"Howtos","text":""},{"location":"dpl-cms/translation/#add-new-or-update-existing-translation","title":"Add new or update existing translation","text":"
Log into POEditor.com and go to the dpl-cms project
Configuration management for DPL CMS is a complex issue. The complexity stems from different types of DPL CMS sites.
There are two approaches to the problem:
All configuration is local unless explicitly marked as core configuration
All configuration is core unless explicitly marked as local configuration
A solution to configuration management must live up to the following test:
Initialize a local environment to represent a site
Import the provided configuration through site installation using drush site-install --existing-config -y
Log in to see that Core configuration is imported. This can be verified if the site name is set to DPL CMS.
Change a Core configuration value e.g. on http://dpl-cms.docker/admin/config/development/performance
Run drush config-import -y and see that the change is rolled back and the configuration value is back to default. This shows that Core configuration will remain managed by the configuration system.
Change a local configuration value like the site name on http://dpl-cms.docker/admin/config/system/site-information
Run drush config-import -y to see that no configuration is imported. This shows that local configuration which can be managed by Editor libraries will be left unchanged.
Enable and configure the Shortcut module and add a new Shortcut set.
Run drush config-import -y to see that the module is not disabled and the configuration remains unchanged. This shows that local configuration in the form of new modules added by Webmaster libraries will be left unchanged.
We use the Configuration Ignore module to manage configuration.
The module maintains a list of patterns for configuration which will be ignored during the configuration import process. This allows us to avoid updating local configuration.
By adding the wildcard * at the top of this list we choose an approach where all configuration is considered local by default.
Core configuration which should not be ignored can then be added to subsequent lines with the ~ prefix. On a site these configuration entries will be updated to match what is in the core configuration.
Config Ignore also has the option of ignoring specific values within settings. This is relevant for settings such as system.site where we consider the site name local configuration but 404 page paths core configuration.
The Deconfig module allows developers to mark configuration entries as exempt from import/export. This would allow us to exempt configuration which can be managed by the library.
This does not handle configuration coming from new modules uploaded on webmaster sites. Since we cannot know which configuration entities such modules will provide and Deconfig has no concept of wildcards we cannot exempt the configuration from these modules. Their configuration will be removed again at deployment.
We could use partial imports through drush config-import --partial to not remove configuration which is not present in the configuration filesystem.
We prefer Config Ignore as it provides a single solution to handle the entire problem space.
The Config Ignore Auto module extends the Config Ignore module. Config Ignore Auto registers configuration changes and adds them to an ignore list. This way they are not overridden on future deployments.
The module is based on the assumption that if a user has access to a configuration form they should also be allowed to modify that configuration for their site.
This turns the approach from Config Ignore on its head. All configuration is now considered core until it is changed on the individual site.
We prefer Config Ignore as it only has local configuration which may vary between sites. With Config Ignore Auto we would have local configuration and the configuration of Config Ignore Auto.
Config Ignore Auto also has special handling of the core.extension configuration which manages the set of installed modules. Since webmaster sites can have additional modules installed we would need workarounds to handle these.
Core developers will have to explicitly select new configuration to not ignore during the development process. One can not simply run drush config-export and have the appropriate configuration not ignored.
Because core.extension is ignored Core developers will have to explicitly enable and uninstall modules through code as a part of the development process.
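A minimal sketch of how this can be done in an update hook; the hook name and module lists are illustrative, while the module_installer service is standard Drupal:

<?php

/**
 * Enable menu_ui and uninstall redis as part of a deployment (illustrative).
 */
function dpl_update_update_9001(): void {
  // core.extension is ignored, so module changes are applied through code.
  \Drupal::service('module_installer')->install(['menu_ui']);
  \Drupal::service('module_installer')->uninstall(['redis']);
}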
"},{"location":"dpl-cms/architecture/adr-002-user-handling/","title":"Architecture Decision Record: User Handling","text":""},{"location":"dpl-cms/architecture/adr-002-user-handling/#context","title":"Context","text":"
There are different types of users that are interacting with the CMS system:
Patrons that are authenticated by logging into Adgangsplatformen.
Editors and administrators (and similar roles) that are handling content and configuration of the site.
We need to be able to handle that both types of users can be authenticated and authorized in the scope of permissions that are tied to the user type.
We had some discussions about whether the Adgangsplatform users should be tied to a Drupal user or not. As we saw it we had two options when a user logs in:
Keep the session/access token client side in the browser and not create a Drupal user.
Create a Drupal user and map the user with the external user.
We ended up with decision no. 2 mentioned above: we create a Drupal user upon login if one does not already exist.
We use the OpenID Connect / OAuth client module to manage patron authentication and authorization. And we have developed a plugin for the module called Adgangsplatformen which connects the external OAuth service with dpl-cms.
Editors and administrators, a.k.a. normal Drupal users, do not require additional handling.
By having a Drupal user tied to the external user we can use that context and make the server side rendering show different content according to the authenticated user.
Adgangsplatform settings have to be configured in the plugin in order to work.
Instead of creating a new user for every single user logging in via Adgangsplatformen you could consider having just one Drupal user for all the external users. That would get rid of the UUID -> Drupal user id mapping that has been implemented as it is now. And it would prevent creation of a lot of users. The decision depends on if it is necessary to distinguish between the different users on a server side level.
The DPL React components need to be integrated and available for rendering in Drupal. The components depend on a library token and an access token being set in JavaScript.
We decided to download the components with composer and integrate them as Drupal libraries.
As described in adr-002-user-handling we are setting an access token in the user session when a user has been through a successful login at Adgangsplatformen.
We decided that the library token is fetched by a cron job on a regular basis and saved in a KeyValueExpirable store which automatically expires the token when it is outdated.
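A minimal sketch of the pattern, assuming a hypothetical collection name and token fetching service; keyValueExpirable() and setWithExpire() are standard Drupal APIs, the rest is illustrative:

<?php

/**
 * Implements hook_cron().
 *
 * Illustrative only: fetch a library token and store it with an expiry.
 */
function dpl_example_cron(): void {
  // Assume a service which returns the token and its lifetime in seconds.
  ['token' => $token, 'expire' => $expire] = \Drupal::service('dpl_example.token_fetcher')->fetch();

  \Drupal::keyValueExpirable('dpl_example_library_token')
    ->setWithExpire('token', $token, $expire);
}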
The library token and the access token are set in JavaScript on the endpoint /dpl-react/user.js. By loading the script asynchronously when mounting the components in JavaScript we are able to complete the rendering.
"},{"location":"dpl-cms/architecture/adr-004-ddb-react-caching/","title":"Architecture Decision Record: Caching of DPL React and other js resources","text":""},{"location":"dpl-cms/architecture/adr-004-ddb-react-caching/#context","title":"Context","text":"
The general caching strategy is defined in another document; this document focuses on describing the caching strategy for DPL React and other JS resources.
We need to have a caching strategy that makes sure that:
The js files defined as Drupal libraries (which DPL react is) and pages that make use of them are being cached.
The same cache is being flushed upon deploy because that is the moment where new versions of DPL React can be introduced.
We have created a purger in the Drupal Varnish/Purge setup that is able to purge everything. The purger is being used in the deploy routine by the command: drush cache:rebuild-external -y
Everything will be invalidated on every deploy. Note: Although we are sending a PURGE request we found out, by studying the VCL of Lagoon, that the PURGE request is actually translated into a BAN on req.url.
"},{"location":"dpl-cms/architecture/adr-005-api-mocking/","title":"Architecture Decision Record: API mocking","text":""},{"location":"dpl-cms/architecture/adr-005-api-mocking/#context","title":"Context","text":"
DPL CMS integrates with a range of other business systems through APIs. These APIs are called both clientside (from Browsers) and serverside (from Drupal/PHP).
Historically these systems have provided setups accessible from automated testing environments. Two factors make this approach problematic going forward:
In the future not all systems are guaranteed to provide such environments with useful data.
Test systems have not been as stable as is necessary for automated testing. Systems may be down or data may be updated, which causes problems.
To address these problems and help achieve a goal of a high degree of test coverage the project needs a way to decouple from these external APIs during testing.
We use WireMock to mock API calls. WireMock provides the following features relevant to the project:
WireMock is free open source software which can be deployed in development and test environments using Docker
Wiremock can run in HTTP(S) proxy mode. This allows us to run a single instance and mock requests to all external APIs
We can use the wiremock-php client library to instrument WireMock from PHP code. We modernized the behat-wiremock-extension to instrument with Behat tests which we use for integration testing.
"},{"location":"dpl-cms/architecture/adr-005-api-mocking/#instrumentation-vs-recordreplay","title":"Instrumentation vs. record/replay","text":"
Software for mocking API requests generally provide two approaches:
Instrumentation where an API can be used to define which responses will be returned for what requests programmatically.
Record/replay where requests passing through are persisted (typically to the filesystem) and can be modified and restored at a later point in time.
Generally record/replay makes it easy to setup a lot of mock data quickly. However, it can be hard to maintain these records as it is not obvious what part of the data is important for the test and the relationship between the individual tests and the corresponding data is hard to determine.
Consequently, this project prefers instrumentation.
Instrumentation of Wiremock with PHP is made obsolete with the migration from Behat to Cypress.
"},{"location":"dpl-cms/architecture/adr-006-api-specification/","title":"Architecture Decision Record: API specification","text":""},{"location":"dpl-cms/architecture/adr-006-api-specification/#context","title":"Context","text":"
DPL CMS provides HTTP end points which are consumed by the React components. We want to document these in an established structured format.
Documenting endpoints in an established structured format allows us to use tools to generate client code for these end points. This makes consumption easier and is a practice which is already used with other services in the React components.
Currently these end points expose business logic tied to configuration in the CMS. There might be a future where we also need to expose editorial content through APIs.
We use the RESTful Web Services Drupal module to expose an API from DPL CMS and document the API using the OpenAPI 2.0/Swagger 2.0 specification as supported by the OpenAPI and OpenAPI REST Drupal modules.
This is a technically manageable, standards compliant and performant solution which supports our initial use cases and can be expanded to support future needs.
There are two other approaches to working with APIs and specifications for Drupal:
JSON:API: Drupal's JSON:API module provides many features over the REST module when it comes to exposing editorial content (or Drupal entities in general). However it does not work well with other types of functionality which is what we need for our initial use cases.
GraphQL: GraphQL is an approach which does not work well with Drupal's HTTP-based caching layer. This is important for endpoints which are called many times for each client. Also from version 4.x and beyond the GraphQL Drupal module provides no easy way for us to expose editorial content at a later point in time.
This is an automatically generated API and specification. To avoid other changes leading to unintended changes, we keep the latest version of the specification in VCS and set up automation to ensure that the generated specification matches the intended one. When developers update the API they have to use the provided tasks to update the stored API accordingly.
OpenAPI and OpenAPI REST are Drupal modules which have not seen updates for a while. We have to apply patches to get them to work for us. Also they do not support the latest version of the OpenAPI specification, 3.0. We risk increased maintenance because of this.
"},{"location":"dpl-cms/architecture/adr-007-cypress-functional-testing/","title":"Architecture Decision Record: Cypress for functional testing","text":""},{"location":"dpl-cms/architecture/adr-007-cypress-functional-testing/#context","title":"Context","text":"
DPL CMS employs functional testing to ensure the functional integrity of the project.
This is currently implemented using Behat which allows developers to instrument a browser navigating through different use cases using Gherkin, a business readable, domain specific language. Behat is used within the project based on experience using it from the previous generation of DPL CMS.
Several factors have caused us to reevaluate that decision:
The Drupal community has introduced Nightwatch for browser testing
Usage of Behat within the Drupal community has stagnated
Developers within this project have questioned the developer experience and future maintenance of Behat
Developers have gained experience using Cypress for browser based testing of the React components
There are other prominent tools which can be used for browser based functional testing:
Playwright: Playwright is a promising tool for browser based testing. It supports many desktop and mobile browsers. It does not have the same widespread usage as Cypress.
Although Cypress supports intercepting requests to external systems this only works for clientside requests. To maintain a consistent approach to mocking both serverside and clientside requests to external systems we integrate Cypress with Wiremock using a similar approach to what we have done with Behat.
There is a community developed module which integrates Drupal with Cypress. We choose not to use this as it provided limited value to our use case and we prefer to avoid increased complexity.
We will not be able to test on mobile browsers as this is not supported by Cypress. We prefer consistency across projects and expected improved developer efficiency over the added complexity of introducing a tool that supports this natively or expanding the Cypress setup to support mobile testing.
We opt not to use Gherkin to describe our test cases. The business has decided that this has not provided sufficient value for the existing project, so the additional complexity is not needed. Cypress community plugins support writing tests in Gherkin. These could be used in the future.
"},{"location":"dpl-cms/architecture/adr-008-external-system-integration/","title":"Architecture Decision Record: Integration with external systems","text":""},{"location":"dpl-cms/architecture/adr-008-external-system-integration/#context","title":"Context","text":"
DPL CMS is only intended to integrate with one external system: Adgangsplatformen. This integration is necessary to obtain patron and library tokens needed for authentication with other business systems. All these integrations should occur in the browser through React components.
The purpose of this is to avoid having data passing through the CMS as an intermediary. This way the CMS avoids storing or transmitting sensitive data. It may also improve performance.
In some situations it may be beneficial to let the CMS access external systems to provide a better experience for business users, e.g. by displaying options with understandable names instead of technical ids or validating data before it reaches end users.
Implementing React components to provide administrative controls in the CMS. This would increase the complexity of implementing such controls and cause implementors to not consider improvements to the business user experience.
We allow PHP client code generation for external services. These should only include APIs to be used with library tokens. This signals which APIs are OK to be accessed server-side.
The CMS must only access services using the library token provided by the dpl_library_token.handler service.
The current translation system for UI strings in DPL CMS is based solely on code deployment of .po files.
However DPL CMS is expected to be deployed in about 100 instances just to cover the Danish Public Library institutions. Making small changes to the UI texts in the codebase would require a new deployment for each of the instances.
Requiring code changes to update translations also makes it difficult for non-technical participants to manage the process themselves. They have to find a suitable tool to edit .po files and then pass the updated files to a developer.
This process could be optimized if:
Translations were provided by a central source
Translations could be managed directly by non-technical users
Distribution of translations is decoupled from deployment
We keep using GitHub as a central source for translation files.
We configure Drupal to consume translations from GitHub. The Drupal translation system already supports runtime updates and consuming translations from a remote source.
We use POEditor to perform translations. POEditor is a translation management tool that supports .po files and integrates with GitHub. To detect new UI strings a GitHub Actions workflow scans the codebase for new strings and notifies POEditor. Here they can be translated by non-technical users. POEditor supports committing translations back to GitHub where they can be consumed by DPL CMS instances.
This approach has a number of benefits apart from addressing the original issue:
POEditor is a specialized tool to manage translations. It supports features such as translation memory, glossaries and machine translation.
POEditor is web-based. Translators avoid having to find and install a suitable tool to edit .po files.
POEditor is software-as-a-service. We do not need to maintain the translation interface ourselves.
POEditor is free for open source projects. This means that we can use it without having to pay for a license.
Code scanning means that new UI strings are automatically detected and available for translation. We do not have to manually synchronize translation files or ensure that UI strings are rendered by the system before they can be translated. This can be complex when working with special cases, error messages etc.
Translations are stored in version control. Managing state is complex and this means that we have easy visibility into changes.
Translations are stored on GitHub. We can move away from POEditor at any time and still have access to all translations.
We reuse existing systems instead of building our own.
A consequence of this approach is that developers have to write code that supports scanning. This is partly supported by the Drupal Code Standards. To support contexts developers also have to include these as a part of the t() function call e.g.
// Good
$this->t('A string to be translated', [], ['context' => 'The context']);
$this->t('Another string', [], ['context' => 'The context']);

// Bad
$c = ['context' => 'The context'];
$this->t('A string to be translated', [], $c);
$this->t('Another string', [], $c);
We could consider writing a custom sniff or PHPStan rule to enforce this
To cover the functionality of scanning the code we had two potential projects that could solve the case:
Potion
Potx
Both projects can scan the codebase and generate a .po or .pot file with the translation strings and context.
At first it made most sense to go for Potx since it is used by localize.drupal.org and has a long history. But Potx extracts strings to a .pot file without the possibility of filling in the existing translations. So we ended up using Potion which can fill in the existing strings.
A flip side of using Potion is that it is no longer maintained. But it seems quite stable and a lot of work has been put into it. We could consider backing it ourselves.
Establishing our own localization server. This proved to be very complex. Available solutions are either technically outdated or still under heavy development. None of them have integration with GitHub where our project is located.
Using a separate instance of DPL CMS in Lagoon as a central translation hub. Such an instance would require maintenance and we would have to implement a method for exposing translations to other instances.
DPL Design System is a library of UI components that should be used as a common base system for "Danmarks Biblioteker" / "Det Digitale Folkebibliotek". The design is implemented with Storybook / React and is output with HTML markup and css-classes through an addon in Storybook.
The codebase follows the naming that designers have used in Figma closely to ensure consistency.
The GitHub NPM package registry requires authentication if you are to access packages there.
Consequently, if you want to use the design system as an NPM package or if you use a project that depends on the design system as an NPM package you must authenticate:
Create a GitHub token with the required scopes: repo and read:packages
Run npm login --registry=https://npm.pkg.github.com
Enter the following information:
> Username: [Your GitHub username]
> Password: [Your GitHub token]
> Email: [An email address used with your GitHub account]
Note that you will need to reauthenticate when your personal access token expires.
"},{"location":"dpl-design-system/#deployment-and-releases","title":"Deployment and releases","text":"
The project is automatically built and deployed on pushes to every branch and every tag and the result is available as releases which support both types of usage. This applies for the original repository on GitHub and all GitHub forks.
You can follow the status of deployments in the Actions list for the repository on GitHub. The action logs also contain additional details regarding the contents and publication of each release. If using a fork then deployment actions can be seen on the corresponding list.
In general consuming projects should prefer tagged releases as they are stable proper releases.
During development where the design system is being updated in parallel with the implementation of a consuming project it may be advantageous to use a release tagging a branch.
The project automatically creates a release for each branch.
Example: Pushing a commit to a new branch feature/reservation-modal will create the following parts:
A git tag for the commit release-feature/reservation-modal. A tag is needed to create a GitHub release.
A GitHub release for the branch called feature/reservation-modal. The build is attached here.
A package in the local npm repository tagged feature-reservation-modal. Special characters like / are not supported by npm tags and are converted to -.
Updating the branch will update all parts accordingly.
If your release belongs to a fork you can use aliasing to point to the release of the package in the npm repository for the fork:
npm config set @my-fork:registry=https://npm.pkg.github.com
npm install @danskernesdigitalebibliotek/dpl-design-system@npm:@my-fork/dpl-design-system@feature-reservation-modal
This will update your package.json and lock files accordingly. Note that branch releases use temporary versions in the format 0.0.0-[GIT-SHA] and you may in practice see these referenced in both files.
If you push new code to the branch you have to update the version used in the consuming project:
Find the release for the branch on the releases page on GitHub and download the dist.zip file from there and use it as needed in the consuming project.
If your branch belongs to a fork then you can find the release on the releases page for the fork.
Repeat the process if you push new code to the branch.
"},{"location":"dpl-design-system/#what-is-storybook","title":"What is Storybook","text":"
Storybook is an open source tool for building UI components and pages in isolation from your app's business logic, data, and context. Storybook helps you document components for reuse and automatically visually test your components to prevent bugs. It promotes the component-driven process and agile development.
It is possible to extend Storybook with an ecosystem of addons that help you do things like fine-tune responsive layouts or verify accessibility.
"},{"location":"dpl-design-system/#how-to-use","title":"How to use","text":"
The Storybook interface is simple and intuitive to use. Browse the project's stories now by navigating to them in the sidebar.
The stories are placed in a flat structure, where developers should not spend time thinking of structure, since we want to keep all parts of the system under a heading called Library. This Library is then divided into folders where common parts are kept together.
To expose to the user how we think these parts stitch together, for example for the new website, we have a heading called Blocks, to resemble what CMS blocks a user can expect to find when building pages in the chosen CMS.
This could be replicated in mobile applications, newsletters etc., all pulling parts from the Library.
Each story has a corresponding .stories file. View their code in the src/stories directory to learn how they work. The stories file is used to add the component to the Storybook interface via the title. Start the title with "Library" or "Blocks" and use / to divide into folders, e.g. Library / Buttons / Button
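A minimal sketch of such a stories file, assuming a hypothetical Button component exported from ./Button:

// Button.stories.jsx - illustrative only.
import { Button } from "./Button";

export default {
  // The title places the story under Library / Buttons in the sidebar.
  title: "Library / Buttons / Button",
  component: Button,
};

export const Primary = () => <Button label="Reserve" />;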
Storybook ships with some essential pre-installed addons to power the core Storybook experience.
Controls
Actions
Docs
Viewport
Backgrounds
Toolbars
Measure
Outline
There are many other helpful addons to customise the usage and experience. Additional addons used for this project:
HTML / storybook-addon-html: This addon is used to display compiled HTML markup for each story and make it easier for developers to grab the code. Because we are developing with React, it is necessary to be able to show the HTML markup with the css-classes to make it easier for other developers that will implement it in the future. If a story has controls the HTML markup changes according to the controls that are set.
Designs / storybook-addon-designs: This addon is used to embed Figma in the addon panel for a better design-development workflow.
A11Y: This addon is used to check the accessibility of the components.
All the addons can be found in the storybook/main.js file.
"},{"location":"dpl-design-system/#important-to-notice","title":"Important to notice","text":""},{"location":"dpl-design-system/#internal-classes","title":"Internal classes","text":"
To display some components (e.g. Colors, Spacing) in a more presentable way, we are using some "internal" css-classes which can be found in the styles/internal.scss file. All css-classes with "internal" in front should therefore be ignored in the HTML markup.
We decided to implement skeleton screens when loading data. The skeleton screens are rendered in pure css. The css classes come from the library: skeleton-screen-css
The library is very small and based on simple css rules, so we could have considered replicating it in our own design system or make something similar. But by using the open source library we are ensured, to a certain extent, that the code is being maintained, corrected and evolves as time goes by.
We could also have chosen to use images or GIFs to render the screens. But by using the simple toolbox of skeleton-screen-css we should be able to make screens for all the different use cases in the different apps.
It is now possible, with a limited amount of work, to construct skeleton screens in the loading state of the various user interfaces.
Because we use a library where skeletons are implemented purely in CSS we also provide a solution which can be consumed in any technology already using the design system without any additional dependencies, client side or server side.
"},{"location":"dpl-design-system/architecture/adr-001-skeleton-screens/#bem-rules-when-using-skeleton-screen-classes-in-dpl-design-system","title":"BEM rules when using Skeleton Screen Classes in dpl-design-system","text":"
Because we want to use the existing styling setup in conjunction with the Skeleton Screen Classes we sometimes need to ignore the existing BEM rules that we normally comply with. See e.g. the search result styling.
We configure all production backups with a backup schedule that ensures that the site is backed up at least once a day.
Backups executed by the k8up operator follow a backup schedule and then use Restic to perform the backup itself. The backups are stored in an Azure Blob Container, see the Environment infrastructure for a depiction of its place in the architecture.
The backup schedule and retention are configured via the individual site's .lagoon.yml. The file is re-rendered from a template every time a site is deployed. The templates for the different site types can be found as a part of dpladm.
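As an illustration of the kind of configuration the rendered .lagoon.yml can contain, with made-up values rather than the project's actual schedule and retention:

backup-schedule:
  production: "M H(22-2) * * *"

backup-retention:
  production:
    hourly: 0
    daily: 7
    weekly: 6
    monthly: 1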
Refer to the lagoon documentation on backups for more general information.
Refer to any runbooks relevant to backups for operational instructions on e.g. retrieving a backup.
The following guidelines describe best practices for developing code for the DPL Platform project. The guidelines should help achieve:
A stable, secure and high quality foundation for building and maintaining the platform and its infrastructure.
Consistency across multiple developers participating in the project
Contributions to the core DPL Platform project will be reviewed by members of the Core team. These guidelines should inform contributors about what to expect in such a review. If a review comment cannot be traced back to one of these guidelines it indicates that the guidelines should be updated to ensure transparency.
The project follows the Drupal Coding Standards and best practices for all parts of the project: PHP, JavaScript and CSS. This makes the project recognizable for developers with experience from other Drupal projects. All developers are expected to make themselves familiar with these standards.
The following lists significant areas where the project either intentionally expands or deviates from the official standards or areas which developers should be especially aware of.
Code comments which describe what an implementation does should only be used for complex implementations usually consisting of multiple loops, conditional statements etc.
Inline code comments should focus on why an unusual implementation has been implemented the way it is. This may include references to such things as business requirements, odd system behavior or browser inconsistencies.
Commit messages in the version control system help all developers understand the current state of the code base, how it has evolved and the context of each change. This is especially important for a project which is expected to have a long lifetime.
Commit messages must follow these guidelines:
Each line must not be more than 72 characters long
The first line of your commit message (the subject) must contain a short summary of the change. The subject should be kept around 50 characters long.
The subject must be followed by a blank line
Subsequent lines (the body) should explain what you have changed and why the change is necessary. This provides context for other developers who have not been part of the development process. The larger the change the more description in the body is expected.
If the commit is a result of an issue in a public issue tracker, platform.dandigbib.dk, then the subject must start with the issue number followed by a colon (:). If the commit is a result of a private issue tracker then the issue id must be kept in the commit body.
When creating a pull request the pull request description should not contain any information that is not already available in the commit messages.
Developers are encouraged to read How to Write a Git Commit Message by Chris Beams.
The project aims to automate compliance checks as much as possible using static code analysis tools. This should make it easier for developers to check contributions before submitting them for review and thus make the review process easier.
The following tools play a key part here:
terraform fmt for standard Terraform formatting.
markdownlint-cli2 for linting markdown files. The tool is configured via /.markdownlint-cli2.yaml
ShellCheck with its default configuration.
In general all tools must be able to run locally. This allows developers to get quick feedback on their work.
Tools which provide automated fixes are preferred. This reduces the burden of keeping code compliant for developers.
Code which is to be exempt from these standards must be marked accordingly in the codebase - usually through inline comments (markdownlint, ShellCheck). This must also include a human readable reasoning. This ensures that deviations do not affect future analysis and the Core project should always pass through static analysis.
If there are discrepancies between the automated checks and the standards defined here then developers are encouraged to point this out so the automated checks or these standards can be updated accordingly.
Architecture Decision Records (ADR) describe the reasoning behind key decisions made during the design and implementation of the platform's architecture. These documents stand apart from the remaining documentation in that they keep a historical record, while the rest of the documentation is a snapshot of the current system.
Platform Environment Architecture gives an overview of the parts that makes up a single DPL Platform environment.
Performance strategy Describes the approach the platform takes to meet performance requirements.
The DPL-CMS Drupal sites utilizes a multi-tier caching strategy. HTTP responses are cached by Varnish and Drupal caches its various internal data-structures in a Redis key/value store.
All inbound requests are passed in to an Ingress Nginx controller which forwards the traffic for the individual sites to their individual Varnish instances.
Varnish serves up any static or anonymous responses it has cached from its object-store.
If the request is a cache miss the request is passed further on to Nginx which serves any requests for static assets.
If the request is for a dynamic page the request is forwarded to the Drupal installation hosted by PHP-FPM.
Drupal bootstraps, and produces the requested response.
During this process it will either populate or reuse its cache which is stored in Redis.
Depending on the request Drupal will execute a number of queries against MariaDB and a search index.
"},{"location":"dpl-platform/architecture/performance-strategy/#caching-of-http-responses","title":"Caching of http responses","text":"
Varnish will cache any HTTP responses that fulfill the following requirements:
Is not associated with a PHP session (i.e. the user is not logged in)
Is a 200 response
Refer to the Lagoon drupal.vcl, the docs.lagoon.sh documentation on the Varnish service and the varnish-drupal image for the specifics of the service.
Refer to the caching documentation in dpl-cms for specifics on how DPL-CMS is integrated with Varnish.
"},{"location":"dpl-platform/architecture/performance-strategy/#redis-as-caching-backend","title":"Redis as caching backend","text":"
DPL-CMS is configured to use Redis as the backend for its core cache as an alternative to the default use of the sql-database as backend. This ensures that a busy site does not overload the shared mariadb-server.
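A minimal sketch of how Drupal is typically pointed at Redis in settings.php; the host name is an assumption based on a Lagoon service named redis, and the actual project settings may differ:

<?php

// Illustrative only.
$settings['redis.connection']['interface'] = 'PhpRedis';
$settings['redis.connection']['host'] = 'redis';
$settings['cache']['default'] = 'cache.backend.redis';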
A DPL Platform environment consists of a range of infrastructure components on top of which we run a managed Kubernetes instance into which we install a number of software products. One of these is Lagoon which gives us a platform for hosting library sites.
An environment is created in two separate stages. First all required infrastructure resources are provisioned, then a semi-automated deployment process is carried out which configures all the various software components that make up an environment. Consult the relevant runbooks and the DPL Platform Infrastructure documents for the guides on how to perform the actual installation.
This document describes all the parts that make up a platform environment ranging from the infrastructure to the sites.
Azure Infrastructure describes the raw cloud infrastructure
Software Components describes the base software products we install to support the platform including Lagoon
Sites describes how we define the individual sites on a platform and the approach the platform takes to deployment.
All resources of a Platform environment are contained in a single Azure Resource Group. The resources are provisioned via a Terraform setup that keeps its resources in a separate resource group.
The overview of current platform environments along with the various urls and a summary of their primary configurations can be found in the Current Platform environments document.
A platform environment uses the following Azure infrastructure resources.
A virtual Network - with a subnet, configured with access to a number of services.
Separate storage accounts for
Monitoring data (logs)
Lagoon files (eg. results of running user-triggered administrative actions)
Backups
Drupal site files
A MariaDB used to host the sites databases.
A Key Vault that holds administrative credentials to resources that Lagoon needs administrative access to.
An Azure Kubernetes Service cluster that hosts the platform itself.
Two Public IPs: one for ingress, one for egress.
The Azure Kubernetes Service in return creates its own resource group that contains a number of resources that are automatically managed by the AKS service. AKS also has a managed control-plane component that is mostly invisible to us. It has a separate managed identity which we need to grant access to any additional infrastructure resources outside the "MC" resource group that we need AKS to manage.
The Platform consists of a number of software components deployed into the AKS cluster. The components are generally installed via Helm, and their configuration controlled via values-files.
Essential configurations such as the urls for the site can be found in the wiki
The following sections will describe the overall role of the component and how it integrates with other components. For more details on how the component is configured, consult the corresponding values-file for the component found in the individual environments configuration folder.
Lagoon is an Open Source Platform as a Service created by Amazee. The platform builds on top of a Kubernetes cluster and provides features such as automated builds and the hosting of a large number of sites.
Kubernetes does not come with an Ingress Controller out of the box. An ingress controller's job is to accept traffic approaching the cluster and route it via services to pods that have requested ingress traffic.
We use the widely used Ingress Nginx controller.
Cert Manager allows an administrator to specify a request for a TLS certificate, e.g. as a part of an Ingress, and have the request automatically fulfilled.
The platform uses a cert-manager configured to handle certificate requests via Let's Encrypt.
"},{"location":"dpl-platform/architecture/platform-environment-architecture/#prometheus-and-alertmanager","title":"Prometheus and Alertmanager","text":"
Prometheus is a time series database used by the platform to store and index runtime metrics from both the platform itself and the sites running on the platform.
Prometheus is configured to scrape and ingest the following sources
Node Exporter (Kubernetes runtime metrics)
Ingress Nginx
Prometheus is installed via an Operator which amongst other things allows us to configure Prometheus and Alertmanager via ServiceMonitor and AlertmanagerConfig.
Alertmanager handles the delivery of alerts produced by Prometheus.
Grafana provides the graphical user-interface to Prometheus and Loki. It is configured with a number of data sources via its values-file, which connects it to Prometheus and Loki.
"},{"location":"dpl-platform/architecture/platform-environment-architecture/#loki-and-promtail","title":"Loki and Promtail","text":"
Loki stores and indexes logs produced by the pods running in AKS. Promtail streams the logs to Loki, and Loki in turn makes the logs available to the administrator via Grafana.
Each individual library has a GitHub repository that describes which sites should exist on the platform for the library. The creation of the repository and its contents is automated, and controlled by an entry in a sites.yaml file shared by all sites on the platform.
Consult the following runbooks to see the procedures for:
sites.yaml is found in infrastructure/environments/<environment>/sites.yaml. The file contains a single map, where the configuration of the individual sites is contained under the property sites.<unique site key>, e.g.
sites:
  # Site objects are indexed by a unique key that must be a valid lagoon, and
  # github project name. That is, alphanumeric and dashes.
  core-test1:
    name: "Core test 1"
    description: "Core test site no. 1"
    # releaseImageRepository and releaseImageName describe where to pull the
    # container image for a release from.
    releaseImageRepository: ghcr.io/danskernesdigitalebibliotek
    releaseImageName: dpl-cms-source
    # Sites can optionally specify primary and secondary domains.
    primary-domain: core-test.example.com
    # Fully configured sites will have a deployment key generated by Lagoon.
    deploy_key: "ssh-ed25519 <key here>"
  bib-ros:
    name: "Roskilde Bibliotek"
    description: "Webmaster environment for Roskilde Bibliotek"
    primary-domain: "www.roskildebib.dk"
    # The secondary domain will redirect to the primary.
    secondary-domains: ["roskildebib.dk", "www2.roskildebib.dk"]
    # A series of sites that shares the same image source may choose to reuse
    # properties via anchors
    << : *default-release-image-source
"},{"location":"dpl-platform/architecture/platform-environment-architecture/#environment-site-git-repositories","title":"Environment Site Git Repositories","text":"
Each platform site is controlled via a GitHub repository. The repositories are provisioned via Terraform. The following depicts the authorization and control flow in use:
The configuration of each repository is reconciled each time a site is created.
We loosely follow the guidelines for ADRs described by Michael Nygard.
A record should attempt to capture the situation that led to the need for a discrete choice to be made, and then proceed to describe the core of the decision, its status and the consequences of the decision.
To summarize, an ADR could contain the following sections (quoted from the above article):
Title: These documents have names that are short noun phrases. For example, "ADR 1: Deployment on Ruby on Rails 3.0.10" or "ADR 9: LDAP for Multitenant Integration"
Context: This section describes the forces at play, including technological, political, social, and project local. These forces are probably in tension, and should be called out as such. The language in this section is value-neutral. It is simply describing facts.
Decision: This section describes our response to these forces. It is stated in full sentences, with active voice. "We will …"
Status: A decision may be "proposed" if the project stakeholders haven't agreed with it yet, or "accepted" once it is agreed. If a later ADR changes or reverses a decision, it may be marked as "deprecated" or "superseded" with a reference to its replacement.
Consequences: This section describes the resulting context, after applying the decision. All consequences should be listed here, not just the "positive" ones. A particular decision may have positive, negative, and neutral consequences, but all of them affect the team and project in the future.
The Danish Libraries needed a platform for hosting a large number of Drupal installations. As it was unclear exactly how to build such a platform and how best to fulfill a number of requirements, a Proof Of Concept project was initiated to determine whether to use an existing solution or build a platform from scratch.
The main factors behind the decision to use Lagoon were:
Much lower cost of maintenance than a self-built platform.
The platform is continually updated, and the updates are available for free.
A well-established platform with a lot of proven functionality right out of the box.
The option of professional support by Amazee
When using and integrating with Lagoon we should strive to
Make as little modifications to Lagoon as possible
Whenever possible, use the defaults, recommendations and best practices documented on eg. docs.lagoon.sh
We do this to keep true to the initial thought behind choosing Lagoon as a platform that gives us a lot of functionality for a (comparatively) small investment.
The main alternative that was evaluated was to build a platform from scratch. While this might have led to a more customized solution that more closely matched any requirements the libraries may have, it would also have required a very large initial investment and a large ongoing investment to keep the platform maintained and updated.
We could also choose to fork Lagoon and start making heavy modifications to the platform to end up with a solution customized for our needs. The downsides of this approach have already been outlined.
The platform is required to be able to handle an estimated 275.000 page-views per day spread out over 100 websites. A visit to a website that causes the browser to request a single html-document followed by a number of assets is only counted as a single page-view.
On a given day we expect about half of the page-views to be made by authenticated users. We furthermore expect the busiest site to receive about 12% of the traffic.
Given these numbers, we can make some estimates of the expected average load. To stay fairly conservative we will still assume that about 50% of the traffic is anonymous and can thus be cached by Varnish, but we will assume that all sites get traffic as if they were the busiest site on the platform (12%).
12% of 275.000 page-views gives us an average of 33.000 page-views per day. To keep to the conservative side, we concentrate the load to a period of 8 hours. We then end up with roughly 1 page-view pr. second.
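The arithmetic behind the estimate can be sketched as a quick back-of-the-envelope check, assuming the load is concentrated in an 8 hour window:

# 12% of 275,000 daily page-views, concentrated into 8 hours.
echo $((275000 * 12 / 100))                 # 33000 page-views per day
echo "scale=2; 33000 / (8 * 60 * 60)" | bc  # ~1.1 page-views per second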
The platform is hosted on a Kubernetes cluster on which Lagoon is installed. As of late 2021, Lagoon's approach to handling rightsizing of PHP and in particular Drupal applications is based on a number of factors:
Web workloads are extremely spiky. While a site may appear to have a sustained load of 5 rps when viewed from afar, it will in fact have anything from (eg.) 0 to 20 simultaneous users in a given second.
Resource requirements are very ephemeral. Even though a request has a peak memory usage of 128MB, it only requires that amount of memory for a very short period of time.
Kubernetes nodes have a limit on how many pods will fit on a given node. This will constrain the scheduler from scheduling too many pods to a node even if the workloads have declared a very low resource request.
With metrics-server enabled, Kubernetes will keep track of the actual available resources on a given node. So a node with eg. 4GB ram, hosting workloads with a requested resource allocation of 1GB but actually taking up 3.8GB of ram, will not get scheduled another pod as long as there are other nodes in the cluster that have more resources available.
The consequence of the above is that we can pack a lot more workload onto a single node than what would be expected if you only look at the theoretical maximum resource requirements.
Lagoon sets its resource-requests based on a helm values default in the kubectl-build-deploy-dind image. The default is typically 10Mi pr. container which can be seen in the nginx-php chart which runs a php and nginx container. Lagoon configures php-fpm to allow up to 50 children and allows php to use up to 400Mi memory.
Combining these numbers we can see that a site that is scheduled as if it only uses 20 Megabytes of memory can in fact take up to 20 Gigabytes. The main thing that keeps this from happening in practice is a combination of the above assumptions: no node will have more than a limited number of pods, and on a given second no site will have nearly as many inbound requests as it could have.
Lagoon is a very large and complex solution. Any modification to Lagoon will need to be well tested, and maintained going forward. With this in mind, we should always strive to use Lagoon as it is, unless the alternative is too costly or problematic.
Based on real-life operations feedback from Amazee (creators of Lagoon) and the context outlined above we will:
Leave the Lagoon defaults as they are, meaning most pods will request 10Mi of memory.
Let the scheduler be informed by runtime metrics instead of up front pod resource requests.
Rely on the node maximum pods to provide some horizontal spread of pods.
We've inspected the process Lagoon uses to deploy workloads, and determined that it would be possible to alter the defaults used without too many modifications.
The build-deploy-docker-compose.sh script that renders the manifests describing a site's workloads via Helm includes a service-specific values-file. This file can be used to modify the defaults for the Helm chart. By creating a custom container image for the build process, based on the upstream Lagoon build image, we can deliver our own version of this image.
As an example, the following Dockerfile will add a custom values file for the redis service.
FROM docker.io/uselagoon/kubectl-build-deploy-dind:latest
COPY redis-values.yaml /kubectl-build-deploy/
Given the following redis-values.yaml
resources:
  requests:
    cpu: 10m
    memory: 100Mi
The Redis deployment would request 100Mi instead of the previous default of 10Mi.
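A hedged way to verify the override after the next deployment is to inspect the resource requests on the site's Redis deployment directly. The namespace follows the <sitename>-<branchname> pattern used elsewhere in this documentation, and the deployment name may differ between chart versions.

# Show the resource requests for the redis deployment in a site namespace.
kubectl -n <sitename>-main get deployment redis \
  -o jsonpath='{.spec.template.spec.containers[0].resources.requests}'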
Building upon the modification described in the previous chapter, we could go even further and modify the build-script itself. By inspecting project variables we could have the build-script pass in eg. a configurable value for replicaCount for a pod. This would allow us to introduce a small/medium/large concept for sites. This could be taken even further to eg. introduce whole new services into Lagoon.
This could lead to problems for sites that require a lot of resources, but given the expected average load, we do not expect this to be a problem even if a site receives an order of magnitude more traffic than the average.
The approach to rightsizing may also be a bad fit if we see a high concentration of \"non-spiky\" workloads. We know for instance that Redis and in particular Varnish is likely to use a close to constant amount of memory. Should a lot of Redis and Varnish pods end up on the same node, evictions are very likely to occur.
The best way to handle these potential situations is to be knowledgeable about how to operate Kubernetes and Lagoon, and to monitor the workloads as they are in use.
"},{"location":"dpl-platform/architecture/adr/adr-003-system-alerts/","title":"ADR-003 System alerts","text":""},{"location":"dpl-platform/architecture/adr/adr-003-system-alerts/#context","title":"Context","text":"
There has been a wish for functionality that alerts administrators if certain system values have gone beyond defined threshold rules.
We have tried installing Alertmanager and testing it. It works, and given the various possibilities for defining alert rules, we consider the requirements to be fulfilled.
We will be able to get alerts regarding thresholds on both container and cluster level which is what we need.
Alertmanager fits in the general focus of being cloud agnostic. It is CNCF approved and does not have any external infrastructure dependencies.
"},{"location":"dpl-platform/architecture/adr/adr-004-declarative-site-management/","title":"ADR 004: Declarative Site management","text":""},{"location":"dpl-platform/architecture/adr/adr-004-declarative-site-management/#context","title":"Context","text":"
Lagoon requires a site to be deployed from a Git repository containing a .lagoon.yml and a docker-compose.yml. A logical consequence of this is that we require a Git repository per site we want to deploy, and that we require that repository to maintain those two files.
Administering the creation and maintenance of 100+ Git repositories cannot be done manually without risk of inconsistency and errors. The industry best practice for administering large-scale infrastructure is to follow a declarative Infrastructure as Code (IaC) pattern. By keeping the approach declarative it is much easier for automation to reason about the intended state of the system.
Furthermore, in a standard Lagoon setup, Lagoon is connected to the "live" application repository that contains the source code you wish to deploy. In this approach Lagoon will just deploy whatever the HEAD of a given branch points to. In our case, we perform the build of a site's source release separately from deploying it, which means the site's repository needs to be updated with a release version whenever we wish it to be updated. This is not a problem for a small number of sites - you can just update the repository directly - but for a large set of sites that you may wish to administer in bulk, keeping track of which version is used where becomes a challenge. This is yet another good case for declarative configuration: instead of modifying individual repositories by hand to deploy, we would much rather just declare that a set of sites should be on a specific release and then let automation take over.
While there is no authoritative discussion of imperative vs. declarative IaC, the following quote from an OVH Tech Blog summarizes the current consensus in the industry pretty well:
In summary declarative infrastructure tools like Terraform and CloudFormation offer a much lower overhead to create powerful infrastructure definitions that can grow to a massive scale with minimal overheads. The complexities of hierarchy, timing, and resource updates are handled by the underlying implementation so you can focus on defining what you want rather than how to do it.
The additional power and control offered by imperative style languages can be a big draw but they also move a lot of the responsibility and effort onto the developer, be careful when choosing to take this approach.
We administer the deployment to and the Lagoon configuration of a library site in a repository per library. The repositories are provisioned via Terraform which reads in a central sites.yaml file. The same file is used as input for the automated deployment process which renders the various files contained in the repository, including a reference to which release of DPL-CMS Lagoon should use.
It is still possible to create and maintain sites on Lagoon independent of this approach. We can for instance create a separate project for the dpl-cms repository to support core development.
We could have run each site as a branch off a single large repository. This was rejected as a possibility as it would have made it hard to control access to a given library's deployed revision. By using individual repositories we have the option of granting an outside developer access to a full repository without affecting any other.
This directory contains the Infrastructure as Code and scripts that are used for maintaining the infrastructure components that each platform environment consists of. A "platform environment" is an umbrella term for the Azure infrastructure, the Kubernetes cluster, the Lagoon installation and the set of GitHub environments that make up a single DPL Platform installation.
dpladm/: a tool used for deploying individual sites. The tool can be run manually, but the recommended way is via the common infrastructure Taskfile.
environments/: contains a directory for each platform environment.
terraform: terraform setup and tooling that is shared between environments.
task/: configuration and scripts used by our Taskfile-based automation. The scripts in this directory can be run by hand in an emergency, but the recommended way is to invoke them via task.
Taskfile.yml: the common infrastructure task configuration. Invoke task to get a list of targets. Must be run from within an instance of DPL shell unless otherwise noted.
The environments directory contains a subdirectory for each platform environment. You generally interact with the files and directories within the directory to configure the environment. When a modification has been made, it is put into effect by running the appropriate task:
configuration: contains the various configurations required by the applications that are installed on top of the infrastructure. These are used by the support:provision:* tasks.
env_repos: contains the Terraform root-module for provisioning GitHub site-environment repositories. The module is run via the env_repos:provision task.
infrastructure: contains the Terraform root-module used to provision the basic Azure infrastructure components that the platform requires. The module is run via the infra:provision task.
lagoon: contains Kubernetes manifests and Helm values-files used for installing the Lagoon Core and Remote that are at the heart of a DPL Platform installation. The module is run via the lagoon:provision:* tasks.
"},{"location":"dpl-platform/infrastructure/#basic-usage-of-dplsh-and-an-environment-configuration","title":"Basic usage of dplsh and an environment configuration","text":"
The remaining guides in this document assume that you work from an instance of the DPL shell. See the DPLSH Runbook for a basic introduction to how to use dplsh.
"},{"location":"dpl-platform/infrastructure/#installing-a-platform-environment-from-scratch","title":"Installing a platform environment from scratch","text":"
The following describes how to set up a whole new platform environment to host platform sites.
The easiest way to set up a new environment is to create a new environments/<name> directory and copy the contents of an existing environment replacing any references to the previous environment with a new value corresponding to the new environment. Take note of the various URLs, and make sure to update the Current Platform environments documentation.
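A minimal sketch of the copy-and-replace approach, assuming an existing dplplat01 environment is used as the template and dplplat02 is the name of the new environment:

# Copy an existing environment directory and replace references to its name.
cp -r environments/dplplat01 environments/dplplat02
grep -rl dplplat01 environments/dplplat02 | xargs sed -i 's/dplplat01/dplplat02/g'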
If this is the very first environment, remember to first initialize the Terraform setup, see the terraform README.md.
When you have prepared the environment directory, launch dplsh and go through the following steps to provision the infrastructure:
# We export the variable to simplify the example, you can also specify it inline.
export DPLPLAT_ENV=dplplat01

# Provision the Azure resources
task infra:provision

# Create DNS record:
# Create an A record in the administration area of your DNS provider.
# Take the terraform output "ingress_ip" of the former command and create an entry
# like: "*.[DOMAIN_NAME].[TLD]": "[ingress_ip]"

# Provision the support software that the Platform relies on
task support:provision
"},{"location":"dpl-platform/infrastructure/#installing-and-configuring-lagoon","title":"Installing and configuring Lagoon","text":"
The previous step has established the raw infrastructure and the Kubernetes support projects that Lagoon needs to function. You can proceed to follow the official Lagoon installation procedure.
The execution of the individual steps of the guide has been somewhat automated. The following describes how to use the automation; make sure to follow along in the official documentation to understand the steps and any additional actions you have to take.
# The following must be carried out from within dplsh, launched as described
# in the previous step including the definition of DPLPLAT_ENV.

# 1. Provision a lagoon core into the cluster.
task lagoon:provision:core

# 2. Skip the steps in the documentation that speak about setting up email, as
#    we currently do not support sending emails.

# 3. Setup ssh-keys for the lagoonadmin user
# Access the Lagoon UI (consult the platform-environments.md for the url) and
# log in with lagoonadmin + the admin password that can be extracted from a
# Kubernetes secret:
kubectl \
  -o jsonpath="{.data.KEYCLOAK_LAGOON_ADMIN_PASSWORD}" \
  -n lagoon-core \
  get secret lagoon-core-keycloak \
  | base64 --decode

# Then go to settings and add the ssh-keys that should be able to access the
# lagoon admin user. Consider keeping this list short, and instead add
# additional users with fewer privileges later.

# 4. If your ssh-key is passphrase-protected we'll need to setup an ssh-agent
#    instance:
$ eval $(ssh-agent); ssh-add

# 5. Configure the CLI to verify access (the cli itself has already been
#    installed in dplsh)
task lagoon:cli:config

# You can now add additional users, this step is currently skipped.

# (6. Install Harbor.)
# This step has already been performed as a part of the installation of
# support software.

# 7. Install a Lagoon Remote into the cluster
task lagoon:provision:remote

# 8. Register the cluster administered by the Remote with Lagoon Core
# Notice that you must provide a bearer token via the USER_TOKEN environment
# variable. The token can be found in $HOME/.lagoon.yml after a successful
# "lagoon login"
USER_TOKEN=<token> task lagoon:add:cluster
The Lagoon core has now been installed, and the remote registered with it.
"},{"location":"dpl-platform/infrastructure/#setting-up-a-github-organization-and-repositories-for-a-new-platform-environment","title":"Setting up a GitHub organization and repositories for a new platform environment","text":"
Prerequisites:
A properly authenticated Azure CLI (az). See the section on the initial Terraform setup for more details on the requirements.
First create a new administrative GitHub user and create a new organization with the user. The administrative user should only be used for administering the organization via Terraform and its credentials kept as safe as possible! The account's password can be used as a last resort for gaining access to the account and will not be stored in Key Vault. Thus, make sure to store the password somewhere safe, eg. in a password manager or as a physical printout.
This requires the infrastructure to have been created, as we're going to store credentials in the Azure Key Vault.
# cd into the infrastructure folder and launch a shell
(host)$ cd infrastructure
(host)$ dplsh

# Remaining commands are run from within dplsh

# export the platform environment name.
# export DPLPLAT_ENV=<name>, eg
$ export DPLPLAT_ENV=dplplat01

# 1. Create a ssh keypair for the user, eg by running
#    ssh-keygen -t ed25519 -C "<comment>" -f dplplatinfra01_id_ed25519
# eg.
$ ssh-keygen -t ed25519 -C "dplplatinfra@0120211014073225" -f dplplatinfra01_id_ed25519

# 2. Then access github and add the public part of the key to the account

# 3. Add the key to keyvault under the key name "github-infra-admin-ssh-key"
# eg.
$ SECRET_KEY=github-infra-admin-ssh-key SECRET_VALUE=$(cat dplplatinfra01_id_ed25519) \
  task infra:keyvault:secret:set

# 4. Access GitHub again, and generate a Personal Access Token for the account.
#    The token should
#    - be named after the platform environment (eg. dplplat01-terraform-timestamp)
#    - Have a fairly long expiration - do remember to renew it
#    - Have the following permissions: admin:org, delete_repo, repo

# 5. Add the access token to Key Vault under the name "github-infra-admin-pat"
# eg.
$ SECRET_KEY=github-infra-admin-pat SECRET_VALUE=githubtokengoeshere task infra:keyvault:secret:set

# Our tooling can now administer the GitHub organization
"},{"location":"dpl-platform/infrastructure/#renewing-the-administrative-github-personal-access-token","title":"Renewing the administrative GitHub Personal Access Token","text":"
The Personal Access Token we use for impersonating the administrative GitHub user needs to be recreated periodically:
# cd into the infrastructure folder and launch a shell
(host)$ cd infrastructure
(host)$ dplsh

# Remaining commands are run from within dplsh

# export the platform environment name.
# export DPLPLAT_ENV=<name>, eg
$ export DPLPLAT_ENV=dplplat01

# 1. Access GitHub, and generate a Personal Access Token for the account.
#    The token should
#    - be named after the platform environment (eg. dplplat01-terraform)
#    - Have a fairly long expiration - do remember to renew it
#    - Have the following permissions: admin:org, delete_repo, repo

# 2. Add the access token to Key Vault under the name "github-infra-admin-pat"
# eg.
$ SECRET_KEY=github-infra-admin-pat SECRET_VALUE=githubtokengoeshere \
  task infra:keyvault:secret:set

# 3. Delete the previous token
The setup keeps a single Terraform state per environment. Each state is kept as a separate blob in an Azure Storage Account.
Access to the storage account is granted via a Storage Account Key which is kept in an Azure Key Vault in the same resource group. The key vault, storage account and the resource group that contains these resources are the only resources that are not provisioned via Terraform.
"},{"location":"dpl-platform/infrastructure/terraform/#initial-setup-of-terraform","title":"Initial setup of Terraform","text":"
The following procedure must be carried out before the first environment can be created.
Prerequisites:
An Azure subscription
An authenticated Azure CLI that is allowed to create resources and grant access to these resources under the subscription, including Key Vaults. The easiest way to achieve this is to grant the user the Owner and Key Vault Administrator roles on the subscription.
Use the scripts/bootstrap-tf.sh script for bootstrapping. After the script has run successfully it outputs instructions for how to set up a Terraform module that uses the newly created storage account for state-tracking.
As a final step you must grant any administrative users that are to use the setup permission to read from the created key vault.
The setup uses an integration with DNSimple to set a domain name when the environment's ingress IP has been provisioned. To use this integration, first obtain an API key for the DNSimple account. Then use scripts/add-dnsimple-apikey.sh to write it to the setup's Key Vault and finally add the following section to .dplsh.profile (get the subscription id and key vault name from the existing export for ARM_ACCESS_KEY).
export DNSIMPLE_TOKEN=$(az keyvault secret show --subscription "<subscriptionid>" \
  --name dnsimple-api-key --vault-name <key vault-name> --query value -o tsv)
export DNSIMPLE_ACCOUNT="<dnsimple-account-id>"
The dpl-platform-environment Terraform module provisions all resources that are required for a single DPL Platform Environment.
Inspect variables.tf for a description of the required module-variables.
Inspect outputs.tf for a list of outputs.
Inspect the individual module files for documentation of the resources.
The following diagram depicts (amongst other things) the provisioned resources. Consult the platform environment documentation for more details on the role the various resources plays.
"},{"location":"dpl-platform/infrastructure/terraform/#dpl-platform-site-environment-module","title":"DPL Platform Site Environment Module","text":"
The dpl-platform-env-repos Terraform module provisions the GitHub Git repositories that the platform uses to integrate with Lagoon. Each site hosted on the platform has a repository.
Inspect variables.tf for a description of the required module-variables.
Inspect outputs.tf for a list of outputs.
Inspect the individual module files for documentation of the resources.
The following diagram depicts how the module gets its credentials for accessing GitHub and what it provisions.
This directory contains our operational runbooks for standard procedures you may need to carry out while maintaining and operating a DPL Platform environment.
Most runbooks have the following layout.
Title - Short title that follows the name of the markdown file for quick lookup.
When to use - Outlines when this runbook should be used.
Prerequisites - Any requirements that should be met before the procedure is followed.
Procedure - Stepwise description of the procedure, sometimes these will be whole subheadings, sometimes just a single section with lists.
The runbooks should focus on the "How", and avoid lengthy explanations of the "Why".
"},{"location":"dpl-platform/runbooks/access-kubernetes/","title":"Access Kubernetes","text":""},{"location":"dpl-platform/runbooks/access-kubernetes/#when-to-use","title":"When to use","text":"
When you need to gain kubectl access to a platform environment's Kubernetes cluster.
An authenticated az cli (from the host). This likely means az login --tenant TENANT_ID, where the tenant id is that of "DPL Platform". See Azure Portal > Tenant Properties. The logged in user must have permissions to list cluster credentials.
docker cli which is authenticated against the GitHub Container Registry. The access token used must have the read:packages scope.
Set the platform environment, eg. for "dplplat01": export DPLPLAT_ENV=dplplat01
Authenticate: task cluster:auth
Your dplsh session should now be authenticated against the cluster.
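As an optional sanity check, you can confirm that the credentials work before proceeding; a minimal sketch, run from within dplsh:

# Verify that the kubeconfig obtained by cluster:auth actually grants access.
kubectl get nodes
kubectl auth can-i get pods --all-namespaces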
"},{"location":"dpl-platform/runbooks/add-generic-site-to-platform/","title":"Add a generic site to the platform","text":""},{"location":"dpl-platform/runbooks/add-generic-site-to-platform/#when-to-use","title":"When to use","text":"
When you want to add a "generic" site to the platform. By generic we mean a site stored in a repository that is prepared for Lagoon and contains a .lagoon.yml at its root.
The current main example of such a site is dpl-cms which is used to develop the shared DPL install profile.
The following describes a semi-automated version of "Add a Project" in the official documentation.
# From within dplsh:
# Set an environment,
# export DPLPLAT_ENV=<platform environment name>
# eg.
$ export DPLPLAT_ENV=dplplat01

# If your ssh-key is passphrase-protected we'll need to setup an ssh-agent
# instance:
$ eval $(ssh-agent); ssh-add

# 1. Authenticate against the cluster and lagoon
$ task cluster:auth
$ task lagoon:cli:config

# 2. Add a project
# PROJECT_NAME=<project name> GIT_URL=<url> task lagoon:project:add
$ PROJECT_NAME=dpl-cms GIT_URL=git@github.com:danskernesdigitalebibliotek/dpl-cms.git \
  task lagoon:project:add

# 2.b You can also run lagoon add project manually, consult the documentation linked
#     in the beginning of this section for details.

# 3. Deployment key
# The project is added, and a deployment key is printed. Copy it and configure
# the GitHub repository. See the official documentation for examples.

# 4. Webhook
# Configure GitHub to post events to Lagoon's webhook url.
# The webhook url for the environment will be
# https://webhookhandler.lagoon.<environment>.dpl.reload.dk
# eg. for the environment dplplat01
# https://webhookhandler.lagoon.dplplat01.dpl.reload.dk
#
# Refer to the official documentation linked above for an example on how to
# set up webhooks in github.

# 5. Configure image registry credentials Lagoon should use for the project
# IF your project references private images in repositories that require
# authentication
# Refresh your Lagoon token.
$ lagoon login

# Then export a github personal access-token with pull access.
# We could pass this to task directly like the rest of the variables but we
# opt for the export to keep the execution of task a bit shorter.
$ export VARIABLE_VALUE=<github pat>

# Then get the project id by listing your projects
$ lagoon list projects

# Finally, add the credentials
$ VARIABLE_TYPE_ID=<project id> \
  VARIABLE_TYPE=PROJECT \
  VARIABLE_SCOPE=CONTAINER_REGISTRY \
  VARIABLE_NAME=GITHUB_REGISTRY_CREDENTIALS \
  task lagoon:set:environment-variable

# If you get an "Invalid Auth Token" your token has probably expired, generate a
# new one with "lagoon login" and try again.

# 6. Trigger a deployment manually, this will fail as the repository is empty
#    but will serve to prepare Lagoon for future deployments.
# lagoon deploy branch -p <project-name> -b <branch>
$ lagoon deploy branch -p dpl-cms -b main
"},{"location":"dpl-platform/runbooks/add-library-site-to-platform/","title":"Add a new library site to the platform","text":""},{"location":"dpl-platform/runbooks/add-library-site-to-platform/#when-to-use","title":"When to use","text":"
When you want to add a new core-test, editor, webmaster or programmer dpl-cms site to the platform.
For now specify a unique site key (its key in the map of sites), name and description. Leave out the deployment key; you will add it in a later step.
Sample entry (beware that this example may be out of sync with the environment you are operating on, so make sure to compare it with existing entries from the environment):
The last entry merges in a default set of properties for the source of release-images. If the site is on the "programmer" plan, specify a custom set of properties like so:
sites:
  bib-rb:
    name: "Roskilde Bibliotek"
    description: "Roskilde Bibliotek"
    primary-domain: "www.roskildebib.dk"
    secondary-domains: ["roskildebib.dk"]
    dpl-cms-release: "1.2.3"
    # Github package registry used as an example here, but any registry will
    # work.
    releaseImageRepository: ghcr.io/some-github-org
    releaseImageName: some-image-name
Be aware that the referenced images need to be publicly available as Lagoon currently only authenticates against ghcr.io.
Then continue to provision a GitHub repository for the site.
"},{"location":"dpl-platform/runbooks/add-library-site-to-platform/#step-2-provision-a-github-repository","title":"Step 2: Provision a Github repository","text":"
Run task env_repos:provision to create the repository.
"},{"location":"dpl-platform/runbooks/add-library-site-to-platform/#create-a-lagoon-project-and-connect-the-github-repository","title":"Create a Lagoon project and connect the GitHub repository","text":"
Prerequisites:
A Lagoon account on the Lagoon core with your ssh-key associated
The git-url for the site's environment repository
A personal access-token that is allowed to pull images from the image-registry that hosts our images.
The platform environment name (Consult the platform environment documentation)
The following describes a semi-automated version of "Add a Project" in the official documentation.
# From within dplsh:
# Set an environment,
# export DPLPLAT_ENV=<platform environment name>
# eg.
$ export DPLPLAT_ENV=dplplat01

# If your ssh-key is passphrase-protected we'll need to setup an ssh-agent
# instance:
$ eval $(ssh-agent); ssh-add

# 1. Authenticate against the cluster and lagoon
$ task cluster:auth
$ task lagoon:cli:config

# 2. Add a project
# PROJECT_NAME=<project name> GIT_URL=<url> task lagoon:project:add
$ PROJECT_NAME=core-test1 GIT_URL=git@github.com:danishpubliclibraries/env-core-test1.git \
  task lagoon:project:add

# The project is added, and a deployment key is printed, use it for the next step.

# 3. Add the deployment key to sites.yaml under the key "deploy_key".
$ vi environments/${DPLPLAT_ENV}/sites.yaml
# Then update the repositories using Terraform
$ task env_repos:provision

# 4. Configure image registry credentials Lagoon should use for the project:
# Refresh your Lagoon token.
$ lagoon login

# Then export a github personal access-token with pull access.
# We could pass this to task directly like the rest of the variables but we
# opt for the export to keep the execution of task a bit shorter.
$ export VARIABLE_VALUE=<github pat>

# Then get the project id by listing your projects
$ lagoon list projects

# Finally, add the credentials
$ VARIABLE_TYPE_ID=<project id> \
  VARIABLE_TYPE=PROJECT \
  VARIABLE_SCOPE=CONTAINER_REGISTRY \
  VARIABLE_NAME=GITHUB_REGISTRY_CREDENTIALS \
  task lagoon:set:environment-variable

# If you get an "Invalid Auth Token" your token has probably expired, generate a
# new one with "lagoon login" and try again.

# 5. Trigger a deployment manually, this will fail as the repository is empty
#    but will serve to prepare Lagoon for future deployments.
# lagoon deploy branch -p <project-name> -b <branch>
$ lagoon deploy branch -p core-test1 -b main
If you want to deploy a release to the site, continue to Deploying a release.
"},{"location":"dpl-platform/runbooks/connecting-the-lagoon-cli/","title":"Connecting the Lagoon CLI","text":""},{"location":"dpl-platform/runbooks/connecting-the-lagoon-cli/#when-to-use","title":"When to use","text":"
When you want to use the Lagoon API via the CLI. You can connect from the DPL Shell, or from a local installation of the CLI.
Using the DPL Shell requires administrative privileges to the infrastructure, while a local CLI may connect using only the ssh-key associated with a Lagoon user.
This runbook documents both cases, as well as how an administrator can extract the basic connection details a standard user needs to connect to the Lagoon installation.
Your ssh-key associated with a lagoon user. This has to be done via the Lagoon UI by either you for your personal account, or by an administrator who has access to edit your Lagoon account.
For local installations of the cli:
The Lagoon CLI installed locally
Connectivity details for the Lagoon environment
For administrative access to extract connection details or use the lagoon cli from within the dpl shell:
A valid dplsh setup to extract the connectivity details
"},{"location":"dpl-platform/runbooks/connecting-the-lagoon-cli/#procedure","title":"Procedure","text":""},{"location":"dpl-platform/runbooks/connecting-the-lagoon-cli/#obtain-the-connection-details-for-the-environment","title":"Obtain the connection details for the environment","text":"
You can skip this step and go to Configure your local lagoon cli if your environment is already in Current Platform environments and you just want to have a local lagoon cli working.
If it is missing, go through the steps below and update the document if you have access, or ask someone who has.
# Launch dplsh.
$ cd infrastructure
$ dplsh

# 1. Set an environment,
#    export DPLPLAT_ENV=<platform environment name>
# eg.
$ export DPLPLAT_ENV=dplplat01

# 2. Authenticate against AKS, needed by infrastructure and Lagoon tasks
$ task cluster:auth

# 3. Generate the Lagoon CLI configuration and authenticate
# The Lagoon CLI is authenticated via ssh-keys. DPLSH will mount your .ssh
# folder from your homedir, but if your keys are passphrase protected, we need
# to unlock them.
$ eval $(ssh-agent); ssh-add
# Authorize the lagoon cli
$ task lagoon:cli:config

# List the connection details
$ lagoon config list
"},{"location":"dpl-platform/runbooks/connecting-the-lagoon-cli/#configure-your-local-lagoon-cli","title":"Configure your local lagoon cli","text":"
Get the details in the angle-brackets from Current Platform environments:
# Set the configuration as default.
lagoon config default --lagoon <Lagoon name>
lagoon login
lagoon whoami

# Eg.
lagoon config default --lagoon dplplat01
lagoon login
lagoon whoami
"},{"location":"dpl-platform/runbooks/deploy-a-release/","title":"Deploy a dpl-cms release to a site","text":""},{"location":"dpl-platform/runbooks/deploy-a-release/#when-to-use","title":"When to use","text":"
When you wish to roll out a release of DPL-CMS or a fork to a single site.
If you want to deploy to more than one site, simply repeat the procedure for each site.
# 1. Make any changes to the site's entry in sites.yaml you need.
# 2. (optional) diff the deployment
DIFF=1 SITE=<sitename> task site:sync
# 3a. Synchronize the site, triggering a deployment if the current state differs
#     from the intended state in sites.yaml
SITE=<sitename> task site:sync
# 3b. If the state does not differ but you still want to trigger a deployment,
#     specify FORCE=1
FORCE=1 SITE=<sitename> task site:sync
The following diagram outlines the full release flow, starting from dpl-cms (or a fork) until the release is deployed:
"},{"location":"dpl-platform/runbooks/remove-site-from-platform/","title":"Removing a site from the platform","text":""},{"location":"dpl-platform/runbooks/remove-site-from-platform/#when-to-use","title":"When to use","text":"
When you wish to delete a site and all its data from the platform
Prerequisites:
The platform environment name
An user with administrative access to the environment repository
A lagoon account with your ssh-key associated
The site key (its key in sites.yaml)
A properly authenticated Azure CLI (az) that has administrative access to the cluster running the Lagoon installation
The procedure consists of the following steps (the numbers do not correspond to the numbers in the script below).
Download and archive relevant backups
Remove the project from Lagoon
Delete the project's namespace from Kubernetes.
Delete the site from sites.yaml
Delete the site's environment repository
Your first step should be to secure any backups you think might be relevant to archive. Whether this step is necessary depends on the site. Consult the Retrieve and Restore backups runbook for the operational steps.
You are now ready to perform the actual removal of the site.
# Launch dplsh.
$ cd infrastructure
$ dplsh

# You are assumed to be inside dplsh from now on.

# 1. Set an environment,
#    export DPLPLAT_ENV=<platform environment name>
# eg.
$ export DPLPLAT_ENV=dplplat01

# 2. Setup access to ssh-keys so that the lagoon cli can authenticate.
$ eval $(ssh-agent); ssh-add

# 3. Authenticate against lagoon
$ task lagoon:cli:config

# 4. Delete the project from Lagoon
# lagoon delete project --project <site machine-name>
$ lagoon delete project --project core-test1

# 5. Authenticate against kubernetes
$ task cluster:auth

# 6. List the namespaces
# Identify all the project namespaces with the syntax <sitename>-<branchname>
# eg. "core-test1-main" for the main branch for the "core-test1" site.
$ kubectl get ns

# 7. Delete each site namespace
# kubectl delete ns <namespace>
# eg.
$ kubectl delete ns core-test1-main

# 8. Edit sites.yaml, remove the entry for the site
$ vi environments/${DPLPLAT_ENV}/sites.yaml
# Then have Terraform delete the site's repository.
$ task env_repos:provision
"},{"location":"dpl-platform/runbooks/retrieve-restore-backup/","title":"Retrieving backups","text":""},{"location":"dpl-platform/runbooks/retrieve-restore-backup/#when-to-use","title":"When to use","text":"
When you wish to download an automatic backup made by Lagoon, and optionally restore it into an existing site.
While most steps are different for file and database backups, step 1 is close to identical for the two guides.
Be aware that the guide instructs you to copy the backups to /tmp inside the cli pod. Depending on the resources available on the node, /tmp may not have enough space, in which case you may need to modify the cli deployment to add a temporary volume, or place the backup inside the existing sites/default/files folder.
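A hedged way to check up front whether /tmp in the cli pod has room for the backup (SITE_NS is the site namespace as used in the steps below):

# Compare the size of the backup with the free space in /tmp of the cli pod.
kubectl -n $SITE_NS exec deployment/cli -- df -h /tmp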
"},{"location":"dpl-platform/runbooks/retrieve-restore-backup/#step-1-downloading-the-backup","title":"Step 1, downloading the backup","text":"
To download the backup, access the Lagoon UI and schedule the retrieval of a backup. To do this:
Log in to the environment's Lagoon UI (consult the environment documentation for the url)
Access the site's project
Access the site's environment ("Main" for production)
Click on the "Backups" tab
Click on the "Retrieve" button for the backups you wish to download and/or restore. Use the "Source" column to differentiate the types of backups: "nginx" are backups of the site's files, while "mariadb" are backups of the site's database.
The buttons change to "Downloading..." when pressed; wait for them to change to "Download", then click them again to download the backup
"},{"location":"dpl-platform/runbooks/retrieve-restore-backup/#step-2a-restore-a-database","title":"Step 2a, restore a database","text":"
To restore the database we must first copy the backup into a running cli-pod for a site, and then import the database-dump on top of the running site.
Copy the uncompressed mariadb sql file you want to restore into the dpl-platform/infrastructure folder from which we will launch dplsh
Launch dplsh from the infrastructure folder (instructions) and follow the procedure below:
# 1. Authenticate against the cluster.
$ task cluster:auth

# 2. List the namespaces to identify the sites namespace
# The namespace will be on the form <sitename>-<branchname>
# eg "bib-rb-main" for the "main" branch for the "bib-rb" site.
$ kubectl get ns

# 3. Export the name of the namespace as SITE_NS
# eg.
$ export SITE_NS=bib-rb-main

# 4. Copy the *mariadb.sql file to the CLI pod in the sites namespace
# eg.
kubectl cp \
  -n $SITE_NS \
  *mariadb.sql \
  $(kubectl -n $SITE_NS get pod -l app.kubernetes.io/instance=cli -o jsonpath="{.items[0].metadata.name}"):/tmp/database-backup.sql

# 5. Have drush inside the CLI-pod import the database and clear out the backup
kubectl exec \
  -n $SITE_NS \
  deployment/cli \
  -- \
  bash -c " \
    echo Verifying file \
    && test -s /tmp/database-backup.sql \
    || (echo database-backup.sql is missing or empty && exit 1) \
    && echo Dropping database \
    && drush sql-drop -y \
    && echo Importing backup \
    && drush sqlc < /tmp/database-backup.sql \
    && echo Clearing cache \
    && drush cr \
    && rm /tmp/database-backup.sql
  "
"},{"location":"dpl-platform/runbooks/retrieve-restore-backup/#step-2b-restore-a-sites-files","title":"Step 2b, restore a sites files","text":"
To restore backed up files into a site we must first copy the backup into a running cli-pod for the site, and then rsync the files on top of the running site.
Copy the tar.gz file into the dpl-platform/infrastructure folder from which we will launch dplsh
Launch dplsh from the infrastructure folder (instructions) and follow the procedure below:
# 1. Authenticate against the cluster.
$ task cluster:auth

# 2. List the namespaces to identify the sites namespace
# The namespace will be on the form <sitename>-<branchname>
# eg "bib-rb-main" for the "main" branch for the "bib-rb" site.
$ kubectl get ns

# 3. Export the name of the namespace as SITE_NS
# eg.
$ export SITE_NS=bib-rb-main

# 4. Copy the files tar-ball into the CLI pod in the sites namespace
# eg.
kubectl cp \
  -n $SITE_NS \
  backup*-nginx-*.tar.gz \
  $(kubectl -n $SITE_NS get pod -l app.kubernetes.io/instance=cli -o jsonpath="{.items[0].metadata.name}"):/tmp/files-backup.tar.gz

# 5. Replace the current files with the backup.
# The following
# - Verifies the backup exists
# - Removes the existing sites/default/files
# - Un-tars the backup into its new location
# - Fixes permissions and clears the cache
# - Removes the backup archive
#
# These steps can also be performed one by one if you want to.
kubectl exec \
  -n $SITE_NS \
  deployment/cli \
  -- \
  bash -c " \
    echo Verifying file \
    && test -s /tmp/files-backup.tar.gz \
    || (echo files-backup.tar.gz is missing or empty && exit 1) \
    && tar ztf /tmp/files-backup.tar.gz data/nginx &> /dev/null \
    || (echo could not verify the tar.gz file files-backup.tar && exit 1) \
    && test -d /app/web/sites/default/files \
    || (echo Could not find destination /app/web/sites/default/files \
    && exit 1) \
    && echo Removing existing sites/default/files \
    && rm -fr /app/web/sites/default/files \
    && echo Unpacking backup \
    && mkdir -p /app/web/sites/default/files \
    && tar --strip 2 --gzip --extract --file /tmp/files-backup.tar.gz \
       --directory /app/web/sites/default/files data/nginx \
    && echo Fixing permissions \
    && chmod -R 777 /app/web/sites/default/files \
    && echo Clearing cache \
    && drush cr \
    && echo Deleting backup archive \
    && rm /tmp/files-backup.tar.gz
  "
# NOTE: In some situations some files in /app/web/sites/default/files might
# be locked by running processes. In that situation delete all the files you
# can from /app/web/sites/default/files manually, and then repeat the step
# above skipping the removal of /app/web/sites/default/files
"},{"location":"dpl-platform/runbooks/retrieve-sites-logs/","title":"Retrieve site logs","text":""},{"location":"dpl-platform/runbooks/retrieve-sites-logs/#when-to-use","title":"When to use","text":"
When you want to inspect the logs produced by a specific site
Access the environment's Grafana installation - consult the platform-environments.md for the url.
Select "Explore" in the left-most menu and select the "Loki" data source at the top.
Query the logs for the environment by either
Use the Log Browser to pick the namespace for the site. It will follow the pattern <sitename>-<branchname>.
Do a custom LogQL query, eg. to fetch all logs from the nginx container for the "main" branch of the "rdb" site, do a query on the form {app="nginx-php-persistent",container="nginx",namespace="rdb-main"}
Eg, for the main branch for the site "rdb": {namespace="rdb-main"}
Click "Inspector" -> "Data" -> "Download Logs" to download the log lines.
"},{"location":"dpl-platform/runbooks/run-a-lagoon-task/","title":"Run a Lagoon Task","text":""},{"location":"dpl-platform/runbooks/run-a-lagoon-task/#when-to-use","title":"When to use","text":"
"},{"location":"dpl-platform/runbooks/scale-aks/","title":"Scaling AKS","text":""},{"location":"dpl-platform/runbooks/scale-aks/#when-to-use","title":"When to use","text":"
When the cluster is over or underprovisioned and needs to be scaled.
There are multiple approaches to scaling AKS. We run with the auto-scaler enabled, which means that in most cases what you want to do is to adjust the maximum or minimum configuration for the autoscaler.
Adjusting the autoscaler
"},{"location":"dpl-platform/runbooks/scale-aks/#adjusting-the-autoscaler","title":"Adjusting the autoscaler","text":"
Edit the infrastructure configuration for your environment. Eg for dplplat01 edit dpl-platform/dpl-platform/infrastructure/environments/dplplat01/infrastructure/main.tf.
Adjust the *_count_min / *_count_max corresponding to the node-pool you want to grow/shrink.
Then run infra:provision to have terraform effect the change.
task infra:provision
"},{"location":"dpl-platform/runbooks/set-environment-variable/","title":"Set an environment variable for a site","text":""},{"location":"dpl-platform/runbooks/set-environment-variable/#when-to-use","title":"When to use","text":"
When you wish to set an environment variable on a site. The variable can be available for either all sites in the project, or for a specific site in the project.
The variables are safe for holding secrets, and as such can be used both for \"normal\" configuration values, and secrets such as api-keys.
The variable will be available to all containers in the environment and can be picked up and parsed, eg. in Drupal's settings.php.
# From within a dplsh session authorized to use your ssh keys:

# 1. Authenticate against the cluster and lagoon
$ task cluster:auth
$ task lagoon:cli:config

# 2. Refresh your Lagoon token.
$ lagoon login

# 3. Then get the project id by listing your projects
$ lagoon list projects

# 4. Optionally, if you wish to set a variable for a single environment - eg a
#    pull-request site, list the environments in the project
$ lagoon list environments -p <project name>

# 5. Finally, set the variable
# - Set the type_id to the id of the project or environment depending on
#   whether you want all or a single environment to access the variable
# - Set the type to PROJECT or ENVIRONMENT depending on the choice above
$ VARIABLE_TYPE_ID=<project or environment id> \
  VARIABLE_TYPE=<PROJECT or ENVIRONMENT> \
  VARIABLE_SCOPE=RUNTIME \
  VARIABLE_NAME=<your variable name> \
  VARIABLE_VALUE=<your variable value> \
  task lagoon:set:environment-variable

# If you get an "Invalid Auth Token" your token has probably expired, generate a
# new one with "lagoon login" and try again.
The variable will be available the next time the site is deployed by Lagoon. Use the deployment runbook to trigger a deployment, use a forced deployment if you do not have a new release to deploy.
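A hedged way to confirm that the variable has reached the site after the subsequent deployment; MY_VARIABLE is a placeholder for the variable name you set, and the namespace follows the usual <sitename>-<branchname> pattern.

# Print the value of the variable from within the site's cli deployment.
kubectl -n <sitename>-main exec deployment/cli -- printenv MY_VARIABLE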
"},{"location":"dpl-platform/runbooks/update-upgrade-status/","title":"Update the support workload upgrade status","text":""},{"location":"dpl-platform/runbooks/update-upgrade-status/#when-to-use","title":"When to use","text":"
When you need to update the support workload version sheet.
Run dplsh to extract the current and latest version for all support workloads
# First authenticate against the cluster
task cluster:auth
# Then pull the status
task ops:get-versions
Then access the version status sheet and update the status from the output.
"},{"location":"dpl-platform/runbooks/upgrading-aks/","title":"Upgrading AKS","text":""},{"location":"dpl-platform/runbooks/upgrading-aks/#when-to-use","title":"When to use","text":"
When you want to upgrade Azure Kubernetes Service to a newer version.
We use Terraform to upgrade AKS. Should you need to do a manual upgrade, consult Azure's documentation on upgrading a cluster and on upgrading node pools. Be aware in both cases that the Terraform state needs to be brought into sync via some means, so this is not a recommended approach.
"},{"location":"dpl-platform/runbooks/upgrading-aks/#find-out-which-versions-of-kubernetes-an-environment-can-upgrade-to","title":"Find out which versions of kubernetes an environment can upgrade to","text":"
In order to find out which versions of kubernetes we can upgrade to, we need to use the following command:
task cluster:get-upgrades
This will output a table in which the column "Upgrades" lists the available upgrades for the highest available minor versions.
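If you need the same information outside of the task, it is available from the Azure CLI; a hedged equivalent (resource group and cluster name are placeholders for the environment's values) is:

# List the Kubernetes versions the cluster's control-plane can be upgraded to.
az aks get-upgrades --resource-group <resource-group> --name <cluster-name> --output table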
A Kubernetes cluster can at most be upgraded to the nearest minor version, which means you may be in a situation where you have several versions between you and the intended version.
Patch versions can be skipped, and AKS will accept a cluster being upgraded to a version that does not specify a patch version. So if you for instance want to go from 1.20.9 to 1.22.15, you can do 1.21, and then 1.22.15. When upgrading to 1.21 Azure will substitute the version with the highest available patch version, eg. 1.21.14.
You should now know which version(s) you need to upgrade to, and can continue to the actual upgrade.
"},{"location":"dpl-platform/runbooks/upgrading-aks/#ensuring-the-terraform-state-is-in-sync","title":"Ensuring the Terraform state is in sync","text":"
As we will be using Terraform to perform the upgrade, we want to make sure its state is in sync. Execute the following task and resolve any drift:
task infra:provision
"},{"location":"dpl-platform/runbooks/upgrading-aks/#upgrade-the-cluster","title":"Upgrade the cluster","text":"
Upgrade the control-plane:
Update the control_plane_version reference in infrastructure/environments/<environment>/infrastructure/main.tf and run task infra:provision to apply. You can skip patch versions, but you can only do one minor version at a time.
Monitor the upgrade as it progresses. A control-plane upgrade is usually performed in under 5 minutes.
Monitor via eg.
watch -n 5 kubectl version
Then upgrade the system, admin and application node-pools in that order one by one.
Update the pool_[name]_version reference in infrastructure/environments/<environment>/infrastructure/main.tf. The same rules apply for the version as with control_plane_version.
Monitor the upgrade as it progresses. Expect the provisioning of, and workload scheduling to, a single node to take about 5-10 minutes. In particular, be aware that the admin node-pool where Harbor runs has a tendency to take a long time, as the Harbor pvcs are slow to migrate to the new node.
Monitor via eg.
watch -n 5 kubectl get nodes
"},{"location":"dpl-platform/runbooks/upgrading-lagoon/","title":"Upgrading Lagoon","text":""},{"location":"dpl-platform/runbooks/upgrading-lagoon/#when-to-use","title":"When to use","text":"
When there is a need to upgrade Lagoon to a new patch or minor version.
A running dplsh launched from ./infrastructure with DPLPLAT_ENV set to the platform environment name.
Knowledge about the version of Lagoon you want to upgrade to.
You can extract the version (= chart version) and appVersion (= Lagoon release version) for the lagoon-remote / lagoon-core charts via the following commands (replace lagoon-core with lagoon-remote if necessary).
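A hedged sketch of such a lookup using the public Lagoon chart repository; the repository URL is an assumption, so verify it against the official Lagoon documentation before relying on it.

# Add the Lagoon chart repository and list available lagoon-core chart/app versions.
helm repo add lagoon https://uselagoon.github.io/lagoon-charts/
helm repo update
helm search repo lagoon/lagoon-core --versions | head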
Knowledge of any breaking changes or necessary actions that may affect the platform when upgrading. See chart release notes for all intermediate chart releases.
Backup the API and Keycloak dbs as described in the official documentation
Bump the chart version VERSION_LAGOON_CORE in infrastructure/environments/<env>/lagoon/lagoon-versions.env
Perform a helm diff
DIFF=1 task lagoon:provision:core
Perform the actual upgrade
task lagoon:provision:core
Upgrade Lagoon remote
Bump the chart version VERSION_LAGOON_REMOTE in infrastructure/environments/dplplat01/lagoon/lagoon-versions.env
Perform a helm diff
DIFF=1 task lagoon:provision:remote
Perform the actual upgrade
task lagoon:provision:remote
Take note in the output from Helm of any CRD updates that may be required
"},{"location":"dpl-platform/runbooks/upgrading-support-workloads/","title":"Upgrading Support Workloads","text":""},{"location":"dpl-platform/runbooks/upgrading-support-workloads/#when-to-use","title":"When to use","text":"
When you want to upgrade support workloads in the cluster. This includes:
Cert-manager
Grafana
Harbor
Ingress Nginx
K8up
Loki
Minio
Prometheus
Promtail
This document contains general instructions for how to upgrade support workloads, followed by specific instructions for each workload (linked above).
Identify the version you want to bump in the environment/configuration directory, e.g. for dplplat01 infrastructure/environments/dplplat01/configuration/versions.env. The file contains links to the relevant Artifact Hub pages for the individual projects and can often be used to determine both the latest version and details about the chart, such as how a specific manifest is used (a sketch of an entry is shown after this list). You can find the latest version of the support workload in the Version status sheet which itself is updated via the procedure described in the Update Upgrade status runbook.
Consult any relevant changelog to determine if the upgrade will require any extra work beside the upgrade itself. To determine which version to look up in the changelog, be aware of the difference between the chart version and the app version. We currently track the chart versions, and not the actual version of the application inside the chart. In order to determine the change in appVersion between chart releases you can do a diff between releases, and keep track of the appVersion property in the chart's Chart.yaml. Using Grafana as an example: https://github.com/grafana/helm-charts/compare/grafana-6.55.1...grafana-6.56.0. The exact way to do this differs from chart to chart, and is documented in the Specific procedures and tests below.
Carry out any chart-specific preparations described in the charts update-procedure. This could be upgrading a Custom Resource Definition that the chart does not upgrade.
Identify the relevant task in the main Taskfile for upgrading the workload. For example, for cert-manager the task is called support:provision:cert-manager. Run the task with DIFF=1, e.g. DIFF=1 task support:provision:cert-manager.
If the diff looks good, run the task without DIFF=1, e.g. task support:provision:cert-manager.
Then proceed to perform the verification test for the relevant workload. See the following section for known verification tests.
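As a rough sketch, an entry in versions.env pairs an Artifact Hub link with the chart version in use; the variable name and version below are assumptions, not actual values from the file:
# https://artifacthub.io/packages/helm/cert-manager/cert-manager\nVERSION_CERT_MANAGER=v1.11.0\n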
"},{"location":"dpl-platform/runbooks/upgrading-support-workloads/#specific-producedures-and-tests","title":"Specific producedures and tests","text":"
The project versions its Helm chart together with the app itself, so simply use the chart version in the following checks.
Cert Manager keeps Release notes for the individual minor releases of the project. Consult these for every upgrade past a minor version.
As both are versioned in the same repository, simply use the following link for looking up the release notes for a specific patch release, replacing the example tag with the version you wish to upgrade to.
The note will most likely be empty. Now diff the chart version with the current version, again replacing the versions with those relevant for your releases.
As the repository contains a lot of charts, you will need to do a bit of digging. Look for at least charts/grafana/Chart.yaml which can tell you the app version.
With the app-version in hand, you can now look at the release notes for the grafana app itself.
Verify that the Grafana pods are all running and healthy.
kubectl get pods --namespace grafana\n
Access the Grafana UI and see if you can log in. If you do not have a user but have access to read secrets in the grafana namespace, you can retrieve the admin password with the following command:
# Password for admin\nUI_NAME=grafana task ui-password\n\n# Hostname for grafana\nkubectl -n grafana get -o jsonpath=\"{.spec.rules[0].host}\" ingress grafana ; echo\n
An overview of the chart versions can be retrieved from GitHub; the chart does not have a changelog.
Link for comparing two chart releases: https://github.com/goharbor/harbor-helm/compare/v1.10.1...v1.12.0
Having identified the relevant appVersions, consult the list of Harbor releases to see a description of the changes included in the release in question. If this approach fails you can also use the diff-command described below to determine which image-tags are going to change and thus determine the version delta.
Harbor is a quite active project, so it may make sense mostly to pay attention to minor/major releases and ignore the changes made in patch-releases.
Harbor documents the general upgrade procedure for non-Kubernetes upgrades for minor versions on their website. This documentation is of little use to our Kubernetes setup, but it can be useful to consult the page for minor/major version upgrades to see if there are any special considerations to be made.
The Harbor chart repository has upgrade instructions as well. The instructions ask you to do a snapshot of the database and back up the TLS secret. Snapshotting the database is currently out of scope, but could be a thing that is considered in the future. The TLS secret is handled by cert-manager, and as such does not need to be backed up.
With knowledge of the app version, you can now update versions.env as described in the General Procedure section, diff to see the changes that are going to be applied, and finally do the actual upgrade.
When Harbor seems to be working, you can verify that the UI is working by accessing https://harbor.lagoon.dplplat01.dpl.reload.dk/. The password for the user admin can be retrieved with the following command:
UI_NAME=harbor task ui-password\n
If everything looks good, you can consider deploying a site. One way to do this is to identify an existing site of low importance and re-deploy it. A re-deploy will require Lagoon to both fetch and push images. Instructions for how to access the Lagoon UI are out of scope of this document, but can be found in the runbook for running a Lagoon task. In this case you are looking for the \"Deploy\" button on the site's \"Deployments\" tab.
When working with the ingress-nginx chart we have at least 3 versions to keep track of.
The chart version tracks the version of the chart itself. The chart's appVersion tracks a controller application which dynamically configures a bundled nginx. The version of nginx used is determined by configuration files in the controller, amongst others the ingress-nginx.yaml.
Link for diffing two chart versions: https://github.com/kubernetes/ingress-nginx/compare/helm-chart-4.6.0...helm-chart-4.6.1
The project keeps a quite good changelog for the chart
Link for diffing two controller versions: https://github.com/kubernetes/ingress-nginx/compare/controller-v1.7.1...controller-v1.7.0
Consult the individual GitHub releases for descriptions of what has changed in the controller for a given release.
With knowledge of the app version, you can now update versions.env as described in the General Procedure section, diff to see the changes that are going to be applied, and finally do the actual upgrade.
The ingress-controller is very central to the operation of all publicly accessible parts of the platform. Its area of responsibility is, on the other hand, quite narrow, so it is easy to verify that it is working as expected.
First verify that pods are coming up
kubectl -n ingress-nginx get pods\n
Then verify that the ingress-controller is able to serve traffic. This can be done by accessing the UI of one of the apps that are deployed in the platform.
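A quick spot check from the command line could look like this (the hostname is a placeholder for one of the platform application ingresses):
curl -sI https://<app-hostname> | head -n 1\n# Expect a successful HTTP status line such as HTTP/2 200\n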
The Loki chart is versioned separately from Loki. The version of Loki installed by the chart is tracked by its appVersion. So when upgrading, you should always look at the diff between both the chart and app version.
The general upgrade procedure will give you the chart version. Access the following link to get the release notes for the chart, remembering to insert your version:
Notice that the Loki helm-chart is maintained in the same repository as Loki itself. You can find the diff between the chart versions by comparing two chart release tags.
As the repository contains changes to Loki itself as well, you should seek out the file production/helm/loki/Chart.yaml which contains the appVersion that defines which version of Loki a given chart release installs.
Direct link to the file for a specific tag: https://github.com/grafana/loki/blob/helm-loki-3.3.1/production/helm/loki/Chart.yaml
With the app-version in hand, you can now look at the release notes for Loki to see what has changed between the two appVersions.
Last but not least, the Loki project maintains an upgrade guide that can be found here: https://grafana.com/docs/loki/latest/upgrading/
List pods in the loki namespace to see if the upgrade has completed successfully.
kubectl --namespace loki get pods\n
Next verify that Loki is still accessible from Grafana and collects logs by logging in to Grafana. Then verify the Loki datasource, and search out some logs for a site. See the validation steps for Grafana for instructions on how to access the Grafana UI.
The chart is developed alongside a number of other community-driven Prometheus-related charts in https://github.com/prometheus-community/helm-charts.
This means that the following comparison between two releases of the chart will also contain changes to a number of other charts. You will have to look for changes in the charts/kube-prometheus-stack/ directory.
The Readme for the chart contains a good Upgrading Chart section that describes things to be aware of when upgrading between specific minor and major versions. The same documentation can also be found on artifact hub.
Consult the section that matches the version you are upgrading from and to. Be aware that upgrades past a minor version often require a CRD update. The CRDs may have to be applied before you can do the diff and upgrade. Once the CRDs have been applied you are committed to the upgrade, as there is no simple way to downgrade the CRDs.
List pods in the prometheus namespace to see if the upgrade has completed successfully. You should expect to see two types of workloads: first a single promstack-kube-prometheus-operator pod that runs Prometheus, and then a promstack-prometheus-node-exporter pod for each node in the cluster.
kubectl --namespace prometheus get pods -l \"release=promstack\"\n
As the Prometheus UI is not directly exposed, the easiest way to verify that Prometheus is running is to access the Grafana UI and verify that the dashboards that use Prometheus are working, or as a minimum that the Prometheus datasource passes validation. See the validation steps for Grafana for instructions on how to access the Grafana UI.
The Promtail chart is versioned separately from Promtail, which itself is a part of Loki. The version of Promtail installed by the chart is tracked by its appVersion. So when upgrading, you should always look at the diff between both the chart and app version.
The general upgrade procedure will give you the chart version. Access the following link to get the release notes for the chart, remembering to insert your version:
The note will most likely be empty. Now diff the chart version with the current version, again replacing the versions with those relevant for your releases.
As the repository contains a lot of charts, you will need to do a bit of digging. Look for at least charts/promtail/Chart.yaml which can tell you the app version.
With the app-version in hand, you can now look at the release notes for Loki (which promtail is part of). Look for notes in the Promtail sections of the release notes.
List pods in the promtail namespace to see if the upgrade has completed successfully.
kubectl --namespace promtail get pods\n
With the pods running, you can verify that the logs are being collected by seeking out logs via Grafana. See the validation steps for Grafana for details on how to access the Grafana UI.
You can also inspect the logs of the individual pods via
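For example (the pod name is a placeholder; take one from the pod listing above):
kubectl --namespace promtail logs <promtail-pod-name>\n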
An authorized Azure az CLI. The version should match the version found in FROM mcr.microsoft.com/azure-cli:version in the dplsh Dockerfile. You can choose to authorize the az CLI from within dplsh (see the sketch after the launch example below), but your session will only last as long as the shell session. The user you authorize as must have permission to read the Terraform state from the Terraform setup, and Contributor permissions on the environment's resource group in order to provision infrastructure.
dplsh.sh symlinked into your path as dplsh, see Launching the Shell (optional, but assumed below)
# Launch dplsh.\n$ cd infrastructure\n$ dplsh\n\n# 1. Set an environment,\n# export DPLPLAT_ENV=<platform environment name>\n# e.g.\n$ export DPLPLAT_ENV=dplplat01\n\n# 2a. Authenticate against AKS, needed by infrastructure and Lagoon tasks\n$ task cluster:auth\n\n# 2b. If you want to use the Lagoon CLI\n# The Lagoon CLI is authenticated via ssh-keys. DPLSH will mount your .ssh\n# folder from your homedir, but if your keys are passphrase protected, we need\n# to unlock them.\n$ eval $(ssh-agent); ssh-add\n# Then authorize the lagoon cli\n$ task lagoon:cli:config\n
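If you choose to authorize the az CLI from within dplsh, as mentioned in the prerequisites, a minimal sketch looks like this (the subscription id is a placeholder):
# Inside dplsh - the login only lasts for this shell session.\n$ az login\n# Select the subscription holding the platform resources if you have several.\n$ az account set --subscription \"<subscription-id>\"\n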
Before you can install the project you need to create the file ~/.npmrc to access the GitHub package registry using a personal access token. The token must be created with the required scopes: repo and read:packages.
If you have npm installed locally this can be achieved by running the following command and using the token when prompted for password.
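The command itself is not reproduced here; a sketch of an interactive npm login against the GitHub package registry could look like this (the scope is an assumption based on the project's package naming, and newer npm versions may need --auth-type=legacy to get a password prompt):
npm login --scope=@danskernesdigitalebibliotek --registry=https://npm.pkg.github.com\n# Use your GitHub username and the personal access token as the password.\n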
"},{"location":"dpl-react/#alternative-without-docker","title":"Alternative without Docker","text":"
Add 127.0.0.1 dpl-react.docker to your /etc/hosts file
Ensure that your node version matches what is specified in package.json.
Install dependencies: yarn install
Start storybook sudo yarn start:storybook:dev
If you need to log in through Adgangsplatformen, you need to change your URL to http://dpl-react.docker/ instead of http://localhost. This avoids login errors.
"},{"location":"dpl-react/#step-debugging-in-visual-studio-code-no-docker","title":"Step Debugging in Visual Studio Code (no docker)","text":"
If you want to enable step debugging you need to:
Copy .vscode.example/launch.json into .vscode/
Mark 1 or more breakpoints on a line in the left gutter on an open file
In the top menu in VS Code choose: Run -> Start Debugging
Access tokens must be retrieved from Adgangsplatformen, a single sign-on solution for public libraries in Denmark, and OpenPlatform, an API for Danish libraries.
Usage of these systems requires a valid client id and secret which must be obtained from your library partner or directly from DBC, the company responsible for running Adgangsplatformen and OpenPlatform.
This project includes a client id that matches the Storybook setup which can be used for development purposes. You can use the /auth story to sign in to Adgangsplatformen for the Storybook context.
(Note: if you enter Adgangsplatformen again after signing in, you will get signed out and need to log in again. This is not a bug, as you stay logged in otherwise.)
To test the apps that are indifferent to whether the user is authenticated or not, it is possible to set a library token via the library component in Storybook. Workflow:
Retrieve a library token via OpenPlatform
Insert the library token in the Library Token story in storybook
"},{"location":"dpl-react/#standard-and-style","title":"Standard and style","text":""},{"location":"dpl-react/#javascript-jsx","title":"JavaScript + JSX","text":"
For static code analysis we make use of the Airbnb JavaScript Style Guide and for formatting we make use of Prettier with the default configuration. The above choices have been influenced by a multitude of factors:
Historically Drupal core has been making use of the Airbnb JavaScript Style Guide.
Airbnb's standard is comparatively the best known and one of the most used in the JavaScript coding standard landscape.
This makes future adoption easier for onboarding contributors and support is to be expected for a long time.
"},{"location":"dpl-react/#named-functions-vs-anonymous-arrow-functions","title":"Named functions Vs. Anonymous arrow functions","text":"
AirBnB's only guideline towards this is that anonymous arrow functions are preferred over the normal anonymous function notation.
When you must use an anonymous function (as when passing an inline callback), use arrow function notation.
Why? It creates a version of the function that executes in the context of this, which is usually what you want, and is a more concise syntax.
Why not? If you have a fairly complicated function, you might move that logic out into its own named function expression.
Reference
This project sticks to the above guideline as well. If we need to pass a function as part of a callback or in a promise chain, and we on top of that need to pass some contextual variables that are not passed implicitly from either the callback or the previous link in the promise chain, we want to make use of an anonymous arrow function as our default.
This comes with the built-in disclaimer that if an anonymous function isn't required the implementer should heavily consider moving the logic out into its own named function expression.
The named function is primarily desired because it is easier to debug in stack traces.
"},{"location":"dpl-react/#create-a-new-application","title":"Create a new application","text":"1. Create a new application component
// ./src/apps/my-new-application/my-new-application.jsx\nimport React from \"react\";\nimport PropTypes from \"prop-types\";\nexport function MyNewApplication({ text }) {\nreturn (\n<h2>{text}</h2>\n);\n}\nMyNewApplication.defaultProps = {\ntext: \"The fastest man alive!\"\n};\nMyNewApplication.propTypes = {\ntext: PropTypes.string\n};\nexport default MyNewApplication;\n
2. Create the entry component
// ./src/apps/my-new-application/my-new-application.entry.jsx\nimport React from \"react\";\nimport PropTypes from \"prop-types\";\nimport MyNewApplication from \"./my-new-application\";\n// The props of an entry is all of the data attributes that were\n// set on the DOM element. See the section on \"Naive app mount.\" for\n// an example.\nexport function MyNewApplicationEntry(props) {\nreturn <MyNewApplication text='Might be from a server?' />;\n}\nexport default MyNewApplicationEntry;\n
3. Create the mount
// ./src/apps/my-new-application/my-new-application.mount.js\nimport addMount from \"../../core/addMount\";\nimport MyNewApplication from \"./my-new-application.entry\";\naddMount({ appName: \"my-new-application\", app: MyNewApplication });\n
4. Add a story for local development
// ./src/apps/my-new-application/my-new-application.dev.jsx\nimport React from \"react\";\nimport MyNewApplicationEntry from \"./my-new-application.entry\";\nimport MyNewApplication from \"./my-new-application\";\nexport default { title: \"Apps|My new application\" };\nexport function Entry() {\n// Testing the version that will be shipped.\nreturn <MyNewApplicationEntry />;\n}\nexport function WithoutData() {\n// Play around with the application itself without server side data.\nreturn <MyNewApplication />;\n}\n
5. Run the development environment
yarn dev\n
OR depending on your dev environment (docker or not)
sudo yarn dev\n
Voila! Your browser should have opened and a Storybook environment is ready for you to tinker around.
initial: Initial state for applications that require some sort of initialization, such as making a request to see if a material can be ordered, before rendering the order button. Errors in initialization can go directly to the failed state, or add custom states for communicating different error conditions to the user. Should render either nothing or as a skeleton/spinner/message.
ready: The general \"ready state\". Applications that don't need initialization (a generic button for instance) can use ready as the initial state set in the useState call. This is basically the main waiting state.
processing: The application is taking some action. For buttons this will be the state used when the user has clicked the button and the application is waiting for reply from the back end. More advanced applications may use it while doing backend requests, if reflecting the processing in the UI is desired. Applications using optimistic feedback will render this state the same as the finished state.
failed: Processing failed. The application renders an error message.
finished: End state for one-shot actions. Communicates success to the user.
Applications can use additional states if desired, but prefer the above if appropriate.
"},{"location":"dpl-react/#style-your-application","title":"Style your application","text":"1. Create an application specific stylesheet
// ./src/apps/my-new-application/my-new-application.jsx\nimport React from \"react\";\nimport PropTypes from \"prop-types\";\nexport function MyNewApplication({ text }) {\nreturn (\n<h2 className='warm'>{text}</h2>\n);\n}\nMyNewApplication.defaultProps = {\ntext: \"The fastest man alive!\"\n};\nMyNewApplication.propTypes = {\ntext: PropTypes.string\n};\nexport default MyNewApplication;\n
3. Import the scss into your story
// ./src/apps/my-new-application/my-new-application.dev.jsx\nimport React from \"react\";\nimport MyNewApplicationEntry from \"./my-new-application.entry\";\nimport MyNewApplication from \"./my-new-application\";\nimport './my-new-application.scss';\nexport default { title: \"Apps|My new application\" };\nexport function Entry() {\n// Testing the version that will be shipped.\nreturn <MyNewApplicationEntry />;\n}\nexport function WithoutData() {\n// Play around with the application itself without server side data.\nreturn <MyNewApplication />;\n}\n
Cowabunga! You've now got styling in your application.
"},{"location":"dpl-react/#style-using-the-dpl-design-system","title":"Style using the DPL design system","text":"
This project includes styling created by its sister repository - the design system - as an npm package.
By default the project should include a release of the design system matching the current state of the project.
To update to the latest stable release of the design system, run a command like the one sketched below:
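The package name below is an assumption based on the project's naming; adjust it to the actual design system package:
yarn add @danskernesdigitalebibliotek/dpl-design-system@latest\n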
This command installs the latest released version of the package. Whenever a new version of the design system package is released, it is necessary to reinstall the package in this project using the same command to get the newest styling, because yarn adds a specific version number to the package name in package.json.
If you need to work with published but unreleased code from a specific branch of the design system, you can also use the branch name as the tag for the npm package, replacing all special characters with dashes (-).
Example: To use the latest styling from a branch in the design system called feature/availability-label, run a command like the one sketched below:
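Assuming the same package name as above:
yarn add @danskernesdigitalebibliotek/dpl-design-system@feature-availability-label\n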
If the component is simple enough to be a primitive you would use in multiple occasions, it's called an 'atom', such as a button or a link. If it's more specific than that and to be used across apps we just call it a component. An example would be some type of media presented alongside a header and some text.
The process when creating an atom or a component is more or less similar, but some structural differences might be needed.
"},{"location":"dpl-react/#creating-an-atom","title":"Creating an atom","text":"1. Create the atom
// ./src/components/atoms/my-new-atom/my-new-atom.jsx\nimport React from \"react\";\nimport PropTypes from 'prop-types';\n/**\n * A simple button.\n *\n * @export\n * @param {object} props\n * @returns {ReactNode}\n */\nexport function MyNewAtom({ className, children }) {\nreturn <button className={`btn ${className}`}>{children}</button>;\n}\nMyNewAtom.propTypes = {\nclassName: PropTypes.string,\nchildren: PropTypes.node.isRequired\n}\nMyNewAtom.defaultProps = {\nclassName: \"\"\n}\nexport default MyNewAtom;\n
// ./src/components/atoms/my-new-atom/my-new-atom.dev.jsx\nimport React from \"react\";\nimport MyNewAtom from \"./my-new-atom\";\nexport default { title: \"Atoms|My new atom\" };\nexport function WithText() {\nreturn <MyNewAtom>Click me!</MyNewAtom>;\n}\n
5. Import the atom into the applications or other components where you would want to use it
// ./src/apps/my-new-application/my-new-application.jsx\nimport React, {Fragment} from \"react\";\nimport PropTypes from \"prop-types\";\nimport MyNewAtom from \"../../components/atoms/my-new-atom/my-new-atom\"\nexport function MyNewApplication({ text }) {\nreturn (\n<Fragment>\n<h2 className='warm'>{text}</h2>\n<MyNewAtom className='additional-class'>Click me!</MyNewAtom>\n</Fragment>\n);\n}\nMyNewApplication.defaultProps = {\ntext: \"The fastest man alive!\"\n};\nMyNewApplication.propTypes = {\ntext: PropTypes.string\n};\nexport default MyNewApplication;\n
Finito! You now know how to share code across applications
"},{"location":"dpl-react/#creating-a-component","title":"Creating a component","text":"
Repeat all of the same steps as with an atom but place it in its own directory inside components.
Such as ./src/components/my-new-component/my-new-component.jsx
"},{"location":"dpl-react/#editor-example-configuration","title":"Editor example configuration","text":"
If you use Visual Studio Code we provide some easy to use and nice defaults for this project. They are located in .vscode.example. Simply rename the directory from .vscode.example to .vscode and you are good to go. This overwrites your global user settings for this workspace and suggests some extensions you might want.
So let's say you wanted to make use of an application within an existing HTML page such as what might be generated serverside by platforms like Drupal, WordPress etc.
For this use case you should download the dist.zip package from the latest release of the project and unzip it somewhere within the web root of your project. The package contains a set of artifacts needed to use one or more applications within an HTML page.
HTML Example A simple example of the required artifacts and how they are used looks like this:
<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"UTF-8\">\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n<meta http-equiv=\"X-UA-Compatible\" content=\"ie=edge\">\n<title>Naive mount</title>\n<!-- Include CSS files to provide default styling -->\n<link rel=\"stylesheet\" href=\"/dist/components.css\">\n</head>\n<body>\n<b>Here be dragons!</b>\n<!-- Data attributes will be camelCased on the react side aka.\n props.errorText and props.text -->\n<div data-dpl-app='add-to-checklist' data-text=\"Chromatic dragon\"\ndata-error-text=\"Minor mistake\"></div>\n<div data-dpl-app='a-none-existing-app'></div>\n<!-- Load order of scripts is of importance here -->\n<script src=\"/dist/runtime.js\"></script>\n<script src=\"/dist/bundle.js\"></script>\n<script src=\"/dist/mount.js\"></script>\n<!-- After the necessary scripts you can start loading applications -->\n<script src=\"/dist/add-to-checklist.js\"></script>\n<script>\n// For making successful requests to the different services we need one or\n// more valid tokens.\nwindow.dplReact.setToken(\"user\",\"XXXXXXXXXXXXXXXXXXXXXX\");\nwindow.dplReact.setToken(\"library\",\"YYYYYYYYYYYYYYYYYYYYYY\");\n// If this function isn't called no apps will display.\n// An app will only be displayed if there is a container for it\n// and a corresponding application loaded.\nwindow.dplReact.mount(document);\n</script>\n</body>\n</html>\n
As a minimum you will need the runtime.js and bundle.js. For styling of atoms and components you will need to import components.css.
Each application also has its own JavaScript artifact and it might have a CSS artifact as well. Such as add-to-checklist.js and add-to-checklist.css.
To mount the application you need an HTML element with the correct data attribute.
<div data-dpl-app='add-to-checklist'></div>\n
The name of the data attribute should be data-dpl-app and the value should be the name of the application - the value of the appName parameter assigned in the application .mount.js file.
"},{"location":"dpl-react/#data-attributes-and-props","title":"Data attributes and props","text":"
As stated above, every application needs the corresponding data-dpl-app attribute to even be mounted and shown on the page. Additional data attributes can be passed if necessary. Examples would be contextual ids etc. Normally these would be passed in by the serverside platform e.g. Drupal, Wordpress etc.
<div data-dpl-app='add-to-checklist' data-id=\"870970-basis:54172613\"\ndata-error-text=\"A mistake was made\"></div>\n
The above data-id would be accessed as props.id and data-error-text as props.errorText in the entrypoint of an application.
Example
// ./src/apps/my-new-application/my-new-application.entry.jsx\nimport React from \"react\";\nimport PropTypes from \"prop-types\";\nimport MyNewApplication from './my-new-application.jsx';\nexport function MyNewApplicationEntry({ id }) {\nreturn (\n<MyNewApplication\n// 870970-basis:54172613\nid={id}\n/>\n);\n}\nexport default MyNewApplicationEntry;\n
To fake this in our development environment we need to pass these same data attributes into our entrypoint.
Example
// ./src/apps/my-new-application/my-new-application.dev.jsx\nimport React from \"react\";\nimport MyNewApplicationEntry from \"./my-new-application.entry\";\nimport MyNewApplication from \"./my-new-application\";\nexport default { title: \"Apps|My new application\" };\nexport function Entry() {\n// Testing the version that will be shipped.\nreturn <MyNewApplicationEntry id=\"870970-basis:54172613\" />;\n}\nexport function WithoutData() {\n// Play around with the application itself without server side data.\nreturn <MyNewApplication />;\n}\n
"},{"location":"dpl-react/#extending-the-project","title":"Extending the project","text":"
If you want to extend this project - either by introducing new components or expand the functionality of the existing ones - and your changes can be implemented in a way that is valuable to users in general, please submit pull requests.
Even if that is not the case and you have special needs the infrastructure of the project should also be helpful to you.
In such a situation you should fork this project and extend it to your own needs by implementing new applications. New applications can reuse various levels of infrastructure provided by the project such as:
Integration with various webservices
User authentication and token management
Visual atoms or components
Visual representations of existing applications
Styling using SCSS
Test infrastructure
Application mounting
Once the customization is complete the result can be packaged for distribution by pushing the changes to the forked repository:
Changes pushed to the master branch of the forked repository will automatically update the latest release of the fork.
Tags pushed to the forked repository will also be published as new releases in the fork.
The result can be used in the same ways as the original project.
Campaigns are elements that are shown on the search result page above the search result list. There are three types of campaigns:
Full campaigns - containing an image and some text.
Text-only campaigns - they don't show any images.
Image-only campaigns - they don't show any text.
However, they are only shown in case certain criteria are met. We check for this by contacting the dpl-cms API.
"},{"location":"dpl-react/campaigns/#how-campaign-setup-works-in-dpl-cms","title":"How campaign setup works in dpl-cms","text":"
Dpl-cms is a CMS based on Drupal where the system administrators can set up campaigns they want to show to their users. Drupal also allows the CMS to act as an API endpoint that we can then contact from our apps.
The cms administrators can specify the content (image, text) and the visibility criteria for each campaign they create. The visibility criteria are based on search filter facets. Does that sound familiar? Yes, we use another API to get that very data in THIS project - in the search result app. The facets differ based on the search string the user uses for their search.
As an example, the dpl-cms admin could wish to show a Harry Potter related campaign to all the users whose search string retrieves search facets which have \"Harry Potter\" as one of the most relevant subjects. Campaigns in dpl-cms can use triggers such as subject, main language, etc.
An example code snippet for retrieving a campaign from our react apps would then look something like this:
// Creating a state to store the campaign data in.\nconst [campaignData, setCampaignData] = useState<CampaignMatchPOST200 | null>(\nnull\n);\n// Retrieving facets for the campaign based on the search query and existing\n// filters.\nconst { facets: campaignFacets } = useGetFacets(q, filters);\n// Using the campaign hook generated by Orval from src/core/dpl-cms/dpl-cms.ts\n// in order to get the mutate function that lets us retrieve campaigns.\nconst { mutate } = useCampaignMatchPOST();\n// Only fire the campaign data call if campaign facets or the mutate function\n// change their value.\nuseDeepCompareEffect(() => {\nif (campaignFacets) {\nmutate(\n{\ndata: campaignFacets as CampaignMatchPOSTBodyItem[]\n},\n{\nonSuccess: (campaign) => {\nsetCampaignData(campaign);\n},\nonError: () => {\n// Handle error.\n}\n}\n);\n}\n}, [campaignFacets, mutate]);\n
"},{"location":"dpl-react/campaigns/#showing-campaigns-in-dpl-react-when-in-development-mode","title":"Showing campaigns in dpl-react when in development mode","text":"
You first need to make sure to have a campaign set up in your locally running dpl-cms (run this repo locally). Then, in order to see campaigns locally in dpl-react in development mode, you will most likely need a browser plugin such as Google Chrome's \"Allow CORS: Access-Control-Allow-Origin\" in order to bypass the CORS policy for API data calls.
The following guidelines describe best practices for developing code for React components for the Danish Public Libraries CMS project. The guidelines should help achieve:
A stable, secure and high quality foundation for building and maintaining client-side JavaScript components for library websites
Consistency across multiple developers participating in the project
The best possible conditions for sharing components between library websites
The best possible conditions for the individual library website to customize configuration and appearance
Contributions to the DDB React project will be reviewed by members of the Core team. These guidelines should inform contributors about what to expect in such a review. If a review comment cannot be traced back to one of these guidelines it indicates that the guidelines should be updated to ensure transparency.
The project follows the Airbnb JavaScript Style Guide and Airbnb React/JSX Style Guide. This choice is based on multiple factors:
Historically the community of developers working with DDB has ties to the Drupal project. Drupal has adopted the Airbnb JavaScript Style Guide so this choice should ensure consistency between the two projects.
Airbnb's standard is one of the best known and most used in the JavaScript coding standard landscape.
Airbnb\u2019s standard is both comprehensive and well documented.
Airbnb\u2019s standards cover both JavaScript in general and React/JSX specifically. This avoids potential conflicts between multiple standards.
The following lists significant areas where the project either intentionally expands or deviates from the official standards or areas which developers should be especially aware of.
The default language for all code and comments is English.
Components must be compatible with the latest stable version of the following browsers:
Desktop
Microsoft Edge
Google Chrome
Safari
Firefox
Mobile
Google Chrome
Safari
Firefox
Samsung Browser
"},{"location":"dpl-react/code_guidelines/#javascript","title":"JavaScript","text":""},{"location":"dpl-react/code_guidelines/#named-functions-vs-anonymous-arrow-functions","title":"Named functions vs. anonymous arrow functions","text":"
AirBnB's only guideline towards this is that anonymous arrow function notation is preferred over the normal anonymous function notation.
This project sticks to the above guideline as well. If we need to pass a function as part of a callback or in a promise chain and we on top of that need to pass some contextual variables that are not passed implicitly from either the callback or the previous link in the promise chain we want to make use of an anonymous arrow function as our default.
This comes with the built-in disclaimer that if an anonymous function isn't required the implementer should heavily consider moving the logic out into its own named function expression.
The named function is primarily desired because it is easier to debug in stack traces.
Configuration must be passed as props for components. This allows the host system to modify how a component works when it is inserted.
All components should be provided with skeleton screens. This ensures that the user interface reflects the final state even when data is loaded asynchronously. This reduces load time frustration.
Components should be optimistic. Unless we have reason to believe that an operation may fail we should provide fast response to users.
All interface text must be implemented as props for components. This allows the host system to provide a suitable translation/version when using the component.
All classes must have the dpl- prefix. This makes them distinguishable from classes provided by the host system.
Class names should follow the Block-Element-Modifier architecture.
Components must use and/or provide a default style sheet which at least provides a minimum of styling showing the purpose of the component.
Elements must be provided with meaningful classes even though they are not targeted by the default style sheet. This helps host systems provide additional styling of the components. Consider how the component consists of blocks and elements with modifiers and how these can be nested within each other.
Components must use SCSS for styling. The project uses PostCSS and PostCSS-SCSS within Webpack for processing.
Components must provide configuration to set a top headline level for the component. This helps provide a proper document outline to ensure the accessibility of the system.
"},{"location":"dpl-react/code_guidelines/#third-party-code","title":"Third party code","text":"
The project uses Yarn as a package manager to handle code which is developed outside the project repository. Such code must not be committed to the Core project repository.
When specifying third party package versions the project follows these guidelines:
Use the ^ next significant release operator for packages which follow semantic versioning.
The version specified must be the latest known working and secure version. We do not want accidental downgrades.
We want to allow easy updates to all working releases within the same major version.
Packages which are not intended to be executed at runtime in the production environment should be marked as development dependencies.
Components must reuse existing dependencies in the project before adding new ones which provide similar functionality. This ensures consistency and avoids unnecessary increases in the package size of the project.
The reasoning behind the choice of key dependencies has been documented in the architecture directory.
"},{"location":"dpl-react/code_guidelines/#altering-third-party-code","title":"Altering third party code","text":"
The project uses patches rather than forks to modify third party packages. This makes maintenance of modified packages easier and avoids a collection of forked repositories within the project.
Use an appropriate method for the corresponding package manager for managing the patch.
Patches should be external by default. In rare cases it may be needed to commit them as a part of the project.
When providing a patch you must document the origin of the patch, e.g. through a URL in a commit comment or preferably in the package manager configuration for the project.
Code comments which describe what an implementation does should only be used for complex implementations usually consisting of multiple loops, conditional statements etc.
Inline code comments should focus on why an unusual implementation has been implemented the way it is. This may include references to such things as business requirements, odd system behavior or browser inconsistencies.
Commit messages in the version control system help all developers understand the current state of the code base, how it has evolved and the context of each change. This is especially important for a project which is expected to have a long lifetime.
Commit messages must follow these guidelines:
Each line must not be more than 72 characters long
The first line of your commit message (the subject) must contain a short summary of the change. The subject should be kept around 50 characters long.
The subject must be followed by a blank line
Subsequent lines (the body) should explain what you have changed and why the change is necessary. This provides context for other developers who have not been part of the development process. The larger the change the more description in the body is expected.
If the commit is a result of an issue in a public issue tracker, platform.dandigbib.dk, then the subject must start with the issue number followed by a colon (:). If the commit is a result of a private issue tracker then the issue id must be kept in the commit body.
When creating a pull request the pull request description should not contain any information that is not already available in the commit messages.
Developers are encouraged to read How to Write a Git Commit Message by Chris Beams.
The project aims to automate compliance checks as much as possible using static code analysis tools. This should make it easier for developers to check contributions before submitting them for review and thus make the review process easier.
The following tools play a key part here:
Eslint with the following rulesets and plugins:
Airbnb JavaScript Style Guide
Airbnb React/JSX Style Guide
Prettier
Cypress
Stylelint with the following rulesets and plugins
Recommended SCSS
Prettier
BEM support
In general all tools must be able to run locally. This allows developers to get quick feedback on their work.
Tools which provide automated fixes are preferred. This reduces the burden of keeping code compliant for developers.
Code which is to be exempt from these standards must be marked accordingly in the codebase - usually through inline comments (Eslint, Stylelint). This must also include a human readable reasoning. This ensures that deviations do not affect future analysis and the project should always pass through static analysis.
If there are discrepancies between the automated checks and the standards defined here then developers are encouraged to point this out so the automated checks or these standards can be updated accordingly.
After quite a lot of bad experiences with unstable tests and reading both the official documentation and articles about the best practices we have ended up with a recommendation of how to write the tests.
According to this article it is important to distinguish between commands and assertions. Commands are used in the beginning of a statement and yields a chainable element that can be followed by one or more assertions in the end.
So first we target an element. Next we can make one or more assertions on the element.
We have created some helper commands for targeting an element: getBySel, getBySelLike and getBySelStartEnd. They look for elements as advised by the Selecting Elements section from the Cypress documentation about best practices.
Example of a statement:
// Targeting.\ncy.getBySel(\"reservation-success-title-text\")\n// Assertion.\n.should(\"be.visible\")\n// Another assertion.\n.and(\"contain\", \"Material is available and reserved for you!\");\n
"},{"location":"dpl-react/code_guidelines/#writing-unit-tests","title":"Writing Unit Tests","text":"
We are using Vitest as framework for running unit tests. By using that we can test functions (and therefore also hooks) and classes.
"},{"location":"dpl-react/code_guidelines/#where-do-i-place-my-tests","title":"Where do I place my tests?","text":"
They have to be placed in src/tests/unit.
Or they can also be placed next to the code at the end of a file as described here.
export const sum = (...numbers: number[]) =>\nnumbers.reduce((total, number) => total + number, 0);\nif (import.meta.vitest) {\nconst { describe, expect, it } = import.meta.vitest;\ndescribe(\"sum\", () => {\nit(\"should sum numbers\", () => {\nexpect(sum(1, 2, 3)).toBe(6);\n});\n});\n}\n
In that way it helps us to test and mock unexported functions.
Error handling is something that is done on multiple levels, e.g. form validation, network/fetch error handling and runtime errors. You could also argue that the fact that the codebase is making use of TypeScript and code generation from the various http services (graphql/REST) belongs to the same idiom of error handling, in order to make the applications more robust.
Error boundaries were introduced in React 16 and make it possible to implement a \"catch all\" feature, catching \"uncaught\" errors and replacing the application with a component telling the users that something went wrong. It is meant to be a safety net so we can always tell the end user that something went wrong. The apps are being wrapped in the error boundary handling which makes it possible to catch thrown errors at runtime.
"},{"location":"dpl-react/error_handling/#fetch-and-error-boundary","title":"Fetch and Error Boundary","text":"
Async operations, and thereby also fetch, are not handled out of the box by the React Error Boundary. But fortunately react-query, which is being used both by the REST services (Orval) and graphql (Graphql Code Generator), has a way of addressing the problem. The QueryClient can be configured to trigger the Error Boundary system if an error is thrown. So that is what we are doing.
Two different types of error classes have been made in order to handle errors in the fetchers: http errors and fetcher errors.
Http errors are the ones originating from http errors and have a status code attached.
Fetcher errors are whatever else could go wrong apart from http errors, e.g. JSON parsing gone wrong.
Both types of errors come in two variants: \"normal\" and \"critical\". The idea is that only critical errors will trigger an Error Boundary.
For instance, if you look at the DBC Gateway fetcher it throws a DbcGateWayHttpError in case an http error occurs. DbcGateWayHttpError extends the FetcherCriticalHttpError which makes sure to trigger the Error Boundary system.
The reason why *.name is set in the errors is to make it clear which error was thrown. If we don't do that the name of the parent class is used in the error log. And then it is more difficult to trace where the error originated from.
The initial implementation is handling http errors on a coarse level: if response.ok is false then an error is thrown. If the error is critical the error boundary is triggered. In future versions you could take more things into consideration regarding the error:
Should all status codes trigger an error?
Should we have different types of error level depending on request and/or http method?
The FBS client adapter is autogenerated based on the swagger 1.2 json files from FBS. But Orval requires that we use swagger version 2.0, and the adapter has some changes in paths and parameters, so some conversion is needed at the time of this writing.
FBS documentation can be found here.
All this will hopefully be changed when/or if the adapter comes with its own specifications.
The dpl-fbs-adapter-tool repository, built around PHP and NodeJS, can translate the FBS specification into one usable by the Orval client generator. It also filters out all the FBS calls not needed by the DPL project.
The tool uses go-task to simplify the execution of the command.
In order to improve both UX and the performance score you can choose to use skeleton screens in situations where you need to fill the interface with data from requests to an external service.
The skeleton screens are shown instantly in order to deliver some content to the end user fast while loading data. When the data arrives the skeleton screens are replaced with the real data.
"},{"location":"dpl-react/skeleton_screens/#how-to-use-it","title":"How to use it","text":"
The skeleton screens are rendered with help from the skeleton-screen-css library. By using ssc classes you can easily compose screens that simulate the look of a \"real\" rendering with real data.
In this example we are showing a search result item as a skeleton screen. The skeleton screen consists of a cover, a headline and two lines of text. In this case we wanted to maintain the styling of the .card-list-item wrapper and show the skeleton screen elements by using ssc classes.
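A sketch of what that markup could look like; the ssc-* utility classes come from skeleton-screen-css, while the wrapper markup and the exact class names are assumptions:
<div className=\"card-list-item\">\n{/* Cover placeholder */}\n<div className=\"ssc-square\" />\n{/* Headline placeholder */}\n<div className=\"ssc-head-line\" />\n{/* Two lines of text */}\n<div className=\"ssc-line\" />\n<div className=\"ssc-line\" />\n</div>\n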
The main purpose of the functionality is to be able to access strings defined at app level inside of sub components without passing them all the way down via props. You can read more about the decision and considerations here.
"},{"location":"dpl-react/ui_text_handling/#how-to-use-it","title":"How to use it","text":"
In order to use the system, the component that has the text props needs to be wrapped with the withText higher-order function. The texts can hereafter be accessed by using the useText hook.
It is also possible to use placeholders in the text strings. They can be handy when you want dynamic values embedded in the text.
A classic example is the welcome message to the authenticated user. Let's say you have a text with the key: welcomeMessageText. The value from the data prop is: Welcome @username, today is @date. You would then need to reference it like this:
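A sketch of a component using the useText hook with placeholders; the component and the placeholder values are made up for illustration:
import * as React from \"react\";\nimport { useText } from \"../../core/utils/text\";\n\nconst WelcomeMessage: React.FC = () => {\n const t = useText();\n\n const message = t(\"welcomeMessageText\", {\n placeholders: {\n \"@username\": \"Jane\",\n \"@date\": new Date().toLocaleDateString()\n }\n });\n\n // E.g.: \"Welcome Jane, today is 1/1/2024\"\n return <div>{message}</div>;\n};\nexport default WelcomeMessage;\n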
Sometimes you want two versions of a text to be shown depending on whether you have one or multiple items being referenced in the text.
That can be accommodated by using the plural text definition.
Let's say that an authenticated user has a list of unread messages in an inbox. You could have a text key called: inboxStatusText. The value from the data prop is:
{\"type\":\"plural\",\"text\":[\"You have 1 message in the inbox\",\n\"You have @count messages in the inbox\"]}.\n
You would then need to reference it like this:
import * as React from \"react\";\nimport { useText } from \"../../core/utils/text\";\n\nconst InboxStatus: React.FC = () => {\n const t = useText();\n const user = getUser();\n const inboxMessageCount = getUserInboxMessageCount(user);\n\n const status = t(\"inboxStatusText\", {\n count: inboxMessageCount,\n placeholders: {\n \"@count\": inboxMessageCount\n }\n });\n\n return (\n <div>{status}</div>\n // If count == 1 the texts will be:\n // \"You have 1 message in the inbox\"\n\n // If count == 5 the texts will be:\n // \"You have 5 messages in the inbox\"\n );\n};\nexport default InboxStatus;\n
We are not able to persist and execute a users intentions across page loads. This is expressed through a number of issues. The main agitator is maintaining intent whenever a user tries to do anything that requires them to be authenticated. In these situations they get redirected off the page and after a successful login they get redirected back to the origin page but without the intended action fulfilled.
One example is the AddToChecklist functionality. Whenever a user wants to add a material to their checklist they click the \"Tilf\u00f8j til huskelist\" button next to the material presentation. They then get redirected to Adgangsplatformen. After a successful login they get redirected back to the material page but the material has not been added to their checklist.
After an intent has been stated we want the intention to be executed even though a page reload comes in the way.
We move to implementing what we define as an explicit intention before the actual action is tried for executing.
User clicks the button.
Intent state is generated and committed.
Implementation checks if the intended action meets all the requirements. In this case, being logged in and having the necessary payload.
If the intention meets all requirements we then fire the addToChecklist action.
Material is added to the user's checklist.
The difference between the two might seem superfluous but the important distinction to make is that with our current implementation we are not able to serialize and persist the actions as the application state across page loads. By defining intent explicitly we are able to serialize it and persist it between page loads.
This results in the implementation being able to rehydrate the persisted state, look at the persisted intentions and have the individual application implementations decide what to do with the intention.
A mock implementation of the case by case business logic looks as follows.
const initialStore = {\n authenticated: false,\n intent: {\n status: '',\n payload: {}\n }\n}\n\nconst fulfillAction = store.authenticated && \n (store.intent.status === 'pending' || store.intent.status === 'tried')\nconst getRequirements = !store.authenticated && store.intent.status === 'pending'\nconst abandonIntention = !store.authenticated && store.intent.status === 'tried'\n\nfunction AddToChecklist ({ materialId, store }) {\n useEffect(() => {\n if (fulfillAction) {\n // We fire the actual functionality required to add a material to the \n // checklist and we remove the intention as a result of it being\n // fulfilled.\n addToChecklistAction(store, materialId)\n } else if (getRequirements) {\n // Before we redirect we set the status to be \"tried\".\n redirectToLogin(store)\n } else if (abandonIntention) {\n // We abandon the intent so that we won't have an infinite loop of retries\n // at every page load.\n abandonAddToChecklistIntention(store)\n }\n }, [materialId, store.intent.status])\n return (\n <button\n onClick={() => {\n // We do not fire the actual logic that is required to add a material to\n // the checklist. Instead we add the intention of said action to the\n // store. This is when we would set the status of the intent to pending\n // and provide the payload.\n addToChecklistIntention(store, materialId)\n }}\n >\n Tilf\u00f8j til huskeliste\n </button>\n )\n}\n
We utilize session storage to persist the state on the client due to its short-lived nature and porous features.
We choose Redux as the framework to implement this. Redux is a blessed choice in this instance. It has widespread use, an approachable design and is well-documented. The best way to go about a current Redux implementation as of now is @reduxjs/toolkit. Redux is a sufficiently advanced framework to support other uses of application state and even co-locating shared state between applications.
For our persistence concerns we want to use the most commonly used tool for that, redux-persist. There are some implementation details to take into consideration when integrating the two.
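A minimal sketch of what that integration could look like, assuming a rootReducer module exists and using session storage; the persistence key and middleware settings are assumptions:
import { configureStore } from \"@reduxjs/toolkit\";\nimport { persistReducer, persistStore } from \"redux-persist\";\nimport sessionStorage from \"redux-persist/lib/storage/session\";\nimport rootReducer from \"./rootReducer\"; // assumed to exist\n\nconst persistedReducer = persistReducer(\n{ key: \"dpl-react\", storage: sessionStorage },\nrootReducer\n);\n\nexport const store = configureStore({\nreducer: persistedReducer,\n// redux-persist dispatches non-serializable actions, so relax the check here.\nmiddleware: (getDefaultMiddleware) =>\ngetDefaultMiddleware({ serializableCheck: false })\n});\n\nexport const persistor = persistStore(store);\n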
"},{"location":"dpl-react/architecture/adr-001-rehydration/#alternatives-considered","title":"Alternatives considered","text":""},{"location":"dpl-react/architecture/adr-001-rehydration/#persistence-in-url","title":"Persistence in URL","text":"
We could persist the intentions in the URL that is delivered back to the client after a page reload. This would still imply some of the architectural decisions described in Decision in regards to having an \"intent\" state, but some of the different status flags etc. would not be needed since state is virtually shared across page loads in the url. However this simpler solution cannot handle more complex situations than what can be described in the URL feasibly.
React offers useContext() for state management as an alternative to Redux.
We prefer Redux as it provides a more complete environment when working with state management. There is already a community of established practices and libraries which integrate with Redux. One example of this is our need to persist actions. When using Redux we can handle this with redux-persist. With useContext() we would have to roll our own implementation.
Some of the disadvantages of using Redux e.g. the amount of required boilerplate code are addressed by using @reduxjs/toolkit.
We are able to support most if not all of our rehydration cases and therefore pick up user flow from where we left it.
A heavy degree of complexity is added to tasks that require an intention instead of a simple action.
Saving the immediate state to the session storage makes for yet another place to \"clear cache\".
"},{"location":"dpl-react/architecture/adr-002-ui-text-handling/","title":"Architecture Decision Record: UI Text Handling","text":""},{"location":"dpl-react/architecture/adr-002-ui-text-handling/#context","title":"Context","text":"
It has been decided that app context/settings should be passed from the server side rendered mount points via data props. One type of settings is text strings that are defined by the system/host rendering the mount points. Since we are going to have quite some levels of nested components it would be nice to have a way to extract the strings without having to define them all the way down the tree.
A solution has been made that extracts the props holding the strings and puts them in the Redux store under the text index at the app entry level. That is done with the help of the withText() Higher Order Component. Having the strings in Redux enables us to fetch them at any point in the tree. A hook called useText() makes it simple to request a certain string inside a given component, as sketched below.
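A hedged sketch of how this could look from a consuming component's perspective; the useText() signature, the state shape under the text index and the string key are assumptions for illustration and may differ from the actual implementation:

```tsx
import React from "react";
import { useSelector } from "react-redux";

// Assumed shape: withText() has stored all *Text props under the "text" index
// in the Redux store at the app entry level.
type TextState = { text: Record<string, string> };

// Hypothetical useText() hook: looks a string up by key in the store and
// falls back to the key itself if the string is missing.
const useText = () => {
  const texts = useSelector((state: TextState) => state.text);
  return (key: string) => texts[key] ?? key;
};

// Any component deep in the tree can now request a string without it being
// passed down as a prop.
const AddToChecklistButton: React.FC = () => {
  const t = useText();
  return <button>{t("addToChecklistText")}</button>;
};

export default AddToChecklistButton;
```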
One major alternative would be to not do this at all and simply pass down the props. But that leaves us with text props all the way down the tree, which we would like to avoid. Some translation libraries have been investigated, but in most cases they would give us a lot of tools and complexity that is not needed to solve this relatively simple task.
Since we omit passing the text props down the tree, we are left with fewer props and a cleaner component setup. Although some "magic" has been introduced with text prop matching and storage in Redux, it is outweighed by the simplicity of the HOC wrapper and the useText hook.
As part of the project, we need to implement a dropdown autosuggest component. This component needs to be compliant with modern website accessibility rules:
The dropdown items are accessible by keyboard using the up and down arrow keys.
The items visibly change when they have focus.
Items are selectable using the keyboard.
Apart from these accessibility features, the component needs to follow a somewhat complex design described in this Figma file. As visible in the design, this autosuggest dropdown doesn't consist only of single-line text items but also contains suggestions for specific works, utilizing more complex suggestion items with cover pictures and release years.
Our research on the most popular and well-supported JavaScript libraries leans heavily on this specific article. In combination with our needs described above in the context section, and considering what it would mean to build this component from scratch without any libraries, the decision favored a library called Downshift.
This library is the second most popular JS library used to handle autosuggest dropdowns, multiselects, and select dropdowns, with a large following and continuous support. Out of the box it follows the ARIA principles and handles problems that we would normally have to solve ourselves (e.g. opening and closing of the dropdown, which item is currently in focus, etc.).
Another reason why we chose Downshift over its peer libraries is the amount of flexibility it provides. In our eyes this is a strong quality of the library that also allows us to implement more complex suggestion dropdown items, as sketched below.
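A hedged sketch of how Downshift's useCombobox hook could back such an autosuggest dropdown; the suggestion shape, filtering logic and styling are assumptions for illustration, not the project's actual component:

```tsx
import React, { useState } from "react";
import { useCombobox } from "downshift";

// Assumed suggestion shape; real work suggestions would also carry covers etc.
type Suggestion = { title: string };

const Autosuggest: React.FC<{ suggestions: Suggestion[] }> = ({ suggestions }) => {
  const [items, setItems] = useState(suggestions);
  const {
    isOpen,
    highlightedIndex,
    getInputProps,
    getMenuProps,
    getItemProps
  } = useCombobox({
    items,
    itemToString: (item) => (item ? item.title : ""),
    onInputValueChange: ({ inputValue }) => {
      // Filter locally here; a real implementation would query a service.
      setItems(
        suggestions.filter((s) =>
          s.title.toLowerCase().includes((inputValue ?? "").toLowerCase())
        )
      );
    }
  });

  return (
    <div>
      <input {...getInputProps()} />
      <ul {...getMenuProps()}>
        {isOpen &&
          items.map((item, index) => (
            <li
              key={item.title}
              // Downshift wires up ARIA attributes, focus handling and
              // keyboard navigation for each item.
              {...getItemProps({ item, index })}
              style={{
                background: highlightedIndex === index ? "lightgray" : "white"
              }}
            >
              {item.title}
            </li>
          ))}
      </ul>
    </div>
  );
};

export default Autosuggest;
```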
"},{"location":"dpl-react/architecture/adr-003-downshift/#alternatives-considered","title":"Alternatives considered","text":""},{"location":"dpl-react/architecture/adr-003-downshift/#building-the-autosuggest-dropdown-not-using-javascript-libraries","title":"Building the autosuggest dropdown not using javascript libraries","text":"
In this case, we would have to handle accessibility and state management of the component with our own custom solution.
Staying informed about the size of the JavaScript we require browsers to download to use the project plays an important part in ensuring a performant solution.
We currently have no awareness of this in the project, and the result only surfaces down the line when the project is integrated with the CMS, which is tested with Lighthouse.
To address this we want a solution that will help us monitor the changes to the size of the bundle we ship for each PR.
Bundlewatch and its predecessor, bundlesize, combine a CLI tool and a web app to provide bundle analysis and feedback on GitHub as status checks.
These solutions no longer seem to be actively maintained. There are several bugs that would affect us and fixes remain unmerged. The project relies on a custom secret instead of GITHUB_TOKEN. This makes supporting our fork-based development workflow harder.
This is a GitHub Action which can be used in configurations where statistics for two bundles are compared, e.g. for the base and head of a pull request. This results in a table of changes displayed as a comment in the pull request. Authentication is managed using GITHUB_TOKEN.
The decision to adopt react-use as part of the project originated from a problem that arose from having a useEffect hook with an object as a dependency.
useEffect compares dependencies by reference, so it does not support meaningful comparison of objects or arrays, and we needed a method for comparing such values by their contents.
We could have written our own implementation. But since this is a common problem we might as well use a community-backed solution, and react-use gives us a wealth of other tools.
We can now use useDeepCompareEffect instead of useEffect in cases where we have arrays or objects among the dependencies, as sketched below. And we can make use of all the other utility hooks that the package provides.
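A small sketch of the difference this makes, assuming a component that derives a filter object on every render; the component and fetch helper are hypothetical:

```tsx
import React from "react";
import { useDeepCompareEffect } from "react-use";

// Hypothetical fetch function, stubbed for the example.
const fetchResults = (filters: { query: string; pageSize: number }) => {
  console.log("Fetching with", filters);
};

// The filters object is recreated on every render, so a plain useEffect with
// it as a dependency would re-run on each render. useDeepCompareEffect
// compares the dependency by value instead of by reference.
const SearchResults: React.FC<{ query: string }> = ({ query }) => {
  const filters = { query, pageSize: 20 };

  useDeepCompareEffect(() => {
    fetchResults(filters);
  }, [filters]);

  return <p>Results for {query}</p>;
};

export default SearchResults;
```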
"},{"location":"dpl-react/architecture/adr-006-unit-tests/","title":"Architecture Decision Record: Unit Tests","text":""},{"location":"dpl-react/architecture/adr-006-unit-tests/#context","title":"Context","text":"
The code base is growing, and so is the number of functions and custom hooks.
While we have good coverage in our UI tests from Cypress, we are lacking something to test the inner workings of the applications.
With unit tests added we can test bits of functionality that are shared between different areas of the application and make sure that we get the expected output given different variations of input.
We decided to go with Vitest, which is an easy-to-use and very fast unit testing tool.
It has more or less the same capabilities as Jest, another popular testing framework.
Vitest is framework agnostic, so in order to make it possible to test hooks we found @testing-library/react-hooks, which works in conjunction with Vitest; see the sketch below.
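A minimal sketch of what a hook test could look like, assuming a Vitest setup with a DOM test environment configured; the useCounter hook is defined inline purely to keep the example self-contained and is not part of the code base:

```tsx
import { describe, expect, it } from "vitest";
import { renderHook, act } from "@testing-library/react-hooks";
import { useState } from "react";

// Hypothetical hook, defined inline so the example can run on its own.
const useCounter = (initial = 0) => {
  const [count, setCount] = useState(initial);
  const increment = () => setCount((current) => current + 1);
  return { count, increment };
};

describe("useCounter", () => {
  it("increments the count", () => {
    const { result } = renderHook(() => useCounter());

    // State updates must be wrapped in act() so React processes them before
    // we assert on the result.
    act(() => {
      result.current.increment();
    });

    expect(result.current.count).toBe(1);
  });
});
```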
We could have used Jest, but trying that we experienced major problems with having both Jest and Cypress in the same codebase: they have colliding test function names, and TypeScript could not tell them apart.
There is probably a solution, but at the same time Vitest was recommended to us. It seemed very fast and just as capable as Jest, and we did not have the issues of colliding function/object names.
We now have unit tests as part of the mix, which brings more stability and certainty that the individual pieces of the application work.
"}]}
\ No newline at end of file
diff --git a/sitemap.xml b/sitemap.xml
new file mode 100644
index 00000000..0f8724ef
--- /dev/null
+++ b/sitemap.xml
@@ -0,0 +1,3 @@
+
+
+
\ No newline at end of file
diff --git a/sitemap.xml.gz b/sitemap.xml.gz
new file mode 100644
index 00000000..ffb5d56a
Binary files /dev/null and b/sitemap.xml.gz differ