We use the RESTful Web Services and OpenAPI REST Drupal modules to expose endpoints from Drupal as an API to be consumed by external parties.

To expose a new endpoint:

- Create a new REST resource plugin by extending `Drupal\rest\Plugin\ResourceBase` and annotating it with `@RestResource` (a sketch of such a plugin follows this list).
- Define `uri_paths`, `route_parameters` and `responses` in the annotation in as much detail as possible to create a strong specification.
- Enable the REST UI module: `drush pm-enable restui`.
- Enable the new resource and configure it to use the `dpl_login_user_token` authentication provider for all resources which will be used by the frontend. This will provide a library or user token by default.
- Inspect the updated OpenAPI specification at `/openapi/rest?_format=json` to ensure it looks as intended.
- Run `task ci:openapi:validate` to validate the updated OpenAPI specification.
- Run `task ci:openapi:download` to download the updated OpenAPI specification.
- Uninstall the REST UI module: `drush pm-uninstall restui`.
- Export the updated configuration: `drush config-export`.
- Commit the changes, including the updated `openapi.json`.
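The following is a minimal sketch of such a resource plugin, assuming a hypothetical module named dpl_example; the plugin id, path and payload are made-up examples rather than an existing DPL CMS endpoint.

```php
<?php

namespace Drupal\dpl_example\Plugin\rest\resource;

use Drupal\rest\Plugin\ResourceBase;
use Drupal\rest\ResourceResponse;

/**
 * Example resource exposing a simple status payload.
 *
 * @RestResource(
 *   id = "dpl_example_status",
 *   label = @Translation("DPL example status"),
 *   uri_paths = {
 *     "canonical" = "/api/v1/example-status"
 *   }
 * )
 */
class ExampleStatusResource extends ResourceBase {

  /**
   * Responds to GET requests.
   */
  public function get(): ResourceResponse {
    // Keep the payload aligned with the responses documented for the endpoint
    // so the generated OpenAPI specification stays accurate.
    return new ResourceResponse(['status' => 'ok']);
  }

}
```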
See the new ADR in adr-001b-configuration-management.md
+Configuration management for DPL CMS is a complex issue. The complexity stems +from different types of DPL CMS sites.
+There are two approaches to the problem:
+A solution to configuration management must live up to the following test:
1. Install a new site using the existing configuration: `drush site-install --existing-config -y`.
2. Change a core configuration value on the site, run `drush config-import -y` and see that the change is rolled back and the configuration value is back to default. This shows that core configuration will remain managed by the configuration system.
3. Change a local configuration value on the site and run `drush config-import -y` to see that no configuration is imported. This shows that local configuration which can be managed by Editor libraries will be left unchanged.
4. Enable an additional module on the site and run `drush config-import -y` to see that the module is not disabled and the configuration remains unchanged. This shows that local configuration in the form of new modules added by Webmaster libraries will be left unchanged.

We use the Configuration Ignore module to manage configuration.
+The module maintains a list of patterns for configuration which will be ignored +during the configuration import process. This allows us to avoid updating local +configuration.
+By adding the wildcard *
at the top of this list we choose an approach where all configuration is considered local by default.

Core configuration which should not be ignored can then be added to subsequent lines with the `~` prefix. On a site these configuration entries will be updated to match what is in the core configuration.

Config Ignore also has the option of ignoring specific values within settings. This is relevant for settings such as `system.site` where we consider the site name local configuration but the 404 page path core configuration.
The Deconfig module allows developers +to mark configuration entries as exempt from import/export. This would allow us +to exempt configuration which can be managed by the library.
+This does not handle configuration coming from new modules uploaded on webmaster +sites. Since we cannot know which configuration entities such modules will +provide and Deconfig has no concept of wildcards we cannot exempt the +configuration from these modules. Their configuration will be removed again at +deployment.
+We could use partial imports through drush config-import --partial
to not
+remove configuration which is not present in the configuration filesystem.
We prefer Config Ignore as it provides a single solution to handle the entire +problem space.
+The Config Ignore Auto module +extends the Config Ignore module. Config Ignore Auto registers configuration +changes and adds them to an ignore list. This way they are not overridden on +future deployments.
The module is based on the assumption that if a user has access to a configuration form they should also be allowed to modify that configuration for their site.
+This turns the approach from Config Ignore on its head. All configuration is now +considered core until it is changed on the individual site.
+We prefer Config Ignore as it only has local configuration which may vary +between sites. With Config Ignore Auto we would have local configuration and +the configuration of Config Ignore Auto.
Config Ignore Auto also has special handling of the `core.extension` configuration which manages the set of installed modules. Since webmaster sites can have additional modules installed we would need workarounds to handle these.
The Config Split module allows +developers to split configurations into multiple groups called settings.
+This would allow us to map the different types of configuration to different +settings.
+We have not been able to configure this module in a meaningful way which also +passed the provided test.
- Core developers must run `drush config-export` and have the appropriate configuration not ignored.
- Since `core.extension` is ignored, Core developers will have to explicitly enable and uninstall modules through code as a part of the development process.
+ process.Configuration management for DPL CMS is a complex issue. The complexity stems +from different types of DPL CMS sites.
+There are two approaches to the problem:
+A solution to configuration management must live up to the following test:
1. Install a new site using the existing configuration: `drush site-install --existing-config -y`.
2. Change a core configuration value on the site, run `drush config-import -y` and see that the change is rolled back and the configuration value is back to default. This shows that core configuration will remain managed by the configuration system.
3. Change a local configuration value on the site and run `drush config-import -y` to see that no configuration is imported. This shows that local configuration which can be managed by Editor libraries will be left unchanged.
4. Enable an additional module on the site and run `drush config-import -y` to see that the module is not disabled and the configuration remains unchanged. This shows that local configuration in the form of new modules added by Webmaster libraries will be left unchanged.

We use the Configuration Ignore module and the Config Ignore Auto module to manage configuration.
+The base module maintains a list of patterns for configuration which will be +ignored during the configuration import process. This allows us to avoid +updating local configuration.
Here we can add some of the settings files that we already know need to be ignored and admin-respected. But in reality we don't need to do this manually, because of the second module:
+Config Ignore Auto is only enabled on non-development sites.
It works by treating any settings that are updated (site settings, module settings etc.) as settings to be ignored. These settings will NOT be overridden on the next deploy by `drush config-import`.
The consequences of using this setup are:
+1) We need to ignore core.extension.yml
, for administrators to manage modules.
   This means that we need to enable/disable new modules using code.
   See `dpl_base.install` for how to do this through Drupal update hooks.

2) If a faulty permission has been added, or if a decision has been made to
   remove an existing permission, there might be config that we don't want to
   ignore that is ignored on some libraries.
   This means we'll first have to detect which libraries have overridden config

   ```bash
   drush config:get config_ignore_auto.settings ignored_config_entities --format json
   ```

   and then either decide to override it or migrate the existing configuration.

3) A last and final consequence is that we need to treat permissions more
   strictly than we do now.
   An example is `administer site settings`, which allows both things we want to
   ignore (site name) and things we don't want to ignore (404 node ID).
The Deconfig module allows developers +to mark configuration entries as exempt from import/export. This would allow us +to exempt configuration which can be managed by the library.
+This does not handle configuration coming from new modules uploaded on webmaster +sites. Since we cannot know which configuration entities such modules will +provide and Deconfig has no concept of wildcards we cannot exempt the +configuration from these modules. Their configuration will be removed again at +deployment.
+We could use partial imports through drush config-import --partial
to not
+remove configuration which is not present in the configuration filesystem.
We prefer Config Ignore as it provides a single solution to handle the entire +problem space.
+The Config Split module allows +developers to split configurations into multiple groups called settings.
+This would allow us to map the different types of configuration to different +settings.
+We have not been able to configure this module in a meaningful way which also +passed the provided test.
- Core developers must run `drush config-export` and have the appropriate configuration not ignored.
- Since `core.extension` is ignored, Core developers will have to explicitly enable and uninstall modules through code as a part of the development process.

There are different types of users that are interacting with the CMS system:
We need to be able to handle that both types of users can be authenticated and authorized in the scope of permissions that are tied to the user type.
We had some discussions about whether the Adgangsplatform users should be tied to a Drupal user or not. As we saw it we had two options when a user logs in:
We ended up with decision no. 2 mentioned above. So we create a Drupal user upon login if it does not already exist.
We use the OpenID Connect / OAuth client module to manage patron authentication and authorization, and we have developed a plugin for the module, called Adgangsplatformen, which connects the external OAuth service with dpl-cms.
Editors and administrators, a.k.a. normal Drupal users, do not require additional handling.
+Instead of creating a new user for every single user logging in via +Adgangsplatformen you could consider having just one Drupal user for all the +external users. That would get rid of the UUID -> Drupal user id mapping that +has been implemented as it is now. And it would prevent creation of a lot +of users. The decision depends on if it is necessary to distinguish between the +different users on a server side level.
The DPL React components need to be integrated and available for rendering in Drupal. The components depend on a library token and an access token being set in JavaScript.
+We decided to download the components with composer and integrate them as Drupal +libraries.
As described in adr-002-user-handling we are setting an access token in the user session when a user has been through a successful login at Adgangsplatformen.
We decided that the library token is fetched by a cron job on a regular basis and saved in a `KeyValueExpirable` store which automatically expires the token when it is outdated.
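A minimal sketch of that pattern is shown below. The module name, collection, key and the token fetching service are illustrative assumptions; the actual implementation lives in the DPL CMS codebase.

```php
<?php

/**
 * Implements hook_cron().
 */
function dpl_example_cron(): void {
  // Assumption: a service returns a fresh library token and its lifetime
  // in seconds from Adgangsplatformen.
  $token = \Drupal::service('dpl_example.token_fetcher')->fetch();

  // Store the token with an expiry so it disappears automatically when it is
  // outdated. get('library_token') returns NULL once the entry has expired.
  \Drupal::keyValueExpirable('dpl_example')
    ->setWithExpire('library_token', $token['value'], $token['expire']);
}
```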
The library token and the access token are set in javascript on the endpoint:
+/dpl-react/user.js
. By loading the script asynchronously when mounting the components in JavaScript we are able to complete the rendering.
The general caching strategy is defined in another document; this one focuses on describing the caching strategy of DPL React and other JS resources.
+We need to have a caching strategy that makes sure that:
+We have created a purger in the Drupal Varnish/Purge setup that is able to purge
+everything. The purger is being used in the deploy routine by the command:
+drush cache:rebuild-external -y
Regarding the `PURGE` request: we found out, by studying the VCL of Lagoon, that the `PURGE` request actually is being translated into a `BAN` on `req.url`.
+Historically these systems have provided setups accessible from automated +testing environments. Two factors make this approach problematic going forward:
+To address these problems and help achieve a goal of a high degree of test +coverage the project needs a way to decouple from these external APIs during +testing.
+We use WireMock to mock API calls. Wiremock provides +the following feature relevant to the project:
- There is a `wiremock-php` client library to instrument WireMock from PHP code. We modernized the `behat-wiremock-extension` to instrument WireMock from Behat tests, which we use for integration testing (see the sketch below).
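To illustrate what such instrumentation looks like, here is a minimal sketch using the wiremock-php client; the host, port and stubbed endpoint are assumptions for the example.

```php
<?php

use WireMock\Client\WireMock;

// Connect to a running WireMock instance, e.g. a Docker service in the test setup.
$wireMock = WireMock::create('wiremock', 8080);

// Stub an external API endpoint so the test does not depend on the real service.
$wireMock->stubFor(
  WireMock::get(WireMock::urlEqualTo('/external/api/status'))
    ->willReturn(WireMock::aResponse()
      ->withHeader('Content-Type', 'application/json')
      ->withBody('{"status": "ok"}'))
);
```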
+Generally record/replay makes it easy to setup a lot of mock data quickly. +However, it can be hard to maintain these records as it is not obvious what part +of the data is important for the test and the relationship between the +individual tests and the corresponding data is hard to determine.
+Consequently, this project prefers instrumentation.
+There are many other tools which provide features similar to Wiremock. These +include:
+wiremock-php
and
+ behat-wiremock-extension
libraryInstrumentation of Wiremock with PHP is made obsolete with the migration from +Behat to Cypress.
+ + + + + + + + + + + + + +DPL CMS provides HTTP end points which are consumed by the React components. We +want to document these in an established structured format.
+Documenting endpoints in a established structured format allows us to use tools +to generate client code for these end points. This makes consumption easier and +is a practice which is already used with other +services +in the React components.
+Currently these end points expose business logic tied to configuration in the +CMS. There might be a future where we also need to expose editorial content +through APIs.
+We use the RESTful Web Services Drupal module +to expose an API from DPL CMS and document the API using the OpenAPI 2.0/Swagger +2.0 specification as supported by the +OpenAPI and OpenAPI REST +Drupal modules.
+This is a technically manageable, standards compliant and performant solution +which supports our initial use cases and can be expanded to support future +needs.
+There are two other approaches to working with APIs and specifications for +Drupal:
+The tagging system we have (tags & categories) considers content as 'islands': +Two peices of content may be tagged with the same, but they do not know about +each other.
+DDF however needs a way to structure content hierarchies. +After a lot of discussion, we reached the conclusion that this can be +materialized through the breadcrumb - the breadcrumb is basically the frontend +version of the content structure tree that editors will create and manage.
+Because of this, the breadcrumb is "static" and not "dynamic" - e.g., on some +sites, the breadcrumb is built dynamically, based on the route that the user +takes through the site, but in this case, the whole structure is built by +the editors.
+However, some content is considered "flat islands" - e.g. articles and events +should not know anything about each other, but still be categorized.
+Either way, the breadcrumb also defines the URL alias.
+There are two types of breadcrumbs:
+All of this is managed by dpl_breadcrumb
module.
We tried using menus instead of taxonomy, based on experience from another +project, but it caused too much confusion and a general poor admin experience. +More info about +that in the ticket comments:
+A functional breadcrumb, that is very hard to replace/migrate if we choose a +different direction.
+ + + + + + + + + + + + + +DPL CMS employs functional testing to ensure the functional integrity of the +project.
+This is currently implemented using Behat +which allows developers to instrument a browser navigating through different use +cases using Gherkin, a +business readable, domain specific language. Behat is used within the project +based on experience using it from the previous generation of DPL CMS.
+Several factors have caused us to reevaluate that decision:
+We choose to replace Behat with Cypress for functional testing.
+There are other prominent tools which can be used for browser based functional +testing:
+DPL CMS is only intended to integrate with one external system: +Adgangsplatformen. This integration is necessary to obtain patron and library +tokens needed for authentication with other business systems. All these +integrations should occur in the browser through React components.
+The purpose of this is to avoid having data passing through the CMS as an +intermediary. This way the CMS avoids storing or transmitting sensitive data. +It may also improve performance.
+In some situations it may be beneficiary to let the CMS access external systems +to provide a better experience for business users e.g. by displaying options +with understandable names instead of technical ids or validating data before it +reaches end users.
+We choose to allow CMS to access external systems server-side using PHP. +This must be done on behalf of the library - never the patron.
+dpl_library_token.handler
service.The current translation system for UI strings in DPL CMS is based solely on code
+deployment of .po
files.
However DPL CMS is expected to be deployed in about 100 instances just to cover +the Danish Public Library institutions. Making small changes to the UI texts in +the codebase would require a new deployment for each of the instances.
+Requiring code changes to update translations also makes it difficult for
+non-technical participants to manage the process themselves. They have to find a
+suitable tool to edit .po
files and then pass the updated files to a
+developer.
This process could be optimized if:
+We keep using GitHub as a central source for translation files.
+We configure Drupal to consume translations from GitHub. The Drupal translation +system already supports runtime updates and consuming translations from a +remote source.
+We use POEditor to perform translations. POEditor is a
+translation management tool that supports .po
files and integrates with
+GitHub. To detect new UI strings a GitHub Actions workflow scans the codebase
+for new strings and notifies POEditor. Here they can be translated by
+non-technical users. POEditor supports committing translations back to GitHub
+where they can be consumed by DPL CMS instances.
This approach has a number of benefits apart from addressing the original +issue:
+.po
files.A consequence of this approach is that developers have to write code that
+supports scanning. This is partly supported by the Drupal Code Standards. To
+support contexts developers also have to include these as a part of the t()
+function call e.g.
// Good
+$this->t('A string to be translated', [], ['context' => 'The context']);
+$this->t('Another string', [], ['context' => 'The context']);
+// Bad
+$c = ['context' => 'The context']
+$this->t('A string to be translated', [], $c);
+$this->t('Another string', [], $c);
+
We could consider writing a custom sniff or PHPStan rule to enforce this
+For covering the functionality of scanning the code we had two potential +projects that could solve the case:
+ +Both projects can scan the codebase and generate a .po
or .pot
file with the
+translation strings and context.
At first it made most sense to go for Potx since it is used by
+localize.drupal.org and it has a long history.
+But Potx is extracting strings to a .pot
file without having the possibility
+of filling in the existing translations. So we ended up using Potion which can
+fill in the existing strings.
A flip side using Potion is that it is not being maintained anymore. But it +seems quite stable and a lot of work has been put into it. We could consider to +back it ourselves.
+We considered the following alternatives:
+Events make up an important part of the overall activities the Danish Public +Libraries. They aim to promote information, education and cultural activity. +Events held at the library have many different formats like book readings, +theater for children, tutoring and exhibitions of art. Some of these events +are singular and some are recurring.
+We define a recurring event as an event that occurs more than once over the +course of a period of time where the primary different between each occurrence +of the event is the event start and end time.
+A simple solution to this would be to use a multi-value date range field but +there are a number of factors that makes this more challenging:
+We have decided to base our solution on the Recurring Events Drupal module.
+The purpose of the module overlaps with our need in regards to handling +recurring events. The module is based on a construct where a recurring event +consists of an event series entity which has one or more related event +instances. Each instance corresponds to an specific date when the event +occurs.
+The module solves our requirements the following way:
+The module supports creating shedules for events with daily, weekly, monthly, +and annual repetitions. Each frequency can be customized, for example, which +days of the week the weekly event should repeat on (e.g., every Tuesday and +Thursday), which days of the month events should repeat on (e.g., the first +Wednesday, the third Friday).
+Event series and instances are fieldable entities. The module relies on the +Field Inheritance Drupal module +which allows data to be set on event series and reuse it on individual +entities.
+Recurring events support exceptions in two ways:
+The Field Inheritance module supports different modes of reuse. By using the +fallback method we can allow editors override values from event series on +individual instances.
+Recurring events creates a relationship between an event series and individual +instances. Through this relationsship we can determine what other instances +might be for an individual instance.
+It is possible to create lists of individual instances of events using Views.
+Recurring events uses a lot of vertical screen real estate on form elements +needed to define schedules (recurrence type, date/time, schedule, excluded +dates).
+The module supports defining event duration (30 minutes, 1 hour, 2 hours). +This is simpler than having to potentially repeat date and time.
+Recurring events lists six maintainers on Drupal.org and is supported by +three companies. Among them is Lullabot, a well known company within the +community.
+The module has over 1.000 sites reported using the module. The latest +version, 2.0.0-rc16, was recently released on December 1th 2023.
+The dependency, Field Inheritance, currently requires two patches to +Drupal Core.
+By introducing new entity types for event series and instance has some +consequences:
+For future work related to events we have to consider when it is appropriate to +use event series and when to use event instances.
+To create a consistent data structure for events we have to use recurring +events - even for singular events.
+We may have to do work to improve the editorial experience. This should +preferably be upstreamed to the Recurring Events module.
+Going forward we will have to participate in keeping Field Inheritance patches +to Drupal Core updated to match future versions until they are merged.
+In the process two alternative Drupal modules were considered:
+The translation system described in +adr-009-translation-system +handles solely the translations added through Drupals traditional translation handling.
+But there was a wish for being able to translate configuration related strings +as well. This includes titles, labels, descriptions, +and other elements in the admin area.
+We needed to find a way to handle translation of that kind as well.
+We went for a solution where we activated the Configuration Translation +Drupal core module and added the Configuration Translation PO contrib module.
+And we added a range of custom drush commands to handle the various +configuration translation tasks.
+By sticking to the handling of PO files in configuration handling that we are +already using in our general translation handling, +we can keep the current Github workflows with some alterations.
+Unfortunately handling translations of configuration on local sites +is still as difficult as before. +Translation of configuration texts cannot be found in the standard UI strings +translation list.
+With the config translation PO files added we tried to uncover if POEditor was +able to handle two PO files simultaneously in both import and export context. +It could not.
+But we still needed, in Drupal, to be able to import two different files: +One for general translations and one for configuration translations.
We came up with the idea that we could merge the two files when importing into POEditor and split them again when exporting from POEditor.
+We tried it out and it worked so that was the solution we ended up with.
+We could activate only the config_translate module and add a danish translations +in config/sync. +But then:
+We could keep the machine names of the config in English but write the titles, +labels, descriptions in Danish.
+But that would have the following bad consequences:
+Change the Potion module to be able to scan configuration translations as well.
+We did not have a clear view of the concept of localizing configuration +translations in the same manner as the Potion module scans the codebase. +It could either be cumbersome to get the two worlds to meet in the same Potion +functionalities or simply incompatible.
+ + + + + + + + + + + + + +DPL CMS exposes data and functionality through HTTP endpoints which are +documented by an OpenAPI specification as described in +a previous ADR.
+Over time this API may need to be updated as the amount of data and +functionality increases and changes. Handling changes is an important aspect +of an API as such changes may affect third parties consuming this API.
+We use URI versioning of the API exposed +by DPL CMS.
+This is a simple approach to versioning which works well with the +RESTful Web Services Drupal module +that we use to develop HTTP endpoints with. Through the specification of the +paths provided by the endpoints we can define which version of an API the +endpoint corresponds to.
+When a breaking change is made the version of the API is increased by one e.g.
+from /api/v1/events
to /api/v2/events
.
We consider the following changes breaking:
+The following changes are not considered breaking:
+The existing version will continue to exist.
+Header based versioning is used by +other systems exposing REST APIs +in the infrastructure of the Danish Public Libraries. However we cannot see +this approach working well with the RESTful Web Services Drupal module. +It does not deal with multiple versions of an endpoint which different +specifications.
+Versionless GraphQL APIs are a common practice +Drupal can support GraphQL through a third party module +but using this would require us to reverse our approach to API development and +specification.
+Based on this approach we can provide updated versions of our API by leveraging +our existing toolchain.
+ + + + + + + + + + + + + +In the DPL CMS, we integrate React applications within a Drupal system as +outlined in a previous ADR. Effective +JavaScript error logging is crucial for timely detection, diagnosis, and +resolution of issues. It's essential that our logging setup not only captures +client-side errors efficiently but also integrates seamlessly with Drupal's +Watchdog logging system. This allows for logs to be forwarded to Grafana for +real-time monitoring and analysis, enhancing our system's reliability and +performance monitoring capabilities.
+After evaluating several options, we decided to integrate JSNLog +via the JSNLog Drupal module for +logging JavaScript errors. This integration allows us to capture and log +client-side errors directly from our React components into our server-side +logging infrastructure.
+The module does not have a stable release at the time of writing, which + poses risks regarding reliability and ongoing support.
+During testing, it generated excessively large numbers of log entries, + which could overwhelm our logging infrastructure and complicate error analysis.
+Custom built solution:
+Significant development time and resources required to build, test, and + maintain the module.
+Lacks the community support and proven stability found in established + third-party solutions, potentially introducing risks in terms of long-term + reliability and scalability.
+Third-party services:
+DPL-CMS relies on two levels of caching. Standard Drupal Core caching, and +Varnish as an accelerating HTTP cache.
+The Drupal Core cache uses Redis as its storage backend. This takes the load off +of the database-server that is typically shared with other sites.
+Further more, as we rely on Varnish for all caching of anonymous traffic, the +core Internal Page Cache module has been disabled.
+Varnish uses the standard Drupal VCL from lagoon.
+The site is configured with the Varnish Purge module and configured with a +cache-tags based purger that ensures that changes made to the site, is purged +from Varnish instantly.
+The configuration follows the Lagoon best practices - reference the +Lagoon documentation on Varnish +for further details.
+ + + + + + + + + + + + + +The following guidelines describe best practices for developing code for the DPL +CMS project. The guidelines should help achieve:
+Contributions to the core DPL CMS project will be reviewed by members of the +Core team. These guidelines should inform contributors about what to expect in +such a review. If a review comment cannot be traced back to one of these +guidelines it indicates that the guidelines should be updated to ensure +transparency.
+The project follows the Drupal Coding Standards +and best practices for all parts of the project: PHP, JavaScript and CSS. This +makes the project recognizable for developers with experience from other Drupal +projects. All developers are expected to make themselves familiar with these +standards.
+The following lists significant areas where the project either intentionally +expands or deviates from the official standards or areas which developers should +be especially aware of.
+js-
.
+ Example: <div class="gallery js-gallery">
- .gallery
must only be used in
+ CSS, js-gallery
must only be used in JS.is-active
, has-child
or similar are classes that can be used as
+ an interlink between JS and CSS.gallery.scss
which contains
+ .gallery, .gallery__container, .gallery__*
and so on. Avoid referencing
+ other components when possible.component.scss
, there must be a comment at the top, describing
+ the purpose of the component..pane-content
. Use
+ the Drupal theme system to write custom HTML and use precise, descriptive
+ class names. It is better to have several class names on the same element,
+ rather than reuse the same class name for several components.$layout-width * 0.60
) or give it a descriptive variable name
+ ($side-bar-width // 350px works well with the current $layout-width_
).class &
). The use of parent selector
+ results in complex deeply nested code which is very hard to maintain. There
+ are times where it makes sense, but for the most part it can and should be
+ avoided.dpl
.dpl
prefix is not required for modules which provide functionality deemed
+ relevant outside the DPL community and are intended for publication on
+ Drupal.org.Files provided by modules must be placed in the following folders and have the +extensions defined here.
+MODULENAME.*.yml
MODULENAME.module
MODULENAME.install
templates/*.html.twig
src/**/*.php
tests/**/*.php
/css/COMPONENTNAME.css
/scss/COMPONENTNAME.scss
js/*.js
img/*.(png|jpeg|gif|svg)
Programmatic elements such as settings, state values and views modules must +comply to a set of common guidelines.
+As there is no finite set of programmatic elements for a DPL CMS site these +apply to all types unless explicitly specified.
+The project follows the code structure suggested by the +drupal/recommended-project Composer template.
+Modules, themes etc. must be placed within the corresponding folder in this +repository. If a module developed in relation to this project is of general +purpose to the Drupal community it should be placed on Drupal.org and included +as an external dependency.
+A module must provide all required code and resources for it to work on its own +or through dependencies. This includes all configuration, theming, CSS, images +and JavaScript libraries.
+All default configuration required for a module to function should be +implemented using the Drupal configuration system and stored in the version +control with the rest of the project source code.
+If an existing module is expanded with updates to current functionality the +default behavior must be the same as previous versions or as close to this as +possible. This also includes new modules which replaces current modules.
+If an update does not provide a way to reuse existing content and/or +configuration then the decision on whether to include the update resides with +the business.
+Modules which alter or extend functionality provided by other modules should use +appropriate methods for overriding these e.g. by implementing alter hooks or +overriding dependencies.
+All interface text in modules must be in English. Localization of such texts +must be handled using the Drupal translation API.
+All interface texts must be provided with a context. This supports separation +between the same text used in different contexts. Unless explicitly stated +otherwise the module machine name should be used as the context.
+The project uses package managers to handle code which is developed outside of +the Core project repository. Such code must not be committed to the Core project +repository.
+The project uses two package manages for this:
+When specifying third party package versions the project follows these +guidelines:
+The project uses patches rather than forks to modify third party packages. This +makes maintenance of modified packages easier and avoids a collection of forked +repositories within the project.
+Code may return null or an empty array for empty results but must throw +exceptions for signalling errors.
+When throwing an exception the exception must include a meaningful error message +to make debugging easier. When rethrowing an exception then the original +exception must be included to expose the full stack trace.
+When handling an exception code must either log the exception and continue +execution or (re)throw the exception - not both. This avoids duplicate log +content.
+Drupal modules must use the Logging API. +When logging data the module must use its name as the logging channel and +an appropriate logging level.
+Modules integrating with third party services must implement a Drupal setting +for logging requests and responses and provide a way to enable and disable this +at runtime using the administration interface. Sensitive information (such as +passwords, CPR-numbers or the like) must be stripped or obfuscated in the logged +data.
+Code comments which describe what an implementation does should only be used +for complex implementations usually consisting of multiple loops, conditional +statements etc.
+Inline code comments should focus on why an unusual implementation has been +implemented the way it is. This may include references to such things as +business requirements, odd system behavior or browser inconsistencies.
+Commit messages in the version control system help all developers understand the +current state of the code base, how it has evolved and the context of each +change. This is especially important for a project which is expected to have a +long lifetime.
+Commit messages must follow these guidelines:
+When creating a pull request the pull request description should not contain any +information that is not already available in the commit messages.
+Developers are encouraged to read How to Write a Git Commit Message +by Chris Beams.
+The project aims to automate compliance checks as much as possible using static +code analysis tools. This should make it easier for developers to check +contributions before submitting them for review and thus make the review process +easier.
+The following tools pay a key part here:
+In general all tools must be able to run locally. This allows developers to get +quick feedback on their work.
+Tools which provide automated fixes are preferred. This reduces the burden of +keeping code compliant for developers.
+Code which is to be exempt from these standards must be marked accordingly in +the codebase - usually through inline comments (Eslint, +PHP Codesniffer). +This must also include a human readable reasoning. This ensures that deviations +do not affect future analysis and the Core project should always pass through +static analysis.
+If there are discrepancies between the automated checks and the standards +defined here then developers are encouraged to point this out so the automated +checks or these standards can be updated accordingly.
+ + + + + + + + + + + + + +Setting up a new site for testing certain scenarios can be repetitive. To avoid +this the project provides a module: DPL Config Import. This module can be used +to import configuration changes into the site and install/uninstall modules in a +single step.
+The configuration changes are described in a YAML file with configuration entry +keys and values as well as module ids to install or uninstall.
+/admin/config/configuration/import
The yaml file has two root elements configuration
and modules
.
A basic file looks like this:
+configuration:
+ # Add keys for configuration entries to set.
+ # Values will be merged with existing values.
+ system.site:
+ # Configuration values can be set directly
+ slogan: 'Imported by DPL config import'
+ # Nested configuration is also supported
+ page:
+ # All values in nested configuration must have a key. This is required to
+ # support numeric configuration keys.
+ 403: '/user/login'
+
+modules:
+ # Add module ids to install or uninstall
+ install:
+ - menu_ui
+ uninstall:
+ - redis
+
We use the Configuration Ignore Auto module +to manage configuration.
+In general all configuration is ignored except for configuration which should +explicitly be managed by DPL CMS core.
+Configuration management for DPL CMS is a complex issue. The complexity stems +from the following factors:
+There are multiple types of DPL CMS sites all using the same code base:
+All these site types must support the following properties:
+This can be split into different types of configuration:
+drush site-install --existing-config -y
drush config-export -y
config_ignore.settings.ignored_config_entities
with the ~
prefixconfig_ignore.settings
+ filedrush config-export -y
NB: The keys for these configuration files should already be in
+config_ignore.settings.ignored_config_entities
.
drush config-export -y
dpl_update.install
.function dpl_update_update_9000() {
+ \Drupal::service('module_installer')->install(['shortcut']);
+}
+
drush updatedb -y
drush config-export -y
function dpl_cms_update_9001() {
+ \Drupal::service('module_installer')->uninstall(['shortcut']);
+}
+
drush updatedb -y
drush config-export -y
drush deploy
NB: It is important that the official Drupal deployment procedure is followed. +Database updates must be executed before configuration is imported. Otherwise +we risk ending up in a situation where the configuration contains references +to modules which are not enabled.
+ + + + + + + + + + + + + + +We use the Recurring Events and Field Inheritance modules to manage events.
+This creates two fieldable entity types, eventseries
and eventinstance
,
+which we customize to our needs. We try to follow a structured approach when
+setting up setup of fields between the two entities.
eventseries
entity on /admin/structure/events/series/types/eventseries_type/default/edit/fields
eventinstance
entity on /admin/structure/events/instance/types/eventinstance_type/default/edit/fields
.eventseries
including type,
+ label, machine name, number of values etc.
+ Exception: The field on the eventinstance
entity must not be
+ required. Otherwise the Fallback strategy will not work.eventseries
to eventinstance
on /admin/structure/field_inheritance
+ with the label "Event [field name]" e.g. "Event tags" and the "Fallback"
+ inheritance strategy.eventseries
, default
and the machine name for the field as the
+ source.eventinstance
, default
and the machine name for the field as the
+ destination.eventseries
on /admin/structure/events/series/types/eventseries_type/default/edit/display
eventinstance
on /admin/structure/events/instance/types/eventinstance_type/default/edit/display
eventinstance
in the same
+ way as the source field on eventseries
.field--field-tags.html.twig
field--eventinstance--event-tags.html.twig
.{{ include('field--field-tags.html.twig') }}
Events make up an important part of the overall activities the Danish Public +Libraries. One business aspect of these events is ticketing. Municipalities +in Denmark use different external vendors for handling this responsibility +which includes functionalities such payment, keeping track of availability, +validation, seating etc.
+On goal for libraries is to keep staff workflows as simple as possible and +avoid duplicate data entry. To achieve this DPL CMS exposes data and +functionality as a part of the public API of the system.
+The public API for DPL CMS is documented through an OpenAPI 2.0 specification.
+The following flow diagram represents a suggested approach for synchronizing +event data between DPL CMS and an external system.
+ +sequenceDiagram
+ Actor EventParticipant
+ Participant DplCms
+ Participant ExternalSystem
+ ExternalSystem ->> DplCms: Retrieve all events
+ activate ExternalSystem
+ activate DplCms
+ DplCms ->> ExternalSystem: List of all publicly available events
+ deactivate DplCms
+ ExternalSystem ->> ExternalSystem: (Optional) Filter out any events that have not been marked as relevant (ticket_manager_relevance)
+ ExternalSystem ->> ExternalSystem: Identify new events by UUID and create them locally
+ ExternalSystem ->> DplCms: Update events with external urls
+ ExternalSystem ->> ExternalSystem: Identify existing events by UUID and update them locally
+ ExternalSystem ->> ExternalSystem: Identify local events with UUID which are<br/>not represented in the list and delete them locally
+ deactivate ExternalSystem
+ Note over DplCms,ExternalSystem: Time passes
+ EventParticipant -->> DplCms: View event
+ EventParticipant -->> DplCms: Purchase ticket
+ DplCms -->> EventParticipant: Refer to external url
+ EventParticipant -->> ExternalSystem: Purchase ticket
+ activate ExternalSystem
+ ExternalSystem -->> EventParticipant: Ticket
+ ExternalSystem ->> DplCms: Update event with state e.g. "sold out"
+ deactivate ExternalSystem
+
+
+An external system which intends to integrate with events is setup in the same +way as library staff. It is represented by a Drupal user and must be assigned +an appropriate username, password and role by a local administrator for the +library. This information must be communicated to the external system through +other secure means.
+The external system must authenticate through HTTP basic auth +using this information when updating events.
+Please read the related ADR for how +we handle API versioning.
+ + + + + + + + + + + + + +We use the Default Content module +to manage example content. Such content is typically used when setting up +development and testing environments.
+All actual example content is stored with the DPL Example Content module.
+Usage of the module in this project is derived from the official documentation.
+dpl_example_content.info.yml
filedrush default-content:export-module dpl_example_content
web/modules/custom/dpl_example_content
drush default-content:export-module dpl_example_content
web/modules/custom/dpl_example_content
The documentation in this folder describes how to develop DPL CMS.
+The focus of the documentation is to inform developers of how to develop the +CMS, and give some background behind the various architectural choices.
+The documentation falls into two categories:
+The Markdown-files in this +directory document the system as it. Eg. you can read about how to add a new +entry to the core configuration.
+The ./architecture folder contains our +Architectural Decision Records +that describes the reasoning behind key architecture decisions. Consult these +records if you need background on some part of the CMS, or plan on making any +modifications to the architecture.
+As for the remaining files and directories
+task
to list available tasks.We strive to keep the diagrams and illustrations used in the documentation as +maintainable as possible. A big part of this is our use of programmatic +diagramming via PlantUML and Open Source based +manual diagramming via diagrams.net (formerly +known as draw.io).
+When a change has been made to a *.puml
or *.drawio
file, you should
+re-render the diagrams using the command task render
and commit the result.
We use the Lagoon application delivery platform to +host environments for different stages of the DPL CMS project. Our Lagoon +installation is managed by the DPL Platform project.
+One such type of environment is pull request environments. +These environments are automatically created when a developer creates a pull +request with a change against the project and allows developers and project +owners to test the result before the change is accepted.
+Accessing the administration interface for a pull request environment may be +needed to test certain functionalities. This can be achieved in two ways:
+task lagoon:drush:uli
lagoon config add \
+ --lagoon [instance name e.g. "dpl-platform"] \
+ --hostname [host to connect to with SSH] \
+ --port [SSH port] \
+ --graphql [url to GraphQL endpoint] \
+ --ui [url to UI] \
+
The .lagoon.yml
has an environments section where it is possible to control
+various settings.
+On root level you specify the environment you want to address (eg.: main
).
+And on the sub level of that you can define the cron settings.
+The cron settings for the main branch looks (in the moment of this writing)
+like this:
environments:
+ main:
+ cronjobs:
+ - name: drush cron
+ schedule: "M/15 * * * *"
+ command: drush cron
+ service: cli
+
If you want to have cron running on a pull request environment, you have to +make a similar block under the environment name of the PR. +Example: In case you would have a PR with the number #135 it would look +like this:
+environments:
+ pr-135:
+ cronjobs:
+ - name: drush cron
+ schedule: "M/15 * * * *"
+ command: drush cron
+ service: cli
+
This way of making sure cronb is running in the PR environments is +a bit tedious but it follows the way Lagoon is handling it. +A suggested workflow with it could be:
+.lagoon.yml
configuration block connected to the current PR #In order to run local development you need:
+go-task
VIRTUAL_HOST
environment variables for Docker
+ containers. Examples: Dory (OSX) or
+ nginx-proxy
.If you are using MacOS, you can use the standard +"Docker for Mac" +app, however there has been developers that have experienced it acting slow.
+Alternatively, you can use Orbstack, which is a +direct replacement for Docker for Mac, but is optimized to run much faster.
+If you do end up using Docker for Mac, it is +recommended to use VirtioFS on the mounted +volumes in docker-compose, to speed up the containers.
+ +Prerequisites:
+For performance reasons XDebug is disabled by default. It can be enabled +temporarily through a task:
+task dev:enable-xdebug
enter
in the terminal where you enabled XDebug.
+ This will disable XDebugPrerequisites:
+jq
installed locallyRun the following command to retrieve the latest backup of database and files +from a Lagoon project:
+ +Prerequisites:
+The following describes how to first fetch a database-dump and then import the +dump into a running local environment. Be aware that this only gives you the +database, not any files from the site.
+.sql
file.task dev:reset
task dev:restore:database
Prerequisites:
+The following describes how to first fetch a files backup package +and then replace the files in a local environment.
+If you need to get new backup files from the remote site:
+ ++4. Due to a UI bug you need to RELOAD the window and then it should be possible + to download the nginx package.
+ + +Replace files locally:
+.tar.gz
file.task dev:reset
task dev:restore:files
In a development context it is not very handy only +to be able to get the latest version of the main branch of dpl-react.
+So a command has been implemented that downloads the specific version +of the assets and overwrites the existing library.
+You need to specify which branch you need to get the assets from. +The latest HEAD of the given branch is automatically build by Github actions +so you just need to specify the branch you want.
+It is used like this:
+ +Example:
+ + + + + + + + + + + + + + +We use logging to provide visibility into application behavior and related +user activitiesm to supports effective troubleshooting and debugging. +When an error or issue arises, logs provide the context and history to help +identify what went wrong and where.
+We use the logging API provided by Drupal Core +and rely on third party modules to +expose log data to the underlying platform +in a format that is appropriate for consumption by the platform.
+We log the following events:
+The architecture of the system, where self-service actions carried out by +patrons is handled by JavaScript components running in the browser, +means that searching for materials, management of reservations and updating +patron information cannot be logged by the CMS.
+Each logged event contains the following information by default:
+In general sensitive information (such as passwords, CPR-numbers or the like) +must be stripped or obfuscated in the logged data. This is specified by our +coding guidelines. It is the +responsibility of developers to identify such issues during development and +peer review.
+The architecture of the system severely limits the access to sensitive data +during the execution of the project and thus reduces the general risk.
+The system uses the eight log severities specified by PHP-FIG PSR-3 +and Syslog RFC5424 as provided +by the Drupal logging API.
+Events logged with the severity error or higher are monitored at the platform +level and should be used with this in mind. Note that Drupal will log unchecked +exceptions as this level by default.
+ + + + + + + + + + + + + +A release of dpl-cms can be build by pushing a tag that matches the following +pattern:
+ +The actual release is performed by the Publish source
Github action which
+invokes task source:deploy
which in turn uses the tasks source:build
and
+source:push
to build and publish the release.
Using the action should be the preferred choice for building and publishing
+releases, but should you need to - it is possible to run the task manually
+given you have the necessary permissions for pushing the resulting source-image.
+Should you only need to produce the image, but not push it the task you can opt
+for just invoking the source:build
task.
You can override the name of the built image and/or the destination registry
+temporarily by providing a number of environment variables (see the
+Taskfile). To permanently change these configurations, eg. in
+a fork, change the defaults directly in the Taskfile.yml
.
We manage translations as a part of the codebase using .po
translation files.
+Consequently translations must be part of either official or local translations
+to take effect on the individual site.
DPL CMS is configured to use English as the master language but is configured +to use Danish for all users through language negotiation. This allows us to +follow a process where English is the default for the codebase but actual usage +of the system is in Danish.
+To make the "translation traffic" work following components are being used:
+.po
files in git with
+ translatable strings and translations*.config.po
file*.po
and *.config.po
into a *.combined.po
file*.combined.po
is split into two files: *.po
and *.config.po
*.po
files are published to GitHub Pages.po
files on GitHub.po
+ files can be consumed.The following diagram show how these systems interact to support the flow of +from introducing a new translateable string in the codebase to DPL CMS consuming +an updated translation with said string.
+case
+sequenceDiagram
+ Actor Translator
+ Actor Developer
+ Developer ->> Developer: Open pull request with new translatable string
+ Developer ->> GitHubActions: Merge pull request into develop
+ GitHubActions ->> GitHubActions: Scan codebase and write strings to .po file
+ GitHubActions ->> GitHubActions: Fill .po file with existing translations
+%% <!-- markdownlint-disable-next-line MD013 -->
+ Note over GitHubActions,GitHubActions: If config translations<br/>are available<br/>they are used<br/>otherwise empty strings
+%% <!-- markdownlint-disable-next-line MD013 -->
+ GitHubActions ->> GitHubActions: Exports configuration translations into a .config.po file
+%% <!-- markdownlint-disable-next-line MD013 -->
+ GitHubActions ->> GitHubActions: The two .po files are merged together into a .combined.po file
+ GitHubActions ->> GitHub: Commit combined.po file with updated strings
+ GitHubActions ->> POEditor: Call webhook
+ POEditor ->> GitHub: Fetch updated combined.po file
+ POEditor ->> POEditor: Synchronize translations with latest strings and translations
+ Translator ->> POEditor: Translate strings
+ Translator ->> POEditor: Export strings to GitHub
+ POEditor ->> GitHub: Commit combined.po file with updated translations to develop
+ GitHub ->> GitHub: .combined.po is split into two files: .po and .config.po
+ GitHub ->> GitHub: All the po files are published to Github Pages
+ DplCms ->> GitHub: Fetch .po file with latest translations
+ DplCms ->> DplCms: Import updated translations
+ DplCms ->> GitHub: Import config.po file with latest configuration translations
+dpl-cms
projectdrush locale-check
drush locale-update
Run drush dpl_po:import-remote-config-po [LANGUAGE_CODE] [CONFIGURATION_PO_FIL_EXTERNAL_URL]
Example:
+ + + + + + + + + + + + + + +This folder contains SVG icons that are used in the design system. When using +icons from this folder, please follow the guidelines below:
+Icons located in public/ folder is the current source of truth. All icons
+will be placed in this folder. Ensure that icons placed gere are using the
+class
attribute, and not className
for SVG element.
For icons used in React components that require class modifications or
+similar, first create the icon in public/icons, and then manually copy the SVG
+from the public/icons
folder and make any necessary changes, i.e replacing
+the class
with className
or other modifications.
Currently, there is no automated solution for this process that ensures +consistency for this. This may be changed in the future.
+ + + + + + + + + + + + + +In the work of trying to improve the performance of the search results +we needed a way to fill the viewport with a simulated interface in order to:
+We decided to implement skeleton screens when loading data. The skeleton screens +are rendered in pure css. +The css classes are coming from the library: skeleton-screen-css
+The library is very small and based on simple css rules, so we could have +considered replicating it in our own design system or make something similar. +But by using the open source library we are ensured, to a certain extent, +that the code is being maintained, corrected and evolves as time goes by.
+We could also have chosen to use images or GIF's to render the screens. +But by using the simple toolbox of skeleton-screen-css we should be able +to make screens for all the different use cases in the different apps.
+It is now possible, with a limited amount of work, to construct skeleton screens +in the loading state of the various user interfaces.
+Because we use library where skeletons are implemented purely in CSS +we also provide a solution which can be consumed in any technology +already using the design system without any additional dependencies, +client side or server side.
+Because we want to use existing styling setup in conjunction +with the Skeleton Screen Classes we sometimes need to ignore the existing +BEM rules that we normally comply to. +See eg. the search result styling.
+ + + + + + + + + + + + + +There are various types of forms within the project, and it is always a dilemma +as to whether to write specific styling per form, or to create a common set of +base classes.
+We have decided to create a set of default classes
to be used when building
+different kinds of forms, as to not create a large amount of location that
+contain form styling. Considering the forms within the project all look very
+similar/consist of elements that look the same, it will be an advantage to have
+a centralized place to expand/apply future changes to.
As we follow the BEM class structure, the block is called dpl-form, which can be expanded with elements and modifiers.
We considered writing new classes every time we introduced a new form, however, +this seemed like the inferior option. If a specific form element was to change +styling in the future, we would have to adjust all of the specific instances, +instead of having a singular definition. And in case a specific instance needs +to adopt a different styling, it can be achieved by creating a specific class +fot that very purpose.
+As per this decision, we expect introduction of new form elements to be styled
+expanding the current dpl-form
class.
This currently has an exception in form of form inputs - these have been styled
+a long time ago and use the class dpl-input
.
Here is the link to our form css file.
+ + + + + + + + + + + + + +DPL Design System is a library of UI components that should be used as a common +base system for "Danmarks Biblioteker" / "Det Digitale Folkebibliotek". The +design is implemented +with Storybook +/ React and is output with HTML markup and css-classes +through an addon in Storybook.
+The codebase follows the naming that designers have used in Figma closely +to ensure consistency.
+This project comes with go-task and docker +compose, hence the requirements are limited to having docker install and tasks.
+This project can be used outside docker with the following requirements:
+node 16
yarn
Check in the terminal which versions you have installed with node -v
.
Use the tasks defined in Taskfile
to run the project:
Use the node package manager to install project dependencies:
+ +To start the docker compose setup in development simple use the start
task:
To see the output from the compile process and start of storybook:
+ +Use task
and tabulator key in the terminal to see the other predefined tasks:
To start developing run:
+ +Components and CSS will be automatically recompiled when making changes in the +source code.
+The project is available in two ways and should be consumed accordingly:
+dist.zip
file attached to a release for this repositoryBoth releases contain the built assets of the project: JavaScript files, CSS +styles and icons.
+You can find the HTML output for a given story under the HTML tab inside +storybook.
+The GitHub NPM package registry requires authentication if you are to access +packages there.
+Consequently, if you want to use the design system as an NPM package or if you +use a project that depends on the design system as an NPM package you must +authenticate:
+repo
and read:packages
npm login --registry=https://npm.pkg.github.com
> Username: [Your GitHub username]
+> Password: [Your GitHub token]
+> Email: [An email address used with your GitHub account]
+
Note that you will need to reauthenticate when your personal access token +expires.
+The project is automatically built and deployed +on pushes to every branch and every tag and the result is available as releases +which support both types of usage. This applies for the original +repository on GitHub and all GitHub forks.
+You can follow the status of deployments in the Actions list for the repository +on GitHub. +The action logs also contain additional details regarding the contents and +publication of each release. If using a fork then deployment actions can be +seen on the corresponding list.
+In general consuming projects should prefer tagged releases as they are stable +proper releases.
+During development where the design system is being updated in parallel with +the implementation of a consuming project it may be advantageous to use a +release tagging a branch.
+Run the following to publish a tag and create a release:
+ +In the consuming project update usage to the new release:
+ +Find the release for the tag on the releases page on GitHub
+and download the dist.zip
file from there and use it as needed in the
+consuming project.
The project automatically creates a release for each branch.
+Example: Pushing a commit to a new branch feature/reservation-modal
will
+create the following parts:
release-feature/reservation-modal
. A tag is needed
+ to create a GitHub release.feature-reservation-modal
.
+ Special characters like /
are not supported by npm tags and are converted
+ to -
.Updating the branch will update all parts accordingly.
+In the consuming project update usage to the new release:
+ +If your release belongs to a fork you can use aliasing +to point to the release of the package in the npm repository for the fork:
+npm config set @my-fork:registry=https://npm.pkg.github.com
+npm install @danskernesdigitalebibliotek/dpl-design-system@npm:@my-fork/dpl-design-system@feature-reservation-modal
+
This will update your package.json
and lock files accordingly. Note that
+branch releases use temporary versions in the format 0.0.0-[GIT-SHA]
and you
+may in practice see these referenced in both files.
If you push new code to the branch you have to update the version used in the +consuming project:
+ +Aliasing, +repository configuration and +updating installed packages +are also supported by Yarn.
+Find the release for the branch on the releases page on GitHub
+and download the dist.zip
file from there and use it as needed in the
+consuming project.
If your branch belongs to a fork then you can find the release on the releases +page for the fork.
+Repeat the process if you push new code to the branch.
+Spin up storybook by running this command in the terminal:
+ +When storybook is ready it automatically opens up in a browser with the +interface ready to use.
+We are using Chromatic for visual test. You can access the dashboard
+under the danskernesdigitalebibliotek
(organisation) dpl-design-system
+(project).
https://www.chromatic.com/builds?appId=616ffdab9acbf5003ad5fd2b
+You can deploy a version locally to Chromatic by running:
+ +Make sure to set the CHROMATIC_PROJECT_TOKEN
environment variable is available
+in your shell context. You can access the token from:
https://www.chromatic.com/manage?appId=616ffdab9acbf5003ad5fd2b&view=configure
+Storybook is an +open source tool for building UI components and pages in isolation from your +app's business logic, data, and context. Storybook helps you document components +for reuse and automatically visually test your components to prevent bugs. It +promotes the component-driven process and agile +development.
+It is possible to extend Storybook with an ecosystem of addons that help you do +things like fine-tune responsive layouts or verify accessibility.
+The Storybook interface is simple and intuitive to use. Browse the project's +stories now by navigating to them in the sidebar.
+The stories are placed in a flat structure, where developers should not spend +time thinking of structure, since we want to keep all parts of the system under +a heading called Library. This Library is then dividid in folders where common +parts are kept together.
+To expose to the user how we think these parts stitch together for example +for the new website, we have a heading called Blocks, to resemble what cms +blocks a user can expect to find when building pages in the choosen CMS.
+This could replicate in to mobile applications, newsletters etc. all pulling +parts from the Library.
+Each story has a corresponding .stories
file. View their code in the
+src/stories
directory to learn how they work.
+The stories
file is used to add the component to the Storybook interface via
+the title. Start the title with "Library" or "Blocks" and use / to
+divide into folders fx. Library / Buttons / Button
Storybook ships with some essential +pre-installed addons +to power the core Storybook experience.
+ +There are many other helpful addons to customise the usage and experience. +Additional addons used for this project:
+HTML / storybook-addon-html: + This addon is used to display compiled HTML markup for each story and make it + easier for developers to grab the code. Because we are developing with React, + it is necessary to be able to show the HTML markup with the css-classes to + make it easier for other developers that will implement it in the future. + If a story has controls the HTML markup changes according to the controls that + are set.
+Designs / storybook-addon-designs: + This addon is used to embed Figma in the addon panel for a better + design-development workflow.
+A11Y: + This addon is used to check the accessibility of the components.
+All the addons can be found in storybook/main.js
directory.
To display some components (fx Colors, Spacing) in a more presentable way, we
+are using some "internal" css-classes which can be found in the
+styles/internal.scss
file. All css-classes with "internal" in the front should
+therefore be ignored in the HTML markup.
This documentation provides detailed information on the CSS/SCSS standards for +the 'formidling' project. It covers the usage of mixins for layout and spacing, +as well as guidelines for responsive design.
+This document and its standards apply specifically to elements of the +'formidling' project starting from December 1, 2023, and onwards. It has been +created to aligns with the recent developments in the project and is not +retroactively applied to earlier components or structures.
+Layout related SCSS is defined in the variables.layout.scss
file.
If anything in variables.layout.scss
is modified or extended, make sure you
+include your changes in this documentation.
The file defines several variables for managing layout and spacing:
+$layout__max-width--*
: These variables store the maximum width values
+aligned with breakpoints.block__max-width--*
: These variables store the maximum width values
+between block (paragraph) elements.$layout__edge-spacing
: This variable stores the edge spacing (padding)
+value for containers.All components that require a max-width should use the block__max-width--*
+variables along with layout-container.
The file defines two mixins:
+layout-container($max-width, $padding)
: This mixin sets the maximum width,
+ and padding of an element, as well as centering them.
block-spacing($modifier)
: This manages the vertical margins of
+elements, allowing for consistent spacing throughout the project. Use $modifer
+to create negative/sibling styles.
Vertical padding is not a part of layout.scss. Use regular $spacing
+variables for that.
block-spacing($modifier)
is an approach use for the CMS, where all
+paragraphs are rendered with a wrapper that will automatically add necessary
+spacing between the components. Any component that is not a paragraph, should
+therefore follow this approach by including the mixin @mixin block-spacing
The $modfier
is currently a either
sibling
used for add a -1px to remove borders between sibling elements.negative
used for elements that require -margin instead.Here are some examples of how to use these mixins, and utility classes.
+// Using mixins in your BEM-named parent container.
+.your-BEM-component-name {
+ @include layout-container;
+
+ @include media-query__small() {
+ // Applying new edge spacing (Padding) using $spacings / other.
+ @include layout-container($padding: $s-xl);
+ }
+
+ @include media-query__large() {
+ // Removing max-width & applying specific padding from $spacings / other
+ @include layout-container($max-width: 0, $padding: $s-md);
+ }
+}
+
In December 2023, we have aimed to streamline the way we write SCSS. +Some of these rules have not been applied on previous code, but moving forward, +this is what we aim to do.
+Assuming we have a Counter block:
+counter.scss
.counter__title
✅&__title
❌ (&__
should be avoided, to avoid massive indention.).counter-title
❌ (Must start with .FILE-NAME__
).counter__title__text
❌ (Only one level)Sometimes you'll want to add variants to CSS-only classes. This can be done
+using modifier classes - e.g. .counter--large
, .counter__title--large
.
+These classes must not be set alone. E.g. .counter__title--large
must not
+exist on an element without also having .counter__title
.
Shared tooling is saved in src/styles/scss/tools, +NOT in individual stories.
+Typography is defined in
+src/styles/scss/tools/variables.typography.scss.
+These variables, all starting with $typo__
, can be used, using a mixin,
+@include typography($typo__h2);
in stories.
+Generally speaking, font styling should be avoided directly in stories, rather
+adding new variants in the variables.typography.scss
file.
+This way, we can better keep track of what is available, and avoid duplicate
+styling in the future.
In the future, we want to apply these rules to old code too. Until then, +the old classes are supported using the files +in src/styles/scss/legacy.
+These classes should not be used in new components
+ + + + + + + + + + + + + +The Danish Libraries needed a platform for hosting a large number of Drupal +installations. As it was unclear exactly how to build such a platform and how +best to fulfill a number of requirements, a Proof Of Concept project was +initiated to determine whether to use an existing solution or build a platform +from scratch.
+After an evaluation, Lagoon was chosen.
+The main factors behind the decision to use Lagoon where:
+When using and integrating with Lagoon we should strive to
+We do this to keep true to the initial thought behind choosing Lagoon as a +platform that gives us a lot of functionality for a (comparatively) small +investment.
+The main alternative that was evaluated was to build a platform from scratch. +While this may have lead to a more customized solution that more closely matched +any requirements the libraries may have, it also required a very large +investment would require a large ongoing investment to keep the platform +maintained and updated.
+We could also choose to fork Lagoon, and start making heavy modifications to the +platform to end up with a solution customized for our needs. The downsides of +this approach has already been outlined.
+ + + + + + + + + + + + + ++The platform is required to be able to handle an estimated 275.000 page-views +per day spread out over 100 websites. A visit to a website that causes the +browser to request a single html-document followed by a number of assets is only +counted as a single page-view.
+On a given day about half of the page-views to be made by an authenticated +user. We further more expect the busiest site receive about 12% of the traffic.
+Given these numbers, we can make some estimates of the expected average load. +To stay fairly conservative we will still assume that about 50% of the traffic +is anonymous and can thus be cached by Varnish, but we will assume that all +all sites gets traffic as if they where the most busy site on the platform (12%).
+12% of 275.000 requests gives us a an average of 33.000 requests. To keep to the +conservative side, we concentrate the load to a period of 8 hours. We then end +up with roughly 1 page-view pr. second.
+The platform is hosted on a Kubernetes cluster on which Lagoon is installed. +As of late 2021, Lagoons approach to handling rightsizing of PHP- and in +particular Drupal-applications is based on a number factors:
+The consequence of the above is that we can pack a lot more workload onto a single +node than what would be expected if you only look at the theoretical maximum +resource requirements.
+Lagoon sets its resource-requests based on a helm values default in the
+kubectl-build-deploy-dind
image. The default is typically 10Mi pr. container
+which can be seen in the nginx-php chart which runs a php and
+nginx container. Lagoon configures php-fpm to allow up to 50 children
+and allows php to use up to 400Mi memory.
Combining these numbers we can see that a site that is scheduled as if it only +uses 20 Megabytes of memory, can in fact take up to 20 Gigabytes. The main thing +that keeps this from happening in practice is a a combination of the above +assumptions. No node will have more than a limited number of pods, and on a +given second, no site will have nearly as many inbound requests as it could have.
+Lagoon is a very large and complex solution. Any modification to Lagoon will +need to be well tested, and maintained going forward. With this in mind, we +should always strive to use Lagoon as it is, unless the alternative is too +costly or problematic.
+Based on real-live operations feedback from Amazee (creators of Lagoon) and the +context outline above we will
+As Lagoon does not give us any manual control over rightsizing out of the box, +all alternatives involves modifying Lagoon.
+We've inspected the process Lagoon uses to deploy workloads, and determined that +it would be possible to alter the defaults used without too many modifications.
+The build-deploy-docker-compose.sh
script that renders the manifests that
+describes a sites workloads via Helm includes a
+service-specific values-file. This file can be used to
+modify the defaults for the Helm chart. By creating custom container-image for
+the build-process based on the upstream Lagoon build image, we can deliver our
+own version of this image.
As an example, the following Dockerfile will add a custom values file for the +redis service.
+FROM docker.io/uselagoon/kubectl-build-deploy-dind:latest
+COPY redis-values.yaml /kubectl-build-deploy/
+
Given the following redis-values.yaml
The Redis deployment would request 100Mi instead of the previous default of 10Mi.
+Building upon the modification described in the previous chapter, we could go +even further and modify the build-script itself. By inspecting project variables +we could have the build-script pass in eg. a configurable value for +replicaCount for a pod. This would allow us to introduce a +small/medium/large concept for sites. This could be taken even further to eg. +introduce whole new services into Lagoon.
+This could lead to problems for sites that requires a lot of resources, but +given the expected average load, we do not expect this to be a problem even if +a site receives an order of magnitude more traffic than the average.
+The approach to rightsizing may also be a bad fit if we see a high concentration +of "non-spiky" workloads. We know for instance that Redis and in particular +Varnish is likely to use a close to constant amount of memory. Should a lot of +Redis and Varnish pods end up on the same node, evictions are very likely to +occur.
+The best way to handle these potential situations is to be knowledgeable about +how to operate Kubernetes and Lagoon, and to monitor the workloads as they are +in use.
+ + + + + + + + + + + + + +There has been a wish for a functionality that alerts administrators if certain +system values have gone beyond defined thresholds rules.
+We have decided to use alertmanager +that is a part of the Prometheus package that is +already used for monitoring the cluster.
+Lagoon requires a site to be deployed from at Git repository containing a +.lagoon.yml and docker-compose.yml +A potential logical consequence of this is that we require a Git repository pr +site we want to deploy, and that we require that repository to maintain those +two files.
+Administering the creation and maintenance of 100+ Git repositories can not be +done manually with risk of inconsistency and errors. The industry best practice +for administering large-scale infrastructure is to follow a declarative +Infrastructure As Code(IoC) +pattern. By keeping the approach declarative it is much easier for automation +to reason about the intended state of the system.
+Further more, in a standard Lagoon setup, Lagoon is connected to the "live" +application repository that contains the source-code you wish to deploy. In this +approach Lagoon will just deploy whatever the HEAD of a given branch points. In +our case, we perform the build of a sites source release separate from deploying +which means the sites repository needs to be updated with a release-version +whenever we wish it to be updated. This is not a problem for a small number of +sites - you can just update the repository directly - but for a large set of +sites that you may wish to administer in bulk - keeping track of which version +is used where becomes a challenge. This is yet another good case for declarative +configuration: Instead of modifying individual repositories by hand to deploy, +we would much rather just declare that a set of sites should be on a specific +release and then let automation take over.
+While there are no authoritative discussion of imperative vs declarative IoC, +the following quote from an OVH Tech Blog +summarizes the current consensus in the industry pretty well:
+++In summary declarative infrastructure tools like Terraform and CloudFormation +offer a much lower overhead to create powerful infrastructure definitions that +can grow to a massive scale with minimal overheads. The complexities of hierarchy, +timing, and resource updates are handled by the underlying implementation so +you can focus on defining what you want rather than how to do it.
+The additional power and control offered by imperative style languages can be +a big draw but they also move a lot of the responsibility and effort onto the +developer, be careful when choosing to take this approach.
+
We administer the deployment to and the Lagoon configuration of a library site
+in a repository pr. library. The repositories are provisioned via Terraform that
+reads in a central sites.yaml
file. The same file is used as input for the
+automated deployment process which renderers the various files contained in the
+repository including a reference to which release of DPL-CMS Lagoon should use.
It is still possible to create and maintain sites on Lagoon independent of this +approach. We can for instance create a separate project for the dpl-cms +repository to support core development.
+Accepted
+We could have run each site as a branch of off a single large repository. +This was rejected as a possibility as it would have made the administration of +access to a given libraries deployed revision hard to control. By using individual +repositories we have the option of grating an outside developer access to a +full repository without affecting any other.
+ + + + + + + + + + + + + +We have previously established (in ADR 004)
+that we want to take a declarative approach to site management. The previous
+ADR focuses on using a single declarative file, sites.yaml
, for driving
+Github repositories that can be used by Lagoon to run sites.
The considerations from that ADR apply equally here. We have identified more +opportunities to use a declarative approach, which will simultaneously +significantly simplify the work required for platform maintainers when managing +sites.
+Specifically, runbooks with several steps to effectuate a new deployment are a +likely source of errors. The same can be said of having to manually run +commands to make changes in the platform.
+Every time we run a command that is not documented in the source code in the
+platform main
branch, it becomes less clear what the state of the platform is.
Conversely, every time a change is made in the main
branch that has not yet
+been executed, it becomes less clear what the state of the platform is.
We continuously strive towards making the main
branch in the dpl-platform repo
+the single source of truth for what the state of the platform should be. The
+repository becomes the declaration of the entire state.
This leads to at least two concrete steps:
+We will automate synchronizing state in the platform-repo with the actual
+ platform state. This means using sites.yaml
to declare the expected state
+ in Lagoon (e.g. which projects and environments are created), not just the
+ Github repos. This leaves the deployment process less error prone.
We will automate running the synchronization step every time code is checked
+ into the main
branch. This means state divergence between the platform repo
+ declaration and reality is minimized.
It will still be possible for dpl-cms
to maintain its own area of state
+in the platform, but anything declared in dpl-platform
will be much more
+likely to be the actual state in the platform.
Proposed
+ + + + + + + + + + + + + +We loosely follow the guidelines for ADRs described by Michael Nygard.
+A record should attempt to capture the situation that led to the need for a +discrete choice to be made, and then proceed to describe the core of the +decision, its status and the consequences of the decision.
+To summaries a ADR could contain the following sections (quoted from the above +article):
+Title: These documents have names that are short noun phrases. For example, + "ADR 1: Deployment on Ruby on Rails 3.0.10" or "ADR 9: LDAP for Multitenant Integration"
+Context: This section describes the forces at play, including technological + , political, social, and project local. These forces are probably in tension, + and should be called out as such. The language in this section is value-neutral. + It is simply describing facts.
+Decision: This section describes our response to these forces. It is stated + in full sentences, with active voice. "We will …"
+Status: A decision may be "proposed" if the project stakeholders haven't + agreed with it yet, or "accepted" once it is agreed. If a later ADR changes + or reverses a decision, it may be marked as "deprecated" or "superseded" with + a reference to its replacement.
+Consequences: This section describes the resulting context, after applying + the decision. All consequences should be listed here, not just the "positive" + ones. A particular decision may have positive, negative, and neutral consequences, + but all of them affect the team and project in the future.
+We use the alertmanager which automatically +ties to the metrics of Prometheus but in order to make it work the configuration +and rules need to be setup.
+The configuration is stored in a secret:
+ +In order to update the configuration you need to get the secret resource definition
+yaml output and retrieve the data.alertmanager.yaml
property.
You need to base64 decode the value, update configuration with SMTP settings, +receivers and so forth.
+It is possible to set up various rules(thresholds), both on cluster level and for +separate containers and namespaces.
+Here is a site with examples of rules to get an idea of the +possibilities.
+We have tested the setup by making a configuration looking like this:
+Get the configuration form the secret as described above.
+Change it with smtp settings in order to be able to debug the alerts:
+global:
+ resolve_timeout: 5m
+ smtp_smarthost: smtp.gmail.com:587
+ smtp_from: xxx@xxx.xx
+ smtp_auth_username: xxx@xxx.xx
+ smtp_auth_password: xxxx
+receivers:
+- name: default
+- name: email-notification
+ email_configs:
+ - to: xxx@xxx.xx
+route:
+ group_by:
+ - namespace
+ group_interval: 5m
+ group_wait: 30s
+ receiver: default
+ repeat_interval: 12h
+ routes:
+ - match:
+ alertname: testing
+ receiver: email-notification
+ - match:
+ severity: critical
+ receiver: email-notification
+
Base64 encode the configuration and update the secret with the new configuration +hash.
+Find the cluster ip of the alertmanager service running (the service name can +possibly vary):
+ +And then run a curl command in the cluster (you need to find the IP o):
+# 1.
+kubectl run -i --rm --tty debug --image=curlimages/curl --restart=Never -- sh
+
+# 2
+curl -XPOST http://[ALERTMANAGER_SERVICE_CLUSTER_IP]:9093/api/v1/alerts \
+ -d '[{"status": "firing","labels": {"alertname": "testing","service": "curl",\
+ "severity": "critical","instance": "0"},"annotations": {"summary": \
+ "This is a summary","description": "This is a description."},"generatorURL": \
+ "http://prometheus.int.example.net/<generating_expression>",\
+ "startsAt": "2020-07-22T01:05:38+00:00"}]'
+
The DPL-CMS Drupal sites utilizes a multi-tier caching strategy. HTTP responses +are cached by Varnish and Drupal caches its various internal data-structures +in a Redis key/value store.
+Varnish will cache any http responses that fulfills the following requirements
+Refer the Lagoon drupal.vcl, +docs.lagoon.sh documentation on the Varnish service +and the varnish-drupal image +for the specifics on the service.
+Refer to the caching documentation in dpl-cms +for specifics on how DPL-CMS is integrated with Varnish.
+DPL-CMS is configured to use Redis as the backend for its core cache as an +alternative to the default use of the sql-database as backend. This ensures that + a busy site does not overload the shared mariadb-server.
+ + + + + + + + + + + + + +A DPL Platform environment consists of a range of infrastructure components +on top of which we run a managed Kubernetes instance into with we install a +number of software product. One of these is Lagoon +which gives us a platform for hosting library sites.
+An environment is created in two separate stages. First all required +infrastructure resources are provisioned, then a semi-automated deployment +process carried out which configures all the various software-components that +makes up an environment. Consult the relevant runbooks and the +DPL Platform Infrastructure documents for the +guides on how to perform the actual installation.
+This document describes all the parts that makes up a platform environment +raging from the infrastructure to the sites.
+All resources of a Platform environment is contained in a single Azure Resource +Group. The resources are provisioned via a Terraform setup +that keeps its resources in a separate resource group.
+The overview of current platform environments along with the various urls and +a summary of its primary configurations can be found the +Current Platform environments document.
+A platform environment uses the following Azure infrastructure resources.
+ +The Azure Kubernetes Service in return creates its own resource group that +contains a number of resources that are automatically managed by the AKS service. +AKS also has a managed control-plane component that is mostly invisible to us. +It has a separate managed identity which we need to grant access to any +additional infrastructure-resources outside the "MC" resource-group that we +need AKS to manage.
+The Platform consists of a number of software components deployed into the +AKS cluster. The components are generally installed via Helm, +and their configuration controlled via values-files.
+Essential configurations such as the urls for the site can be found in the wiki
+The following sections will describe the overall role of the component and how +it integrates with other components. For more details on how the component is +configured, consult the corresponding values-file for the component found in +the individual environments configuration +folder.
+ +Lagoon is an Open Soured Platform As A Service +created by Amazee. The platform builds on top of a +Kubernetes cluster, and provides features such as automated builds and the +hosting of a large number of sites.
+Kubernetes does not come with an Ingress Controller out of the box. An ingress- +controllers job is to accept traffic approaching the cluster, and route it via +services to pods that has requested ingress traffic.
+We use the widely used Ingress Nginx +Ingress controller.
+Cert Manager allows an administrator specify +a request for a TLS certificate, eg. as a part of an Ingress, and have the +request automatically fulfilled.
+The platform uses a cert-manager configured to handle certificate requests via +Let's Encrypt.
+Prometheus is a time series database used by the platform +to store and index runtime metrics from both the platform itself and the sites +running on the platform.
+Prometheus is configured to scrape and ingest the following sources
+Prometheus is installed via an Operator
+which amongst other things allows us to configure Prometheus and Alertmanager via
+ ServiceMonitor
and AlertmanagerConfig
.
Alertmanager handles +the delivery of alerts produced by Prometheus.
+Grafana provides the graphical user-interface +to Prometheus and Loki. It is configured with a number of data sources via its +values-file, which connects it to Prometheus and Loki.
+Loki stores and indexes logs produced by the pods + running in AKS. Promtail +streams the logs to Loki, and Loki in turn makes the logs available to the +administrator via Grafana.
+Each individual library has a Github repository that describes which sites
+should exist on the platform for the library. The creation of the repository
+and its contents is automated, and controlled by an entry in a sites.yaml
-
+file shared by all sites on the platform.
Consult the following runbooks to see the procedures for:
+ +sites.yaml
is found in infrastructure/environments/<environment>/sites.yaml
.
+The file contains a single map, where the configuration of the
+individual sites are contained under the property sites.<unique site key>
, eg.
yaml
+sites:
+ # Site objects are indexed by a unique key that must be a valid lagoon, and
+ # github project name. That is, alphanumeric and dashes.
+ core-test1:
+ name: "Core test 1"
+ description: "Core test site no. 1"
+ # releaseImageRepository and releaseImageName describes where to pull the
+ # container image a release from.
+ releaseImageRepository: ghcr.io/danskernesdigitalebibliotek
+ releaseImageName: dpl-cms-source
+ # Sites can optionally specify primary and secondary domains.
+ primary-domain: core-test.example.com
+ # Fully configured sites will have a deployment key generated by Lagoon.
+ deploy_key: "ssh-ed25519 <key here>"
+ bib-ros:
+ name: "Roskilde Bibliotek"
+ description: "Webmaster environment for Roskilde Bibliotek"
+ primary-domain: "www.roskildebib.dk"
+ # The secondary domain will redirect to the primary.
+ secondary-domains: ["roskildebib.dk", "www2.roskildebib.dk"]
+ # A series of sites that shares the same image source may choose to reuse
+ # properties via anchors
+ << : *default-release-image-source
Each platform-site is controlled via a GitHub repository. The repositories are +provisioned via Terraform. The following depicts the authorization and control- +flow in use: +
+The configuration of each repository is reconciled each time a site is created,
+Releases of DPL CMS are deployed to sites via the dpladm
+tool. It consults the sites.yaml
file for the environment and performs any
+needed deployment.
We configure all production backups with a backup schedule that ensure that the +site is backed up at least once a day.
+Backups executed by the k8up operator follows a backup +schedule and then uses Restic to perform the backup +itself. The backups are stored in a Azure Blob Container, see the Environment infrastructure +for a depiction of its place in the architecture.
+The backup schedule and retention is configured via the individual sites
+.lagoon.yml
. The file is re-rendered from a template every time the a site is
+deployed. The templates for the different site types can be found as a part
+of dpladm.
Refer to the lagoon documentation on backups +for more general information.
+Refer to any runbooks relevant to backups for operational instructions +on eg. retrieving a backup.
+ + + + + + + + + + + + + +The following guidelines describe best practices for developing code for the DPL +Platform project. The guidelines should help achieve:
+Contributions to the core DPL Platform project will be reviewed by members of the +Core team. These guidelines should inform contributors about what to expect in +such a review. If a review comment cannot be traced back to one of these +guidelines it indicates that the guidelines should be updated to ensure +transparency.
+The project follows the Drupal Coding Standards +and best practices for all parts of the project: PHP, JavaScript and CSS. This +makes the project recognizable for developers with experience from other Drupal +projects. All developers are expected to make themselves familiar with these +standards.
+The following lists significant areas where the project either intentionally +expands or deviates from the official standards or areas which developers should +be especially aware of.
+terraform fmt
Code comments which describe what an implementation does should only be used +for complex implementations usually consisting of multiple loops, conditional +statements etc.
+Inline code comments should focus on why an unusual implementation has been +implemented the way it is. This may include references to such things as +business requirements, odd system behavior or browser inconsistencies.
+Commit messages in the version control system help all developers understand the +current state of the code base, how it has evolved and the context of each +change. This is especially important for a project which is expected to have a +long lifetime.
+Commit messages must follow these guidelines:
+When creating a pull request the pull request description should not contain any +information that is not already available in the commit messages.
+Developers are encouraged to read How to Write a Git Commit Message +by Chris Beams.
+The project aims to automate compliance checks as much as possible using static +code analysis tools. This should make it easier for developers to check +contributions before submitting them for review and thus make the review process +easier.
+The following tools pay a key part here:
+In general all tools must be able to run locally. This allows developers to get +quick feedback on their work.
+Tools which provide automated fixes are preferred. This reduces the burden of +keeping code compliant for developers.
+Code which is to be exempt from these standards must be marked accordingly in +the codebase - usually through inline comments (markdownlint, +ShellCheck). +This must also include a human readable reasoning. This ensures that deviations +do not affect future analysis and the Core project should always pass through +static analysis.
+If there are discrepancies between the automated checks and the standards +defined here then developers are encouraged to point this out so the automated +checks or these standards can be updated accordingly.
+ + + + + + + + + + + + + +This directory contains the documentation of the DPL Platforms architecture and +overall concepts.
+Documentation of how to use the various sub-components of the project can be +found in READMEs in the respective components directory.
+This directory contains the Infrastructure as Code and scripts that are used +for maintaining the infrastructure-component that each platform environment +consists of. A "platform environment" is an umbrella term for the Azure +infrastructure, the Kubernetes cluster, the Lagoon installation and the set of +GitHub environments that makes up a single DPL Platform installation.
+task
to get a list of targets. Must be run from within
+ an instance of DPL shell unless otherwise
+ noted.The environments
directory contains a subdirectory for each platform
+environment. You generally interact with the files and directories within the
+directory to configure the environment. When a modification has been made, it
+is put in to effect by running the appropiate task
:
configuration
: contains the various configurations the
+ applications that are installed on top of the infrastructure requires. These
+ are used by the support:provision:*
tasks.env_repos
contains the Terraform root-module for provisioning GitHub site-
+ environment repositories. The module is run via the env_repos:provision
task.infrastructure
: contains the Terraform root-module used to provision the basic
+ Azure infrastructure components that the platform requires.The module is run
+ via the infra:provision
task.lagoon
: contains Kubernetes manifests and Helm values-files used for installing
+ the Lagoon Core and Remote that is at the heart of a DPL Platform installation.
+ THe module is run via the lagoon:provision:*
tasks.The remaining guides in this document assumes that you work from an instance +of the DPL shell. See the +DPLSH Runbook for a basic introduction +to how to use dplsh.
+The following describes how to set up a whole new platform environment to host + platform sites.
+The easiest way to set up a new environment is to create a new environments/<name>
+directory and copy the contents of an existing environment replacing any
+references to the previous environment with a new value corresponding to the new
+environment. Take note of the various URLs, and make sure to update the
+Current Platform environments
+documentation.
If this is the very first environment, remember to first initialize the Terraform- +setup, see the terraform README.md.
+When you have prepared the environment directory, launch dplsh
and go through
+the following steps to provision the infrastructure:
# We export the variable to simplify the example, you can also specify it inline.
+export DPLPLAT_ENV=dplplat01
+
+# Provision the Azure resources
+task infra:provision
+
+# Create DNS record
+Create an A record in the administration area of your DNS provider.
+Take the terraform output: "ingress_ip" of the former command and create an entry
+like: "*.[DOMAN_NAME].[TLD]": "[ingress_ip]"
+
+# Provision the support software that the Platform relies on
+task support:provision
+
The previous step has established the raw infrastructure and the Kubernetes support +projects that Lagoon needs to function. You can proceed to follow the official +Lagoon installation procedure.
+The execution of the individual steps of the guide has been somewhat automated, +the following describes how to use the automation, make sure to follow along +in the official documentation to understand the steps and some of the +additional actions you have to take.
+# The following must be carried out from within dplsh, launched as described
+# in the previous step including the definition of DPLPLAT_ENV.
+
+# 1. Provision a lagoon core into the cluster.
+task lagoon:provision:core
+
+# 2. Skip the steps in the documentation that speaks about setting up email, as
+# we currently do not support sending emails.
+
+# 3. Setup ssh-keys for the lagoonadmin user
+# Access the Lagoon UI (consult the platform-environments.md for the url) and
+# log in with lagoonadmin + the admin password that can be extracted from a
+# Kubernetes secret:
+kubectl \
+ -o jsonpath="{.data.KEYCLOAK_LAGOON_ADMIN_PASSWORD}" \
+ -n lagoon-core \
+ get secret lagoon-core-keycloak \
+| base64 --decode
+
+# Then go to settings and add the ssh-keys that should be able to access the
+# lagoon admin user. Consider keeping this list short, and instead add
+# additional users with fewer privileges laster.
+
+# 4. If your ssh-key is passphrase-protected we'll need to setup an ssh-agent
+# instance:
+$ eval $(ssh-agent); ssh-add
+
+# 5. Configure the CLI to verify that access (the cli itself has already been
+# installed in dplsh)
+task lagoon:cli:config
+
+# You can now add additional users, this step is currently skipped.
+
+# (6. Install Harbor.)
+# This step has already been performed as a part of the installation of
+# support software.
+
+# 7. Install a Lagoon Remote into the cluster
+task lagoon:provision:remote
+
+# 8. Register the cluster administered by the Remote with Lagoon Core
+# Notice that you must provide a bearer token via the USER_TOKEN environment-
+# variable. The token can be found in $HOME/.lagoon.yml after a successful
+# "lagoon login"
+USER_TOKEN=<token> task lagoon:add:cluster
+
The Lagoon core has now been installed, and the remote registered with it.
+Prerequisites:
+az
). See the section on initial
+ Terraform setup for more details on the requirementsFirst create a new administrative github user and create a new organization +with the user. The administrative user should only be used for administering +the organization via terraform and its credentials kept as safe as possible! The +accounts password can be used as a last resort for gaining access to the account +and will not be stored in Key Vault. Thus, make sure to store the password +somewhere safe, eg. in a password-manager or as a physical printout.
+This requires the infrastructure to have been created as we're going to store +credentials into the azure Key Vault.
+# cd into the infrastructure folder and launch a shell
+(host)$ cd infrastructure
+(host)$ dplsh
+
+# Remaining commands are run from within dplsh
+
+# export the platform environment name.
+# export DPLPLAT_ENV=<name>, eg
+$ export DPLPLAT_ENV=dplplat01
+
+# 1. Create a ssh keypair for the user, eg by running
+# ssh-keygen -t ed25519 -C "<comment>" -f dplplatinfra01_id_ed25519
+# eg.
+$ ssh-keygen -t ed25519 -C "dplplatinfra@0120211014073225" -f dplplatinfra01_id_ed25519
+
+# 2. Then access github and add the public-part of the key to the account
+# 3. Add the key to keyvault under the key name "github-infra-admin-ssh-key"
+# eg.
+$ SECRET_KEY=github-infra-admin-ssh-key SECRET_VALUE=$(cat dplplatinfra01_id_ed25519)\
+ task infra:keyvault:secret:set
+
+# 4. Access GitHub again, and generate a Personal Access Token for the account.
+# The token should
+# - be named after the platform environment (eg. dplplat01-terraform-timestamp)
+# - Have a fairly long expiration - do remember to renew it
+# - Have the following permissions: admin:org, delete_repo, read:packages, repo
+# 5. Add the access token to Key Vault under the name "github-infra-admin-pat"
+# eg.
+$ SECRET_KEY=github-infra-admin-pat SECRET_VALUE=githubtokengoeshere task infra:keyvault:secret:set
+
+# Our tooling can now administer the GitHub organization
+
The Personal Access Token we use for impersonating the administrative GitHub +user needs to be recreated periodically:
+# cd into the infrastructure folder and launch a shell
+(host)$ cd infrastructure
+(host)$ dplsh
+
+# Remaining commands are run from within dplsh
+
+# export the platform environment name.
+# export DPLPLAT_ENV=<name>, eg
+$ export DPLPLAT_ENV=dplplat01
+
+# 1. Access GitHub, and generate a Personal Access Token for the account.
+# The token should
+# - be named after the platform environment (eg. dplplat01-terraform)
+# - Have a fairly long expiration - do remember to renew it
+# - Have the following permissions: admin:org, delete_repo, read:packages, repo
+# 2. Add the access token to Key Vault under the name "github-infra-admin-pat"
+# eg.
+$ SECRET_KEY=github-infra-admin-pat SECRET_VALUE=githubtokengoeshere \
+ task infra:keyvault:secret:set
+
+# 3. Delete the previous token
+
This directory contains the configuration and tooling we use to support our +use of terraform.
+The setup keeps a single terraform-state pr. environment. Each state is kept as +separate blobs in a Azure Storage Account.
+ +Access to the storage account is granted via a Storage Account Key which is +kept in a Azure Key Vault in the same resource-group. The key vault, storage account +and the resource-group that contains these resources are the only resources +that are not provisioned via Terraform.
+The following procedure must be carried out before the first environment can be +created.
+Prerequisites:
+Owner
and
+ Key Vault Administrator
roles to on subscription.Use the scripts/bootstrap-tf.sh
for bootstrapping. After the script has been
+run successfully it outputs instructions for how to set up a terraform module
+that uses the newly created storage-account for state-tracking.
As a final step you must grant any administrative users that are to use the setup +permission to read from the created key vault.
+The setup uses an integration with DNSimple to set a domain name when the
+environments ingress ip has been provisioned. To use this integration first
+obtain a api-key for
+the DNSimple account. Then use scripts/add-dnsimple-apikey.sh
to write it to
+the setups Key Vault and finally add the following section to .dplsh.profile
(
+get the subscription id and key vault name from existing export for ARM_ACCESS_KEY
).
export DNSIMPLE_TOKEN=$(az keyvault secret show --subscription "<subscriptionid>"\
+ --name dnsimple-api-key --vault-name <key vault-name> --query value -o tsv)
+export DNSIMPLE_ACCOUNT="<dnsimple-account-id>"
+
A setup is used to manage a set of environments. We currently have a single that +manages all environments.
+The platform environments share a number of general modules, which are then +used via a number of root-modules set up for each environment.
+Consult the general environment documentation +for descriptions on which resources you can expect to find in an environment and +how they are used.
+Consult the environment overview for an overview of +environments.
+The dpl-platform-environment Terraform module +provisions all resources that are required for a single DPL Platform Environment.
+Inspect variables.tf for a description +of the required module-variables.
+Inspect outputs.tf for a list of outputs.
+Inspect the individual module files for documentation of the resources.
+The following diagram depicts (amongst other things) the provisioned resources. +Consult the platform environment documentation +for more details on the role the various resources plays. +
+The dpl-platform-env-repos Terraform module provisions +the GitHub Git repositories that the platform uses to integrate with Lagoon. Each +site hosted on the platform has a registry.
+Inspect variables.tf for a description +of the required module-variables.
+Inspect outputs.tf for a list of outputs.
+Inspect the individual module files for documentation of the resources.
+The following diagram depicts how the module gets its credentials for accessing +GitHub and what it provisions. +
+ + + + + + + + + + + + + +rg-env-dplplat01
dplplat01.dpl.reload.dk
When you need to gain kubectl
access to a platform-environments Kubernetes cluster.
az
cli (from the host). This likely means az login
+ --tenant TENANT_ID
, where the tenant id is that of "DPL Platform". See Azure
+ Portal > Tenant Properties. The logged in user must have permissions to list
+ cluster credentials.docker
cli which is authenticated against the GitHub Container Registry.
+ The access token used must have the read:packages
scope.dpl-platform/infrastructure
dplsh
export DPLPLAT_ENV=dplplat01
task cluster:auth
Your dplsh session should now be authenticated against the cluster.
+ + + + + + + + + + + + + +When you want to add a "generic" site to the platform. By Generic we mean a site
+stored in a repository that that is Prepared for Lagoon
+and contains a .lagoon.yml
at its root.
The current main example of such as site is dpl-cms +which is used to develop the shared DPL install profile.
+az
cli. The logged in user must have full administrative
+ permissions to the platforms azure infrastructure.DPLPLAT_ENV
set to the platform
+ environment name.The following describes a semi-automated version of "Add a Project" in +the official documentation.
+# From within dplsh:
+
+# Set an environment,
+# export DPLPLAT_ENV=<platform environment name>
+# eg.
+$ export DPLPLAT_ENV=dplplat01
+
+# If your ssh-key is passphrase-protected we'll need to setup an ssh-agent
+# instance:
+$ eval $(ssh-agent); ssh-add
+
+# 1. Add a project
+# PROJECT_NAME=<project name> GIT_URL=<url> task lagoon:project:add
+$ PROJECT_NAME=dpl-cms GIT_URL=git@github.com:danskernesdigitalebibliotek/dpl-cms.git\
+ task lagoon:project:add
+
+# 1.b You can also run lagoon add project manually, consult the documentation linked
+# in the beginning of this section for details.
+
+# 2. Deployment key
+# The project is added, and a deployment key is printed. Copy it and configure
+# the GitHub repository. See the official documentation for examples.
+
+# 3. Webhook
+# Configure Github to post events to Lagoons webhook url.
+# The webhook url for the environment will be
+# https://webhookhandler.lagoon.<environment>.dpl.reload.dk
+# eg for the environment dplplat01
+# https://webhookhandler.lagoon.dplplat01.dpl.reload.dk
+#
+# Refer to the official documentation linked above for an example on how to
+# set up webhooks in github.
+
+# 4. Trigger a deployment manually, this will fail as the repository is empty
+# but will serve to prepare Lagoon for future deployments.
+# lagoon deploy branch -p <project-name> -b <branch>
+$ lagoon deploy branch -p dpl-cms -b main
+
When you want to add a new core-test, editor, webmaster or programmer dpl-cms +site to the platform.
+az
cli. The logged in user must have full administrative
+ permissions to the platforms azure infrastructure.DPLPLAT_ENV
set to the platform
+ environment name.The following sections describes how to
+sites.yaml
sites.yaml
specification to provision Github repositories,
+ create Lagoon projects, tie the two together, and deploy all the
+ relevant environments.Create an entry for the site in sites.yaml
.
Specify a unique site key (its key in the map of sites), name and description. +Leave out the deployment-key, it will be added automatically by the +synchronization script.
+Sample entry (beware that this example be out of sync with the environment you +are operating, so make sure to compare it with existing entries from the +environment)
+sites:
+ bib-rb:
+ name: "Roskilde Bibliotek"
+ description: "Roskilde Bibliotek"
+ primary-domain: "www.roskildebib.dk"
+ secondary-domains: ["roskildebib.dk"]
+ dpl-cms-release: "1.2.3"
+ << : *default-release-image-source
+
The last entry merges in a default set of properties for the source of release- +images. If the site is on the "programmer" plan, specify a custom set of +properties like so:
+sites:
+ bib-rb:
+ name: "Roskilde Bibliotek"
+ description: "Roskilde Bibliotek"
+ primary-domain: "www.roskildebib.dk"
+ secondary-domains: ["roskildebib.dk"]
+ dpl-cms-release: "1.2.3"
+ # Github package registry used as an example here, but any registry will
+ # work.
+ releaseImageRepository: ghcr.io/some-github-org
+ releaseImageName: some-image-name
+
Be aware that the referenced images needs to be publicly available as Lagoon +currently only authenticates against ghcr.io.
+Sites on the "webmaster" plan must have the field plan
set to "webmaster"
+as this indicates that an environment for testing custom Drupal modules should
+also be deployed. For example:
sites:
+ bib-rb:
+ name: "Roskilde Bibliotek"
+ description: "Roskilde Bibliotek"
+ primary-domain: "www.roskildebib.dk"
+ secondary-domains: ["roskildebib.dk"]
+ dpl-cms-release: "1.2.3"
+ plan: webmaster
+ << : *default-release-image-source
+
The field plan
defaults to standard
.
Now you are ready to sync the site state.
+You have made a single additive change to the sites.yaml
file. It is
+important to ensure that your branch is otherwise up-to-date with main
,
+as the state you currently have in sites.yaml
is what will be ensured exists
+in the platform.
You may end up undoing changes that other people have done, if you don't have
+their changes to sites.yaml
in your branch.
Prerequisites:
+From within dplsh
run the sites:sync
task to sync the site state in
+sites.yaml, creating your new site
You may be prompted to confirm Terraform plan execution and approve other +critical steps. Read and consider these messages carefully and ensure you are +not needlessly changing other sites.
+The synchronization process:
+plan: "webmaster"
also get
+ a moduletest
branch for testing custom Drupal modules)If no other changes have been made to sites.yaml
, the result is that your new
+site is created and deployed to all relevant environments.
When for example the kubectl
or other dependencies needs updating
IMAGE_URL=dplsh IMAGE_TAG=someTagName
+ task build
DPLSH_IMAGE=dplsh:local ./dplsh
and running
+ what ever commands need to be run to test that the change has the desired effectmain
main
. The tag should look like this: dplsh-x.x.x
.
+ (If in doubt about what version to bump to; read this: https://semver.org/)/infrastructure
directory and
+ run ../tools/dplsh/dplsh.sh --update
.You are done and have the newest version of DPLSH on your machine.
+ + + + + + + + + + + + + +When you want to use the Lagoon API via the CLI. You can connect from the DPL +Shell, or from a local installation of the CLI.
+Using the DPL Shell requires administrative privileges to the infrastructure while +a local CLI may connect using only the ssh-key associated to a Lagoon user.
+This runbook documents both cases, as well as how an administrator can extract the +basic connection details a standard user needs to connect to the Lagoon installation.
+You can skip this step and go to Configure your local lagoon cli) +if your environment is already in Current Platform environments +and you just want to have a local lagoon cli working.
+If it is missing, go through the steps below and update the document if you have +access, or ask someone who has.
+# Launch dplsh.
+$ cd infrastructure
+$ dplsh
+
+# 1. Set an environment,
+# export DPLPLAT_ENV=<platform environment name>
+# eg.
+$ export DPLPLAT_ENV=dplplat01
+
+# 2. Authenticate against AKS, needed by infrastructure and Lagoon tasks
+$ task cluster:auth
+
+# 3. Generate the Lagoon CLI configuration and authenticate
+# The Lagoon CLI is authenticated via ssh-keys. DPLSH will mount your .ssh
+# folder from your homedir, but if your keys are passphrase protected, we need
+# to unlock them.
+$ eval $(ssh-agent); ssh-add
+# Authorize the lagoon cli
+$ task lagoon:cli:config
+
+# List the connection details
+$ lagoon config list
+
Get the details in the angle-brackets from +Current Platform environments:
+$ lagoon config add \
+ --graphql https://<GraphQL endpoint> \
+ --ui https://<Lagoon UI> \
+ --hostname <SSH host> \
+ --ssh-key <SSH Key Path> \
+ --port <SSH port> \
+ --lagoon <Lagoon name>
+
+# Eg.
+$ lagoon config add \
+ --graphql https://api.lagoon.dplplat01.dpl.reload.dk/graphql \
+ --force \
+ --ui https://ui.lagoon.dplplat01.dpl.reload.dk \
+ --hostname 20.238.147.183 \
+ --port 22 \
+ --lagoon dplplat01
+
Then log in:
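A minimal sketch of the login step, using the `lagoon login` command that also appears later in these runbooks:

```sh
# Authenticate the lagoon CLI against the configured instance via ssh.
$ lagoon login
```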
This directory contains our operational runbooks for standard procedures you may need to carry out while maintaining and operating a DPL Platform environment.
Most runbooks have the following layout.
The runbooks should focus on the "How", and avoid explaining any unnecessary details.
When the PR environments are no longer being created, and the `lagoon-core-broker-<n>` pods are missing or not running, and the container logs contain errors like `Error while waiting for Mnesia tables: {timeout_waiting_for_tables`.
This situation is caused by the RabbitMQ broker not starting correctly.
+You are going to exec into the pod and stop the RabbitMQ application, and then
+start it with the force_boot
+feature, so that it can
+perform its Mnesia sync correctly.
Exec into the pod:
+ +Stop RabbitMQ:
+/ $ rabbitmqctl stop_app
+Stopping rabbit application on node rabbit@lagoon-core-broker-0.lagoon-
+core-broker-headless.lagoon-core.svc.cluster.local ...
+
Start it immediately after using the force_boot
flag:
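A sketch of the commands to run inside the pod shell opened above, using RabbitMQ's standard `force_boot` mechanism:

```sh
# Tell RabbitMQ to boot without waiting for the other Mnesia nodes...
/ $ rabbitmqctl force_boot
# ...and then start the application again.
/ $ rabbitmqctl start_app
```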
Then exit the shell and check the container logs for one of the broker pods. It +should start without errors.
+ + + + + + + + + + + + + +When you wish to delete a site and all its data from the platform
+Prerequisites:
+az
) that has administrative access to
+ the cluster running the lagoon installationThe procedure consists of the following steps (numbers does not correspond to +the numbers in the script below).
+Your first step should be to secure any backups you think might be relevant to +archive. Whether this step is necessary depends on the site. Consult the +Retrieve and Restore backups runbook for the +operational steps.
+You are now ready to perform the actual removal of the site.
+# Launch dplsh.
+$ cd infrastructure
+$ dplsh
+
+# You are assumed to be inside dplsh from now on.
+
+# Set an environment,
+# export DPLPLAT_ENV=<platform environment name>
+# eg.
+$ export DPLPLAT_ENV=dplplat01
+
+# Setup access to ssh-keys so that the lagoon cli can authenticate.
+$ eval $(ssh-agent); ssh-add
+
+# Authenticate against lagoon
+$ task lagoon:cli:config
+
+# Delete the project from Lagoon
+# lagoon delete project --project <site machine-name>
+$ lagoon delete project --project core-test1
+
# Authenticate against kubernetes
+$ task cluster:auth
+
+# List the namespaces
+# Identify all the project namespace with the syntax <sitename>-<branchname>
+# eg "core-test1-main" for the main branch for the "core-test1" site.
+$ kubectl get ns
+
+# Delete each site namespace
+# kubectl delete ns <namespace>
+# eg.
+$ kubectl delete ns core-test1-main
+
# 8. Edit sites.yaml, remove the entry for the site
+$ vi environments/${DPLPLAT_ENV}/sites.yaml
+
When you wish to download an automatic backup made by Lagoon, and optionally +restore it into an existing site.
+Step overview:
+While most steps are different for file and database backups, step 1 is close to +identical for the two guides.
+Be aware that the guide instructs you to copy the backups to /tmp
inside the
+cli pod. Depending on the resources available on the node /tmp
may not have
+enough space in which case you may need to modify the cli
deployment to add
+a temporary volume, or place the backup inside the existing /site/default/files
+folder.
To download the backup access the Lagoon UI and schedule the retrieval of a + backup. To do this,
+To restore the database we must first copy the backup into a running cli-pod +for a site, and then import the database-dump on top of the running site.
+dpl-platform/infrastructure
+ folder from which we will launch dplsh# 1. Authenticate against the cluster.
+$ task cluster:auth
+
+# 2. List the namespaces to identify the sites namespace
+# The namespace will be on the form <sitename>-<branchname>
+# eg "bib-rb-main" for the "main" branch for the "bib-rb" site.
+$ kubectl get ns
+
+# 3. Export the name of the namespace as SITE_NS
+# eg.
+$ export SITE_NS=bib-rb-main
+
+# 4. Copy the *mariadb.sql file to the CLI pod in the sites namespace
+# eg.
+kubectl cp \
+ -n $SITE_NS \
+ *mariadb.sql \
+ $(kubectl -n $SITE_NS get pod -l app.kubernetes.io/instance=cli -o jsonpath="{.items[0].metadata.name}"):/tmp/database-backup.sql
+
+# 5. Have drush inside the CLI-pod import the database and clear out the backup
+kubectl exec \
+ -n $SITE_NS \
+ deployment/cli \
+ -- \
+ bash -c " \
+ echo Verifying file \
+ && test -s /tmp/database-backup.sql \
+ || (echo database-backup.sql is missing or empty && exit 1) \
+ && echo Dropping database \
+ && drush sql-drop -y \
+ && echo Importing backup \
+ && drush sqlc < /tmp/database-backup.sql \
+ && echo Clearing cache \
+ && drush cr \
+ && rm /tmp/database-backup.sql
+ "
+
To restore backed up files into a site we must first copy the backup into a +running cli-pod for a site, and then rsync the files on top top of the running +site.
+dpl-platform/infrastructure
folder from which we
+ will launch dplsh# 1. Authenticate against the cluster.
+$ task cluster:auth
+
+# 2. List the namespaces to identify the sites namespace
+# The namespace will be on the form <sitename>-<branchname>
+# eg "bib-rb-main" for the "main" branch for the "bib-rb" site.
+$ kubectl get ns
+
+# 3. Export the name of the namespace as SITE_NS
+# eg.
+$ export SITE_NS=bib-rb-main
+
+# 4. Copy the files tar-ball into the CLI pod in the sites namespace
+# eg.
+kubectl cp \
+ -n $SITE_NS \
+ backup*-nginx-*.tar.gz \
+ $(kubectl -n $SITE_NS get pod -l app.kubernetes.io/instance=cli -o jsonpath="{.items[0].metadata.name}"):/tmp/files-backup.tar.gz
+
+# 5. Replace the current files with the backup.
+# The following
+# - Verifies the backup exists
+# - Removes the existing sites/default/files
+# - Un-tars the backup into its new location
+# - Fixes permissions and clears the cache
+# - Removes the backup archive
+#
+# These steps can also be performed one by one if you want to.
+kubectl exec \
+ -n $SITE_NS \
+ deployment/cli \
+ -- \
+ bash -c " \
+ echo Verifying file \
+ && test -s /tmp/files-backup.tar.gz \
+ || (echo files-backup.tar.gz is missing or empty && exit 1) \
+ && tar ztf /tmp/files-backup.tar.gz data/nginx &> /dev/null \
+ || (echo could not verify the tar.gz file files-backup.tar && exit 1) \
+ && test -d /app/web/sites/default/files \
+ || (echo Could not find destination /app/web/sites/default/files \
+ && exit 1) \
+ && echo Removing existing sites/default/files \
+ && rm -fr /app/web/sites/default/files \
+ && echo Unpacking backup \
+ && mkdir -p /app/web/sites/default/files \
+ && tar --strip 2 --gzip --extract --file /tmp/files-backup.tar.gz \
+ --directory /app/web/sites/default/files data/nginx \
+ && echo Fixing permissions \
+ && chmod -R 777 /app/web/sites/default/files \
+ && echo Clearing cache \
+ && drush cr \
+ && echo Deleting backup archive \
+ && rm /tmp/files-backup.tar.gz
+ "
+
+# NOTE: In some situations some files in /app/web/sites/default/files might
+# be locked by running processes. In that situations delete all the files you
+# can from /app/web/sites/default/files manually, and then repeat the step
+# above skipping the removal of /app/web/sites/default/files
+
When you want to inspects the logs produced by as specific site
+admin
can
+ password can be fetched from the cluster if it has not been changed viakubectl get secret \
+ --namespace grafana \
+ -o jsonpath="{.data.admin-password}" \
+ grafana \
+| base64 -d
+
Consult the access-kubernetes Run book for instructions +on how to access the cluster.
+<sitename>-<branchname>
.{app="nginx-php-persistent",container="nginx",namespace="rdb-main"}
{namespace="rdb-main"}
When you need to run a Lagoon task
+You need access to the Lagoon UI
+When the cluster is over or underprovisioned and needs to be scaled.
+./infrastructure
with
+ DPLPLAT_ENV
set to the platform environment name.There are multiple approaches to scaling AKS. We run with the auto-scaler enabled +which means that in most cases the thing you want to do is to adjust the max or +minimum configuration for the autoscaler.
+ +Edit the infrastructure configuration for your environment. Eg for dplplat01
+edit infrastructure/environments/dplplat01/infrastructure/main.tf
.
Adjust the ..._count_min
/ ..._count_min
corresponding to the node-pool you
+want to grow/shrink.
Then run infra:provision
to have terraform effect the change.
When you wish to set an environment variable on a site. The variable can be +available for either all sites in the project, or for a specific site in the +project.
+The variables are safe for holding secrets, and as such can be used both for +"normal" configuration values, and secrets such as api-keys.
+The variabel will be available to all containers in the environment can can be
+picked up and parsed eg. in Drupals settings.php
.
DPLPLAT_ENV
set to the platform
+ environment name and ssh-agent running if your ssh-keys are passphrase
+ protected.# From within a dplsh session authorized to use your ssh keys:
+
+# 1. Authenticate against the cluster and lagoon
+$ task cluster:auth
+$ task lagoon:cli:config
+
+# 2. Refresh your Lagoon token.
+$ lagoon login
+
+# 3a. For project-level variables, use the following command, which creates
+# the variable if it does not yet exist, or updates it otherwise:
+$ task lagoon:ensure:environment-variable \
+ PROJECT_NAME=<project name> \
+ VARIABLE_SCOPE=RUNTIME \
+ VARIABLE_NAME=<your variable name> \
+ VARIABLE_VALUE=<your variable value>
+
+# 3b. Or, to similarly ensure the value of an environment-level variable:
+$ task lagoon:ensure:environment-variable \
+ PROJECT_NAME=<project name> \
+ ENVIRONMENT_NAME=<environment name> \
+ VARIABLE_SCOPE=RUNTIME \
+ VARIABLE_NAME=<your variable name> \
+ VARIABLE_VALUE=<your variable value>
+
+# If you get a "Invalid Auth Token" your token has probably expired, generated a
+# new with "lagoon login" and try again.
+
If you want to synchronize state from one environment into another environment.
+For example, you may want to synchronize state to a PR environment to run your +code in a more realistic setup.
+Or you may want to synchronize a main (production) environment state to a +moduletest environment, if requested by the customer.
+If you have access to the dpl-platform setup and can run task in the taskfile +(for platform engineers, not developers of the CMS) you may want to synchronize +site state using the related task (runbook WIP).
+main
to pr-775
you should select pr-775
.Now you are at the tasks UI and can execute tasks for this environment. It +should look something like this:
+ +Now we need to execute 3 tasks to synchronize the whole state and make it +available on visits to the target site:
+ +Run task "Copy database between environments [drush sql-sync]":
+main
to
+ pr-775
select main
in the dropdown.Run task "Copy files between environments [drush rsync]":
+> rsync: [receiver] failed to set times on ...
. As long as these are the
+ only errors in the output, the synchronization succeeded.Run task "Clear Drupal caches [drupal cache-clear]" to clear the caches:
+main
to pr-775
, the pr-775
+ environment will now have the same state as main
.When you need to update the support workload version sheet.
+Run dplsh to extract the current and latest version for all support workloads
+# First authenticate against the cluster
+task cluster:auth
+# Then pull the status
+task ops:get-versions
+
Then access the version status sheet +and update the status from the output.
+ + + + + + + + + + + + + +When you want to upgrade Azure Kubernetes Service to a newer version.
+./infrastructure
with
+ DPLPLAT_ENV
set to the platform environment name.We use Terraform to upgrade AKS. Should you need to do a manual upgrade consult +Azures documentation on upgrading a cluster +and on upgrading node pools. +Be aware in both cases that the Terraform state needs to be brought into sync +via some means, so this is not a recommended approach.
+In order to find out which versions of kubernetes we can upgrade to, we need to +use the following command:
+ +This will output a table of in which the column "Upgrades" lists the available +upgrades for the highest available minor versions.
+A Kubernetes cluster can can at most be upgraded to the nearest minor version, +which means you may be in a situation where you have several versions between +you and the intended version.
+Minor versions can be skipped, and AKS will accept a cluster being upgraded to
+a version that does not specify a patch version. So if you for instance want
+to go from 1.20.9
to 1.22.15
, you can do 1.21
, and then 1.22.15
. When
+upgrading to 1.21
Azure will substitute the version for an the hightest available
+patch version, e.g. 1.21.14
.
You should know know which version(s) you need to upgrade to, and can continue to +the actual upgrade.
+As we will be using Terraform to perform the upgrade we want to make sure it its +state is in sync. Execute the following task and resolve any drift:
+ +Initiate a cluster upgrade. This will upgrade the control plane and node pools +together. See the AKS documentation +for background info on this operation.
+Update the control_plane_version
reference in infrastructure/environments/<environment>/infrastructure/main.tf
+ and run task infra:provision
to apply. You can skip patch-versions, but you
+ can only do one minor-version at the time
Monitor the upgrade as it progresses. The control-plane upgrade is usually
+ performed in under 5 minutes. Monitor via eg. watch -n 5 kubectl version
.
AKS will then automatically upgrade the system, admin and application + node-pools.
+Monitor the upgrade as it progresses. Expect the provisioning of and workload + scheduling to a single node to take about 5-10 minutes. In particular be + aware that the admin node-pool where harbor runs has a tendency to take a + long time as the harbor pvcs are slow to migrate to the new node.
+Monitor via eg.
+ +Go to dplsh's
Dockerfile and update the KUBECTL_VERSION
version to
+ match that of the upgraded AKS version
When there is a need to upgrade Lagoon to a new patch or minor version.
+./infrastructure
with
+ DPLPLAT_ENV
set to the platform environment name.Lagoon-core:
+curl -s https://uselagoon.github.io/lagoon-charts/index.yaml \
+ | yq '.entries.lagoon-core[] | [.name, .appVersion, .version, .created] | @tsv'
+
Lagoon-remote:
+curl -s https://uselagoon.github.io/lagoon-charts/index.yaml \
+ | yq '.entries.lagoon-remote[] | [.name, .appVersion, .version, .created] | @tsv'
+
VERSION_LAGOON_CORE
in
+ infrastructure/environments/<env>/lagoon/lagoon-versions.env
DIFF=1 task lagoon:provision:core
task lagoon:provision:core
VERSION_LAGOON_REMOTE
in
+ infrastructure/environments/<env>/lagoon/lagoon-versions.env
DIFF=1 task lagoon:provision:remote
task lagoon:provision:remote
When you want to upgrade support workloads in the cluster. This includes.
+This document contains general instructions for how to upgrade support workloads, +followed by specific instructions for each workload (linked above).
+dplsh
instance authorized against the cluster. See Using the DPL Shell.Identify the version you want to bump in the environment/configuration
directory
+ eg. for dplplat01 infrastructure/environments/dplplat01/configuration/versions.env.
+ The file contains links to the relevant Artifact Hub pages for the individual
+ projects and can often be used to determine both the latest version, but also
+ details about the chart such as how a specific manifest is used.
+ You can find the latest version of the support workload in the
+ Version status
+ sheet which itself is updated via the procedure described in the
+ Update Upgrade status runbook.
Consult any relevant changelog to determine if the upgrade will require any
+ extra work beside the upgrade itself.
+ To determine which version to look up in the changelog, be aware of the difference
+ between the chart version
+ and the app version.
+ We currently track the chart versions, and not the actual version of the
+ application inside the chart. In order to determine the change in appVersion
+ between chart releases you can do a diff between releases, and keep track of the
+ appVersion
property in the charts Chart.yaml
. Using using grafana as an example:
+ https://github.com/grafana/helm-charts/compare/grafana-6.55.1...grafana-6.56.0.
+ The exact way to do this differs from chart to chart, and is documented in the
+ Specific producedures and tests below.
Carry out any chart-specific preparations described in the charts update-procedure. + This could be upgrading a Custom Resource Definition that the chart does not + upgrade.
+Identify the relevant task in the main Taskfile
+ for upgrading the workload. For example, for cert-manager, the task is called
+ support:provision:cert-manager
and run the task with DIFF=1
, eg
+ DIFF=1 task support:provision:cert-manager
.
If the diff looks good, run the task without DIFF=1
, eg task support:provision:cert-manager
.
Then proceeded to perform the verification test for the relevant workload. See + the following section for known verification tests.
+Finally, it is important to verify that Lagoon deployments still work. Some + breaking changes will not manifest themselves until an environment is + rebuilt, at which point it may subsequently fail. An example is the + disabling of user snippets in the ingress-nginx controller + v1.9.0. To verify + deployments still work, log in to the Lagoon UI and select an environment to + redeploy.
+The project project versions its Helm chart together with the app itself. So, +simply use the chart version in the following checks.
+Cert Manager keeps Release notes +for the individual minor releases of the project. Consult these for every +upgrade past a minor version.
+As both are versioned in the same repository, simply use the following link +for looking up the release notes for a specific patch release, replacing the +example tag with the version you wish to upgrade to.
+https://github.com/cert-manager/cert-manager/releases/tag/v1.11.2
+To compare two reversions, do the same using the following link:
+https://github.com/cert-manager/cert-manager/compare/v1.11.1...v1.11.2
+Commands
+# Diff
+DIFF=1 task support:provision:cert-manager
+
+# Upgrade
+task support:provision:cert-manager
+
Verify that cert-manager itself and webhook pods are all running and healthy.
+ +Insert the chart version in the following link to see the release note.
+https://github.com/grafana/helm-charts/releases/tag/grafana-6.52.9
+The note will most likely be empty. Now diff the chart version with the current +version, again replacing the version with the relevant for your releases.
+https://github.com/grafana/helm-charts/compare/grafana-6.43.3...grafana-6.52.9
+As the repository contains a lot of charts, you will need to do a bit of
+digging. Look for at least charts/grafana/Chart.yaml
which can tell you the
+app version.
With the app-version in hand, you can now look at the release notes for the +grafana app +itself.
+Diff command
+ +Upgrade command
+ +Verify that the Grafana pods are all running and healthy.
+ +Access the Grafana UI and see if you can log in. If you do not have a user
+but have access to read secrets in the grafana
namespace, you can retrive the
+admin password with the following command:
# Password for admin
+UI_NAME=grafana task ui-password
+
+# Hostname for grafana
+kubectl -n grafana get -o jsonpath="{.spec.rules[0].host}" ingress grafana ; echo
+
Harbor has different app and chart versions.
+An overview of the chart versions can be retrived +from Github. +the chart does not have a changelog.
+Link for comparing two chart releases: +https://github.com/goharbor/harbor-helm/compare/v1.10.1...v1.12.0
+Having identified the relevant appVersions, consult the list of +Harbor releases to see a description +of the changes included in the release in question. If this approach fails you +can also use the diff-command described below to determine which image-tags are +going to change and thus determine the version delta.
+Harbor is a quite active project, so it may make sense mostly to pay attention +to minor/major releases and ignore the changes made in patch-releases.
+Harbor documents the general upgrade procedure for non-kubernetes upgrades for minor +versions on their website. +This documentation is of little use to our Kubernetes setup, but it can be useful +to consult the page for minor/major version upgrades to see if there are any +special considerations to be made.
+The Harbor chart repository has upgrade instructions as well. +The instructions asks you to do a snapshot of the database and backup the tls +secret. Snapshotting the database is currently out of scope, but could be a thing +that is considered in the future. The tls secret is handled by cert-manager, and +as such does not need to be backed up.
+With knowledge of the app version, you can now update versions.env
as described
+in the General Procedure section, diff to see the changes
+that are going to be applied, and finally do the actual upgrade.
Diff command
+ +Upgrade command
+ +First verify that pods are coming up
+ +When Harbor seems to be working, you can verify that the UI is working by
+accessing https://harbor.lagoon.dplplat01.dpl.reload.dk/. The password for
+the user admin
can be retrived with the following command:
If everything looks good, you can consider to deploying a site. One way to do this +is to identify an existing site of low importance, and re-deploy it. A re-deploy +will require Lagoon to both fetch and push images. Instructions for how to access +the lagoon UI is out of scope of this document, but can be found in the runbook +for running a lagoon task. In this case you are looking +for the "Deploy" button on the sites "Deployments" tab.
+When working with the ingress-nginx
chart we have at least 3 versions to keep
+track off.
The chart version tracks the version of the chart itself. The charts appVersion
+tracks a controller
application which dynamically configures a bundles nginx
.
+The version of nginx
used is determined configuration-files in the controller.
+Amongst others the
+ingress-nginx.yaml.
Link for diffing two chart versions: +https://github.com/kubernetes/ingress-nginx/compare/helm-chart-4.6.0...helm-chart-4.6.1
+The project keeps a quite good changelog for the chart
+Link for diffing two controller versions: +https://github.com/kubernetes/ingress-nginx/compare/controller-v1.7.1...controller-v1.7.0
+Consult the individual GitHub releases +for descriptions of what has changed in the controller for a given release.
+With knowledge of the app version, you can now update versions.env
as described
+in the General Procedure section, diff to see the changes
+that are going to be applied, and finally do the actual upgrade.
Diff command
+ +Upgrade command
+ +The ingress-controller is very central to the operation of all public accessible +parts of the platform. It's area of resposibillity is on the other hand quite +narrow, so it is easy to verify that it is working as expected.
+First verify that pods are coming up
+ +Then verify that the ingress-controller is able to serve traffic. This can be +done by accessing the UI of one of the apps that are deployed in the platform.
+Access eg. https://ui.lagoon.dplplat01.dpl.reload.dk/.
+We can currently not upgrade to version 2.x of K8up as Lagoon +is not yet ready
+The Loki chart is versioned separatly from Loki. The version of Loki installed
+by the chart is tracked by its appVersion
. So when upgrading, you should always
+look at the diff between both the chart and app version.
The general upgrade procedure will give you the chart version, access the +following link to get the release note for the chart. Remember to insert your +version:
+https://github.com/grafana/loki/releases/tag/helm-loki-5.5.1
+Notice that the Loki helm-chart is maintained in the same repository as Loki +itself. You can find the diff between the chart versions by comparing two +chart release tags.
+https://github.com/grafana/loki/compare/helm-loki-5.5.0...helm-loki-5.5.1
+As the repository contains changes to Loki itself as well, you should seek out
+the file production/helm/loki/Chart.yaml
which contains the appVersion
that
+defines which version of Loki a given chart release installes.
Direct link to the file for a specific tag: +https://github.com/grafana/loki/blob/helm-loki-3.3.1/production/helm/loki/Chart.yaml
+With the app-version in hand, you can now look at the +release notes for Loki +to see what has changed between the two appVersions.
+Last but not least the Loki project maintains a upgrading guide that can be +found here: https://grafana.com/docs/loki/latest/upgrading/
+Diff command
+ +Upgrade command
+ +List pods in the loki
namespace to see if the upgrade has completed
+successfully.
Next verify that Loki is still accessibel from Grafana and collects logs by +logging in to Grafana. Then verify the Loki datasource, and search out some +logs for a site. See the validation steps for Grafana +for instructions on how to access the Grafana UI.
+We can currently not upgrade MinIO without loosing the Azure blob gateway. +see:
+The kube-prometheus-stack
helm chart is quite well maintained and is versioned
+and developed separately from the application itself.
A specific release of the chart can be accessed via the following link:
+https://github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-45.27.2
+The chart is developed alongside a number of other community driven prometheus- +related charts in https://github.com/prometheus-community/helm-charts.
+This means that the following comparison between two releases of the chart
+will also contain changes to a number of other charts. You will have to look
+for changes in the charts/kube-prometheus-stack/
directory.
The Readme for the chart contains a good Upgrading Chart +section that describes things to be aware of when upgrading between specific +minor and major versions. The same documentation can also be found on +artifact hub.
+Consult the section that matches the version you are upgrading from and to. Be +aware that upgrades past a minor version often requires a CRD update. The +CRDs may have to be applied before you can do the diff and upgrade. Once the +CRDs has been applied you are committed to the upgrade as there is no simple +way to downgrade the CRDs.
+Diff command
+ +Upgrade command
+ +List pods in the prometheus
namespace to see if the upgrade has completed
+successfully. You should expect to see two types of workloads. First a single
+a single promstack-kube-prometheus-operator
pod that runs Prometheus, and then
+a promstack-prometheus-node-exporter
pod for each node in the cluster.
As the Prometheus UI is not directly exposed, the easiest way to verify that +Prometheus is running is to access the Grafana UI and verify that the dashboards +that uses Prometheus are working, or as a minimum that the prometheus datasource +passes validation. See the validation steps for Grafana +for instructions on how to access the Grafana UI.
+The Promtail chart is versioned separatly from Promtail which itself is a part of +Loki. The version of Promtail installed by the chart is tracked by its appVersion. +So when upgrading, you should always look at the diff between both the chart and +app version.
+The general upgrade procedure will give you the chart version, access the +following link to get the release note for the chart. Remember to insert your +version:
+https://github.com/grafana/helm-charts/releases/tag/promtail-6.6.0
+The note will most likely be empty. Now diff the chart version with the current +version, again replacing the version with the relevant for your releases.
+https://github.com/grafana/helm-charts/compare/promtail-6.6.0...promtail-6.6.1
+As the repository contains a lot of charts, you will need to do a bit of
+digging. Look for at least charts/promtail/Chart.yaml
which can tell you the
+app version.
With the app-version in hand, you can now look at the +release notes for Loki +(which promtail is part of). Look for notes in the Promtail sections of the +release notes.
+Diff command
+ +Upgrade command
+ +List pods in the promtail
namespace to see if the upgrade has completed
+successfully.
With the pods running, you can verify that the logs are being collected seeking +out logs via Grafana. See the validation steps for Grafana +for details on how to access the Grafana UI.
+You can also inspect the logs of the individual pods via
+ +And verify that there are no obvious error messages.
+ + + + + + + + + + + + + +The Danish Public Libraries Shell (dplsh) is a container-based shell used by +platform-operators for all cli operations.
+Whenever you perform any administrative actions on the platform. If you want +to know more about the shell itself? Refer to tools/dplsh.
+az
cli. The version should match the version found in
+ FROM mcr.microsoft.com/azure-cli:version
in the dplsh Dockerfile
+ You can choose to authorize the az cli from within dplsh, but your session
+ will only last as long as the shell-session. The use you authorize as must
+ have permission to read the Terraform state from the Terraform setup
+ , and Contributor permissions on the environments
+ resource-group in order to provision infrastructure.dplsh.sh
symlinked into your path as dplsh
, see Launching the Shell
+ (optional, but assumed below)# Launch dplsh.
+$ cd infrastructure
+$ dplsh
+
+# 1. Set an environment,
+# export DPLPLAT_ENV=<platform environment name>
+# eg.
+$ export DPLPLAT_ENV=dplplat01
+
+# 2a. Authenticate against AKS, needed by infrastructure and Lagoon tasks
+$ task cluster:auth
+
+# 2b - if you want to use the Lagoon CLI)
+# The Lagoon CLI is authenticated via ssh-keys. DPLSH will mount your .ssh
+# folder from your homedir, but if your keys are passphrase protected, we need
+# to unlock them.
+$ eval $(ssh-agent); ssh-add
+# Then authorize the lagoon cli
+$ task lagoon:cli:config
+
We are not able to persist and execute a users intentions across page loads. +This is expressed through a number of issues. The main agitator is maintaining +intent whenever a user tries to do anything that requires them to be +authenticated. In these situations they get redirected off the page and after a +successful login they get redirected back to the origin page but without the +intended action fulfilled.
+One example is the AddToChecklist
functionality. Whenever a user wants to add
+a material to their checklist they click the "Tilføj til huskelist" button next
+to the material presentation.
+They then get redirected to Adgangsplatformen.
+After a successful login they get redirected back to the material page but the
+material has not been added to their checklist.
After an intent has been stated we want the intention to be executed even though +a page reload comes in the way.
+We move to implementing what we define as an explicit intention before the +actual action is tried for executing.
+The difference between the two might seem superfluous but the important +distinction to make is that with our current implementation we are not able to +serialize and persist the actions as the application state across page loads. By +defining intent explicitly we are able to serialize it and persist it between +page loads.
+This resolves in the implementation being able to rehydrate the persisted state, +look at the persisted intentions and have the individual application +implementations decide what to do with the intention.
+A mock implementation of the case by case business logic looks as follows.
+const initialStore = {
+ authenticated: false,
+ intent: {
+ status: '',
+ payload: {}
+ }
+}
+
+const fulfillAction = store.authenticated &&
+ (store.intent.status === 'pending' || store.intent.status === 'tried')
+const getRequirements = !store.authenticated && store.intent.status === 'pending'
+const abandonIntention = !store.authenticated && store.intent.status === 'tried'
+
+function AddToChecklist ({ materialId, store }) {
+ useEffect(() => {
+ if (fulfillAction) {
+ // We fire the actual functionality required to add a material to the
+ // checklist and we remove the intention as a result of it being
+ // fulfilled.
+ addToChecklistAction(store, materialId)
+ } else if (getRequirements) {
+ // Before we redirect we set the status to be "tried".
+ redirectToLogin(store)
+ } else if (abandonIntention) {
+ // We abandon the intent so that we won't have an infinite loop of retries
+ // at every page load.
+ abandonAddToChecklistIntention(store)
+ }
+ }, [materialId, store.intent.status])
+ return (
+ <button
+ onClick={() => {
+ // We do not fire the actual logic that is required to add a material to
+ // the checklist. Instead we add the intention of said action to the
+ // store. This is when we would set the status of the intent to pending
+ // and provide the payload.
+ addToChecklistIntention(store, materialId)
+ }}
+ >
+ Tilføj til huskeliste
+ </button>
+ )
+}
+
We utilize session storage to persist the state on the client due to it's short +lived nature and porous features.
+We choose Redux as the framework to implemenent this. Redux is a blessed choice
+in this instance. It has widespread use, an approachable design and is
+well-documented. The best way to go about a current Redux implementation as of
+now is @reduxjs/toolkit
. Redux is a
+sufficiently advanced framework to support other uses of application state and
+even co-locating shared state between applications.
For our persistence concerns we want to use the most commonly used tool for
+that, redux-persist
. There are some
+implementation details
+to take into consideration when integrating the two.
We could persist the intentions in the URL that is delivered back to the client +after a page reload. This would still imply some of the architectural decisions +described in Decision in regards to having an "intent" state, but some of the +different status flags etc. would not be needed since state is virtually shared +across page loads in the url. However this simpler solution cannot handle more +complex situations than what can be described in the URL feasibly.
+React offers useContext()
+for state management as an alternative to Redux.
We prefer Redux as it provides a more complete environment when working with
+state management. There is already a community of established practices and
+libraries which integrate with Redux. One example of this is our need to persist
+actions. When using Redux we can handle this with redux-persist
. With
+useContext()
we would have to roll our own implementation.
Some of the disadvantages of using Redux e.g. the amount of required boilerplate
+code are addressed by using @reduxjs/toolkit
.
Accepted
+It has been decided that app context/settings should be passed from the +server side rendered mount points via data props. +One type of settings is text strings that is defined by +the system/host rendering the mount points. +Since we are going to have quite some levels of nested components +it would be nice to have a way to extract the string +without having to define them all the way down the tree.
+A solution has been made that extracts the props holding the strings
+and puts them in the Redux store under the index: text
at the app entry level.
+That is done with the help of the withText()
High Order Component.
+The solution of having the strings in redux
+enables us to fetch the strings at any point in the tree.
+A hook called: useText()
makes it simple to request a certain string
+inside a given component.
One major alternative would be not doing it and pass down the props. +But that leaves us with text props all the way down the tree +which we would like to avoid. +Some translation libraries has been investigated +but that would in most cases give us a lot of tools and complexity +that is not needed in order to solve the relatively simple task.
+Since we omit the text props down the tree +it leaves us with fewer props and a cleaner component setup. +Although some "magic" has been introduced +with text prop matching and storage in redux, +it is outweighed by the simplicity of the HOC wrapper and useText hook.
+ + + + + + + + + + + + + +As a part of the project, we need to implement a dropdown autosuggest component. +This component needs to be complient with modern website accessibility rules:
+Apart from these accessibility features, the component needs to follow a somewhat +complex design described in +this Figma file. +As visible in the design this autosuggest dropdown doesn't consist only of single +line text items, but also contains suggestions for specific works - utilizing +more complex suggestion items with cover pictures, and release years.
+Our research on the most popular and supported javascript libraries heavily leans
+on this specific article.
+In combination with our needs described above in the context section
, but also
+considering what it would mean to build this component from scratch without any
+libraries, the decision taken favored a library called
+Downshift
.
This library is the second most popular JS library used to handle autsuggest +dropdowns, multiselects, and select dropdowns with a large following and +continuous support. Out of the box, it follows the +ARIA principles, and +handles problems that we would normally have to solve ourselves (e.g. opening and +closing of the dropdown/which item is currently in focus/etc.).
+Another reason why we choose Downshift
over its peer libraries is the amount of
+flexibility that it provides. In our eyes, this is a strong quality of the library
+that allows us to also implement more complex suggestion dropdown items.
In this case, we would have to handle accessibility and state management of the +component with our own custom solutition.
+Accepted.
+Staying informed about how the size of the JavaScript we require browsers to +download to use the project plays an important part in ensuring a performant +solution.
+We currently have no awareness of this in this project and the result surfaces +down the line when the project is integrated with the CMS, which is +tested with Lighthouse.
+To address this we want a solution that will help us monitor the changes to the +size of the bundle we ship for each PR.
+We add integration to RelativeCI to the project. +RelativeCI supports our primary use case and has a number of qualities which we +value:
+Bundlewatch and its ancestor, bundlesize +combine a CLI tool and a web app to provide bundle analysis and feedback on +GitHub as status checks.
+These solutions no longer seem to be actively maintained. There are several
+bugs that would affect
+us and fixes remain unmerged. The project relies on a custom secret instead of
+GITHUB_TOKEN
. This makes supporting our fork-based development workflow
+harder.
This is a GitHub Action which can be used in configurations where statistics
+for two bundles are compared e.g. for the base and head of a pull request. This
+results in a table of changes displayed as a comment in the pull request.
+This is managed using GITHUB_TOKEN.
Accepted.
+The decision of obtaining react-use
as a part of the project originated
+from the problem that arose from having an useEffect hook with
+an object as a dependency.
useEffect
does not support comparison of objects or arrays and we needed
+a method for comparing such natives.
We decided to go for the react-use package +react-use. +The reason is threefold:
+useDeepCompareEffect
We could have used our own implementation of the problem.
+But since it is a common problem we might as well use a community backed solution.
+And react-use
gives us a wealth of other tools.
We can now use useDeepCompareEffect
instead of useEffect
+in cases where we have arrays or objects amomg the dependencies.
+And we can make use of all the other utility hooks that the package provides.
The code base is growing and so does the number of functions and custom hooks.
+While we have a good coverage in our UI tests from Cypress we are lacking +something to tests the inner workings of the applications.
+With unit tests added we can test bits of functionality that is shared +between different areas of the application and make sure that we get the +expected output giving different variations of input.
+We decided to go for Vitest which is an easy to use and +very fast unit testing tool.
+It has more or less the same capabilities as Jest +which is another popular testing framework which is similar.
+Vitest is framework agnostic so in order to make it possible to test hooks
+we found @testing-library/react-hooks
+that works in conjunction with Vitest.
We could have used Jest. But trying that we experienced major problems +with having both Jest and Cypress in the same codebase. +They have colliding test function names and Typescript could not figure it out.
+There is probably a solution but at the same time we got Vitest recommended. +It seemed very fast and just as capable as Jest. And we did not have the +colliding issues of shared function/object names.
+We now have unit test as a part of the mix which brings more stability +and certainty that the individual pieces of the application work.
+ + + + + + + + + + + + + +This project provides a set of React applications which can be placed in a +mounting application to provide self service features for Danish public +libraries. To support variations in the appearance and behavior of these +applications, the applications support configuration. +In practice these are provided through data attributes on the root element of +the application set by the mounting application.
+A configuration value can use one of three different types:
+Also, a configuration value may be required or optional for the application to +function.
+Our use of TypeScript should match our handling of configuration.
+Practice has shown that string values are relatively easy to handle but JSON +values are not. This leads to errors which can be hard to debug. Consequently, +we need to clarify our handling of configuration and especially for JSON values.
+We will use the following rules for handling configuration:
+enabled
boolean property. If no
+ configuration is provided { enabled: false }
is an acceptable value.We could make all configuration optional and introduce suitable handling if no +value is provided. This could introduce default values, errors or workarounds.
+This would make the React application code more complex. We would rather push +this responsibility to the mounting application.
+This approach provides a first step for improving our handling of +configuration. Potential improvements going forward are:
+We needed to handle errors thrown in the apps, both in requests by fetchers but +also exceptions thrown at runtime.
+Errors were already handled in the initial implementation of this project. +An Error Boundary +was already implemented but we were lacking two important features:
+To solve the problem with each app showing its own error, we decided to make use +of React's Portal system. +The host (most likely dpl-cms) that includes the apps tells the apps via a +config data prop what the container id of error wrapper is. Then the Error +boundary system makes sure to use that id when rendering the error.
+Each app is wrapped with an Error Boundary. In the initial implementation
+that meant that if any exception was thrown the Error Boundary would catch
+any exception and showing an error message to the end user.
+Furthermore the error boundary makes sure the errors are being logged to error.log
.
Exceptions can be thrown in the apps at runtime both as a result
+of a failing request to a service or on our side.
+The change made to the error system in this context was to distinguish
+between the request errors.
+Some data for some services are being considered to be essential for the apps to
+work, others are not.
+To make sure that not all fetching errors are being caught we have created a
+queryErrorHandler
in src/components/store.jsx
. The queryErrorHandler
looks
+at the type of error/instance of error that is being thrown
+and decides if the Error Boundary should be used or not.
+At the moment of this writing there are two type of errors: critical and non-critical.
+The critical ones are being caught by the Error Boundary and the non-critical
+are only ending up in the error log and are not blocking the app.
By using the Portal system we have solved the problem about showing multiple +errors instead of a single global one.
+By choosing to distinguish different error types by looking at their instance name +we can decide which fetch errors should be blocking the app and which should not. +In both cases the errors are being logged and we can trace them in our logs.
+None.
+ + + + + + + + + + + + + +We document decisions regarding the architecture of the project using ADRs.
+We follow the format suggested by Michael Nygaard +containing the sections: Title, Context, Decision and Consequences.
+We do not use the Status section. If an ADR is merged into the main branch +it is by default accepted.
+We have added an Alternatives considered section to ensure we document +alternative solutions and the pros and cons for these.
+ + + + + + + + + + + + + +Campaigns are elements that are shown on the search result page above the search +result list. There are three types of campaigns:
+However, they are only shown in case certain criteria are met. We check for this +by contacting the dpl-cms API.
+Dpl-cms is a cms system based on Drupal, where the system administrators can set +up campaigns they want to show to their users. Drupal also allows the cms system +to act as an API endpoint that we then can contact from our apps.
+The cms administrators can specify the content (image, text) and the visibility +criteria for each campaign they create. The visibility criteria is based on +search filter facets. +Does that sound familiar? Yes, we use another API to get that very data +in THIS project - in the search result app. +The facets differ based on the search string the user uses for their search.
+As an example, the dpl-cms admin could wish to show a Harry Potter related +campaign to all the users whose search string retreives search facets which +have "Harry Potter" as one of the most relevant subjects. +Campaigns in dpl-cms can use triggers such as subject, main language, etc.
+An example code snippet for retreiving a campaign from our react apps would then +look something like this:
+ // Creating a state to store the campaign data in.
+ const [campaignData, setCampaignData] = useState<CampaignMatchPOST200 | null>(
+ null
+ );
+
+ // Retreiving facets for the campaign based on the search query and existing
+ // filters.
+ const { facets: campaignFacets } = useGetFacets(q, filters);
+
+ // Using the campaign hook generated by Orval from src/core/dpl-cms/dpl-cms.ts
+ // in order to get the mutate function that lets us retreive campaigns.
+ const { mutate } = useCampaignMatchPOST();
+
+ // Only fire the campaign data call if campaign facets or the mutate function
+ // change their value.
+ useDeepCompareEffect(() => {
+ if (campaignFacets) {
+ mutate(
+ {
+ data: campaignFacets as CampaignMatchPOSTBodyItem[],
+ params: {
+ _format: "json"
+ }
+ },
+ {
+ onSuccess: (campaign) => {
+ setCampaignData(campaign);
+ },
+ onError: () => {
+ // Handle error.
+ }
+ }
+ );
+ }
+ }, [campaignFacets, mutate]);
+
You first need to make sure to have a campaign set up in your locally running +dpl-cms ( +run this repo locally) +Then, in order to see campaigns locally in dpl-react in development mode, you +will most likely need a browser plugin such as Google Chrome's +"Allow CORS: Access-Control-Allow-Origin" +in order to bypass CORS policy for API data calls.
+ + + + + + + + + + + + + +The following guidelines describe best practices for developing code for React +components for the Danish Public Libraries CMS project. The guidelines should +help achieve:
+Contributions to the DPL React project will be reviewed by members of the Core +team. These guidelines should inform contributors about what to expect in such a +review. If a review comment cannot be traced back to one of these guidelines it +indicates that the guidelines should be updated to ensure transparency.
+The project follows the Airbnb JavaScript Style Guide +and Airbnb React/JSX Style Guide. They +have been extended to support TypeScript.
+This choice is based on multiple factors:
+The following lists significant areas where the project either intentionally +expands or deviates from the official standards or areas which developers should +be especially aware of.
+AirBnB's only guideline towards this is that +anonymous arrow function nation is preferred over the normal anonymous function +notation.
+This project sticks to the above guideline as well. If we need to pass a +function as part of a callback or in a promise chain and we on top of that need +to pass some contextual variables that are not passed implicitly from either the +callback or the previous link in the promise chain we want to make use of an +anonymous arrow function as our default.
+This comes with the build in disclaimer that if an anonymous function isn't +required the implementer should heavily consider moving the logic out into its +own named function expression.
+The named function is primarily desired due to it's easier to debug nature in +stacktraces.
+Files provided by components must be placed in the following folders and have +the extensions defined here.
+The project uses Yarn as a package +manager to handle code which is developed outside the project repository. Such +code must not be committed to the Core project repository.
+When specifying third party package versions the project follows these +guidelines:
+Components must reuse existing dependencies in the project before adding new +ones which provide similar functionality. This ensures consistency and avoids +unnecessary increases in the package size of the project.
+The reasoning behind the choice of key dependencies have been documented in +the architecture directory.
+The project uses patches rather than forks to modify third party packages. This +makes maintenance of modified packages easier and avoids a collection of forked +repositories within the project.
+Code comments which describe what an implementation does should only be used +for complex implementations usually consisting of multiple loops, conditional +statements etc.
+Inline code comments should focus on why an unusual implementation has been +implemented the way it is. This may include references to such things as +business requirements, odd system behavior or browser inconsistencies.
+Commit messages in the version control system help all developers understand the +current state of the code base, how it has evolved and the context of each +change. This is especially important for a project which is expected to have a +long lifetime.
+Commit messages must follow these guidelines:
+When creating a pull request the pull request description should not contain any +information that is not already available in the commit messages.
+Developers are encouraged to read How to Write a Git Commit Message by Chris Beams.
+The project aims to automate compliance checks as much as possible using static +code analysis tools. This should make it easier for developers to check +contributions before submitting them for review and thus make the review process +easier.
+The following tools pay a key part here:
+In general all tools must be able to run locally. This allows developers to get +quick feedback on their work.
+Tools which provide automated fixes are preferred. This reduces the burden of +keeping code compliant for developers.
+Code which is to be exempt from these standards must be marked accordingly in +the codebase - usually through inline comments (Eslint, +Stylelint). This must also +include a human readable reasoning. This ensures that deviations do not affect +future analysis and the project should always pass through static analysis.
+If there are discrepancies between the automated checks and the standards +defined here then developers are encouraged to point this out so the automated +checks or these standards can be updated accordingly.
+The frontend tests are executed in +Cypress.
+The test files are placed alongside the application components +and are named following pattern: "*.test.ts". Eg.: material.test.ts.
+After quite a lot of bad experiences with unstable tests +and reading both the official documentation +and articles about the best practices we have ended up with a recommendation of +how to write the tests.
+According to this article +it is important to distinguish between commands and assertions. +Commands are used in the beginning of a statement and yields a chainable element +that can be followed by one or more assertions in the end.
+So first we target an element. +Next we can make one or more assertions on the element.
+We have created some helper commands for targeting an element: +getBySel, getBySelLike and getBySelStartEnd. +They look for elements as advised by the +Selecting Elements +section from the Cypress documentation about best practices.
+Example of a statement:
+// Targeting.
+cy.getBySel("reservation-success-title-text")
+ // Assertion.
+ .should("be.visible")
+ // Another assertion.
+ .and("contain", "Material is available and reserved for you!");
+
We are using Vitest as framework for running unit tests. +By using that we can test functions (and therefore also hooks) and classes.
+They have to be placed in src/tests/unit
.
Or they can also be placed next to the code at the end of a file as described +here.
+export const sum = (...numbers: number[]) =>
+ numbers.reduce((total, number) => total + number, 0);
+
+if (import.meta.vitest) {
+ const { describe, expect, it } = import.meta.vitest;
+
+ describe("sum", () => {
+ it("should sum numbers", () => {
+ expect(sum(1, 2, 3)).toBe(6);
+ });
+ });
+}
+
In that way it helps us to test and mock unexported functions.
+For testing hooks we are using the library
+@testing-library/react-hooks
+and you can also take a look at the text test
+to see how it can be done.
Error handling is something that is done on multiple levels: +Eg.: Form validation, network/fetch error handling, runtime errors. +You could also argue that the fact that the codebase is making use of typescript +and code generation from the various http services (graphql/REST) belongs to +the same idiom of error handling in order to make the applications more robust.
+Error boundary was introduced +in React 16 and makes it possible to implement a "catch all" feature +catching "uncatched" errors and replacing the application with a component +to the users that something went wrong. +It is meant ato be a way of having a safety net and always be able to tell +the end user that something went wrong. +The apps are being wrapped in the error boundary handling which makes it +possible to catch thrown errors at runtime.
+Async operations and therby also fetch
are not being handled out of the box
+by the React Error Boundary. But fortunately react-query
, which is being used
+both by the REST services (Orval) and graphql (Graphql Code Generator), has a
+way of addressing the problem. The QueryClient
can be configured to trigger
+the Error Boundary system
+if an error is thrown.
+So that is what we are doing.
Two different types of error classes have been made in order to handle errors +in the fetchers: http errors and fetcher errors.
+Http errors are the ones originating from http errors +and have a status code attached.
+Fetcher errors are whatever else bad that could apart from http errors. +Eg. JSON parsing gone wrong.
+Both types of errors comes in two variants: "normal" and "critical". The idea is +that only critical errors will trigger an Error Boundary.
For instance, if you look at the DBC Gateway fetcher, it throws a DbcGateWayHttpError in case an HTTP error occurs. DbcGateWayHttpError extends FetcherCriticalHttpError, which makes sure to trigger the Error Boundary system.
The reason why *.name is set in the errors is to make it clear which error was thrown. If we don't do that, the name of the parent class is used in the error log, and then it is more difficult to trace where the error originated.
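A minimal sketch of that pattern; the class names match the ones mentioned above, but the constructors and fields are illustrative, not the project's exact implementation:

```ts
// Base class for errors that should trigger the Error Boundary.
export class FetcherCriticalHttpError extends Error {
  constructor(public status: number, message: string) {
    super(message);
    this.name = "FetcherCriticalHttpError";
  }
}

// Setting this.name in the subclass makes the concrete error type show up
// in error logs instead of the parent class name.
export class DbcGateWayHttpError extends FetcherCriticalHttpError {
  constructor(status: number, message: string) {
    super(status, message);
    this.name = "DbcGateWayHttpError";
  }
}
```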
The initial implementation handles HTTP errors at a coarse level: if response.ok is false an error is thrown, and if the error is critical the Error Boundary is triggered. In future versions you could take more things into consideration regarding the error:
The FBS client adapter is autogenerated based on the swagger 1.2 JSON files from FBS. Orval, however, requires swagger version 2.0, and the adapter has some changes in paths and parameters, so some conversion is needed at the time of this writing.
+FBS documentation can be found here.
All this will hopefully change if/when the adapter comes with its own specifications.
The repository dpl-fbs-adapter-tool contains a tool built around PHP and NodeJS which can translate the FBS specification into one usable by the Orval client generator. It also filters out all the FBS calls not needed by the DPL project.
The tool uses go-task to simplify the execution of the commands.

Simply use the installation task.
+ +First convert the swagger 1.2 (located in /fbs/externalapidocs
) to swagger 2.0 using the api-spec-converter tool.
Then build the swagger specification usable by Orval and run Orval.
The FBS adapter lives at: https://github.com/DBCDK/fbs-cms-adapter
A set of React components and applications providing self-service features for Danish public libraries.
Before you can install the project you need to create the file ~/.npmrc to access the GitHub package registry as described, using a personal access token. The token must be created with the required scopes: repo and read:packages.

If you have npm installed locally this can be achieved by running the following command and using the token when prompted for a password.
Use the Node version specified in .nvmrc.
Run task dev:start.
Add 127.0.0.1 dpl-react.docker to your /etc/hosts file.
Run task dev:mocks:start.
If you want to enable step debugging you need to:
An access token must be retrieved from Adgangsplatformen, a single sign-on solution for public libraries in Denmark, and OpenPlatform, an API for Danish libraries.

Usage of these systems requires a valid client id and secret which must be obtained from your library partner or directly from DBC, the company responsible for running Adgangsplatformen and OpenPlatform.
This project includes a client id that matches the Storybook setup which can be used for development purposes. You can use the /auth story to sign in to Adgangsplatformen for the Storybook context.
(Note: if you enter Adgangsplatformen again after signing in, you will get signed out and need to log in again. This is not a bug, as you stay logged in otherwise.)
To test the apps that are indifferent to whether the user is authenticated or not, it is possible to set a library token via the library component in Storybook. Workflow:
+For static code analysis we make use of the Airbnb JavaScript Style Guide +and for formatting we make use of Prettier +with the default configuration. The above choices have been influenced by a +multitude of factors:
This makes future adoption easier for new contributors, and support can be expected for a long time.
AirBnB's only guideline towards this is that anonymous arrow functions are preferred over the normal anonymous function notation.
+When you must use an anonymous function (as when passing an inline callback), +use arrow function notation.
+++ +Why? It creates a version of the function that executes in the context of +this, which is usually what you want, and is a more concise syntax.
+Why not? If you have a fairly complicated function, you might move that logic +out into its own named function expression.
+
This project sticks to the above guideline as well. If we need to pass a function as part of a callback or in a promise chain, and on top of that need to pass some contextual variables that are not passed implicitly from either the callback or the previous link in the promise chain, we use an anonymous arrow function as our default.

This comes with the built-in disclaimer that if an anonymous function isn't required, the implementer should strongly consider moving the logic out into its own named function expression.

The named function is primarily desired because it is easier to debug in stack traces.
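To illustrate the guideline (the variable and function names below are made up for the example):

```ts
const baseUrl = "https://example.com";
const materialIds = ["870970-basis:54172613"];

// An anonymous arrow function is the default when we only need to pass
// contextual variables (baseUrl) along to the callback:
const urls = materialIds.map((id) => `${baseUrl}/materials/${id}`);

// If the callback grows more complicated, move the logic into its own
// named function: it shows up by name in stack traces.
function buildMaterialUrl(id: string): string {
  return `${baseUrl}/materials/${id}`;
}
const sameUrls = materialIds.map((id) => buildMaterialUrl(id));
```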
+// ./src/apps/my-new-application/my-new-application.jsx
+import React from "react";
+import PropTypes from "prop-types";
+
+export function MyNewApplication({ text }) {
+ return (
+ <h2>{text}</h2>
+ );
+}
+
+MyNewApplication.defaultProps = {
+ text: "The fastest man alive!"
+};
+
+MyNewApplication.propTypes = {
+ text: PropTypes.string
+};
+
+export default MyNewApplication;
+
// ./src/apps/my-new-application/my-new-application.entry.jsx
+import React from "react";
+import PropTypes from "prop-types";
+import MyNewApplication from "./my-new-application";
+
+// The props of an entry is all of the data attributes that were
+// set on the DOM element. See the section on "Naive app mount." for
+// an example.
+export function MyNewApplicationEntry(props) {
+ return <MyNewApplication text='Might be from a server?' />;
+}
+
+export default MyNewApplicationEntry;
+
// ./src/apps/my-new-application/my-new-application.dev.jsx
+import React from "react";
+import MyNewApplicationEntry from "./my-new-application.entry";
+import MyNewApplication from "./my-new-application";
+
+export default { title: "Apps|My new application" };
+
+export function Entry() {
+ // Testing the version that will be shipped.
+ return <MyNewApplicationEntry />;
+}
+
+export function WithoutData() {
+ // Play around with the application itself without server side data.
+ return <MyNewApplication />;
+}
+
Voila! Your browser should have opened and a Storybook environment is ready for you to tinker around in.
+Most applications will have multiple internal states, so to aid consistency, +it's recommended to:
+ +and use the following states where appropriate:
+initial
: Initial state for applications that require some sort of initialization, such as making a request to see if a material can be ordered, before rendering the order button. Errors in initialization can go directly to the failed state, or add custom states for communicating different error conditions to the user. Should render either nothing or as a skeleton/spinner/message.
ready
: The general "ready state". Applications that don't need initialization (a generic button for instance) can use ready as the initial state set in the useState call. This is basically the main waiting state.
processing
: The application is taking some action. For buttons this will be the state used when the user has clicked the button and the application is waiting for a reply from the backend. More advanced applications may use it while doing backend requests, if reflecting the processing in the UI is desired. Applications using optimistic feedback will render this state the same as the finished state.
failed
: Processing failed. The application renders an error message.
finished
: End state for one-shot actions. Communicates success to the user.
Applications can use additional states if desired, but prefer the above if +appropriate.
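A minimal sketch of how these states could be modelled in an app; the component name and endpoint are made up for the example:

```tsx
import React, { useState } from "react";

type AppStatus = "initial" | "ready" | "processing" | "failed" | "finished";

const OrderButton: React.FC = () => {
  // "ready" as the initial state since this button needs no initialization.
  const [status, setStatus] = useState<AppStatus>("ready");

  const onClick = async () => {
    setStatus("processing");
    try {
      const response = await fetch("/order", { method: "POST" });
      setStatus(response.ok ? "finished" : "failed");
    } catch {
      setStatus("failed");
    }
  };

  if (status === "failed") {
    return <p>Something went wrong.</p>;
  }
  if (status === "finished") {
    return <p>Your order was placed.</p>;
  }
  return (
    <button type="button" onClick={onClick} disabled={status === "processing"}>
      Order
    </button>
  );
};

export default OrderButton;
```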
+// ./src/apps/my-new-application/my-new-application.jsx
+import React from "react";
+import PropTypes from "prop-types";
+
+export function MyNewApplication({ text }) {
+ return (
+ <h2 className='warm'>{text}</h2>
+ );
+}
+
+MyNewApplication.defaultProps = {
+ text: "The fastest man alive!"
+};
+
+MyNewApplication.propTypes = {
+ text: PropTypes.string
+};
+
+export default MyNewApplication;
+
// ./src/apps/my-new-application/my-new-application.dev.jsx
+import React from "react";
+import MyNewApplicationEntry from "./my-new-application.entry";
+import MyNewApplication from "./my-new-application";
+
+import './my-new-application.scss';
+
+export default { title: "Apps|My new application" };
+
+export function Entry() {
+ // Testing the version that will be shipped.
+ return <MyNewApplicationEntry />;
+}
+
+export function WithoutData() {
+ // Play around with the application itself without server side data.
+ return <MyNewApplication />;
+}
+
Cowabunga! You now got styling in your application
This project includes styling created by its sister repository, the design system, as an npm package.
+By default the project should include a release of the design system matching +the current state of the project.
To update to the latest stable release of the design system run:
+ +This command installs the latest released version of the package. Whenever a +new version of the design system package is released, it is necessary +to reinstall the package in this project using the same command to get the +newest styling, because yarn adds a specific version number to the package name +in package.json.
If you need to work with published but unreleased code from a specific branch of the design system, you can also use the branch name as the tag for the npm package, replacing all special characters with dashes (-).
Example: To use the latest styling from a branch in the design system called feature/availability-label, run:
If the branch resides in a fork (usually before a pull request is merged) you +can use aliasing +and run:
+yarn config set "@my-fork:registry" "https://npm.pkg.github.com"
+yarn add @danskernesdigitalebibliotek/dpl-design-system@npm:@my-fork/dpl-design-system@feature-availability-label
+
If the branch is updated and you want the latest changes to take effect locally +update the release used:
+ +Note that references to unreleased code should never make it into official +versions of the project.
If the component is simple enough to be a primitive you would use on multiple occasions, it's called an 'atom', such as a button or a link. If it's more specific than that and is to be used across apps, we just call it a component. An example would be some type of media presented alongside a header and some text.
+The process when creating an atom or a component is more or less similar, but +some structural differences might be needed.
+// ./src/components/atoms/my-new-atom/my-new-atom.jsx
+import React from "react";
+import PropTypes from 'prop-types';
+
+/**
+ * A simple button.
+ *
+ * @export
+ * @param {object} props
+ * @returns {ReactNode}
+ */
+export function MyNewAtom({ className, children }) {
+ return <button className={`btn ${className}`}>{children}</button>;
+}
+
+MyNewAtom.propTypes = {
+ className: PropTypes.string,
+ children: PropTypes.node.isRequired
+}
+
+MyNewAtom.defaultProps = {
+ className: ""
+}
+
+export default MyNewAtom;
+
// ./src/apps/my-new-application/my-new-application.jsx
+import React, {Fragment} from "react";
+import PropTypes from "prop-types";
+
+import MyNewAtom from "../../components/atom/my-new-atom/my-new-atom"
+
+export function MyNewApplication({ text }) {
+ return (
+ <Fragment>
+ <h2 className='warm'>{text}</h2>
+ <MyNewAtom className='additional-class' />
+ </Fragment>
+ );
+}
+
+MyNewApplication.defaultProps = {
+ text: "The fastest man alive!"
+};
+
+MyNewApplication.propTypes = {
+ text: PropTypes.string
+};
+
+export default MyNewApplication;
+
Finito! You now know how to share code across applications
Repeat all of the same steps as with an atom but place it in its own directory inside components, such as ./src/components/my-new-component/my-new-component.jsx.
If you use Visual Studio Code we provide some easy to use and nice defaults for this project. They are located in .vscode.example. Simply rename the directory from .vscode.example to .vscode and you are good to go. This overwrites your global user settings for this workspace and suggests some extensions you might want.
There are two ways to use the components provided by this project:
+So let's say you wanted to make use of an application within an existing HTML +page such as what might be generated serverside by platforms like Drupal, +WordPress etc.
For this use case you should download the dist.zip package from the latest release of the project and unzip it somewhere within the web root of your project. The package contains a set of artifacts needed to use one or more applications within an HTML page.
<!DOCTYPE html>
+<html lang="en">
+<head>
+ <meta charset="UTF-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <meta http-equiv="X-UA-Compatible" content="ie=edge">
+ <title>Naive mount</title>
+ <!-- Include CSS files to provide default styling -->
+ <link rel="stylesheet" href="/dist/components.css">
+</head>
+<body>
+ <b>Here be dragons!</b>
+ <!-- Data attributes will be camelCased on the react side aka.
+ props.errorText and props.text -->
+ <div data-dpl-app='add-to-checklist' data-text="Chromatic dragon"
+ data-error-text="Minor mistake"></div>
+ <div data-dpl-app='a-none-existing-app'></div>
+
  <!-- Load order of scripts is of importance here -->
+ <script src="/dist/runtime.js"></script>
+ <script src="/dist/bundle.js"></script>
+ <script src="/dist/mount.js"></script>
+ <!-- After the necessary scripts you can start loading applications -->
+ <script src="/dist/add-to-checklist.js"></script>
+ <script>
+ // For making successful requests to the different services we need one or
+ // more valid tokens.
+ window.dplReact.setToken("user","XXXXXXXXXXXXXXXXXXXXXX");
+ window.dplReact.setToken("library","YYYYYYYYYYYYYYYYYYYYYY");
+
+ // If this function isn't called no apps will display.
+ // An app will only be displayed if there is a container for it
+ // and a corresponding application loaded.
+ window.dplReact.mount(document);
+ </script>
+</body>
+</html>
+
As a minimum you will need runtime.js and bundle.js. For styling of atoms and components you will need to import components.css.
Each application also has its own JavaScript artifact and it might have a CSS artifact as well, such as add-to-checklist.js and add-to-checklist.css.
To mount the application you need an HTML element with the correct data +attribute.
+ +The name of the data attribute should be data-dpl-app
and the value should be the name of the application - the value of the appName parameter assigned in the application .mount.js file.
As stated above, every application needs the corresponding data-dpl-app
+attribute to even be mounted and shown on the page. Additional data attributes
+can be passed if necessary. Examples would be contextual ids etc. Normally these
+would be passed in by the serverside platform e.g. Drupal, Wordpress etc.
<div data-dpl-app='add-to-checklist' data-id="870970-basis:54172613"
+ data-error-text="A mistake was made"></div>
+
The above data-id would be accessed as props.id and data-error-text as props.errorText in the entrypoint of an application.
// ./src/apps/my-new-application/my-new-application.entry.jsx
+import React from "react";
+import PropTypes from "prop-types";
+import MyNewApplication from './my-new-application.jsx';
+
+export function MyNewApplicationEntry({ id }) {
+ return (
+ <MyNewApplication
+ // 870970-basis:54172613
+ id={id}
+ />
  );
}
+
+export default MyNewApplicationEntry;
+
To fake this in our development environment we need to pass these same data +attributes into our entrypoint.
+// ./src/apps/my-new-application/my-new-application.dev.jsx
+import React from "react";
+import MyNewApplicationEntry from "./my-new-application.entry";
+import MyNewApplication from "./my-new-application";
+
+export default { title: "Apps|My new application" };
+
+export function Entry() {
+ // Testing the version that will be shipped.
+ return <MyNewApplicationEntry id="870970-basis:54172613" />;
+}
+
+export function WithoutData() {
+ // Play around with the application itself without server side data.
+ return <MyNewApplication />;
+}
+
If you want to extend this project - either by introducing new components or +expand the functionality of the existing ones - and your changes can be +implemented in a way that is valuable to users in general, please submit pull +requests.
+Even if that is not the case and you have special needs the infrastructure of +the project should also be helpful to you.
+In such a situation you should fork this project and extend it to your own needs +by implementing new applications. New applications +can reuse various levels of infrastructure provided by the project such as:
+Once the customization is complete the result can be packaged for distribution +by pushing the changes to the forked repository:
+master
branch of the forked repository will
+ automatically update the latest release of the fork.The result can be used in the same ways as the original project.
+ + + + + + + + + + + + + +We use Wiremock to enable request mocking and +reduce dependency on external systems during development.
The React components generally work with data from external systems. Having data in a specific state may be necessary when developing some features, but it can be cumbersome to set up exactly such a state, and many factors can reduce its usefulness over time: development may change it once a reservation is deleted, and time affects whether a loan is current or overdue.
+With all this in mind it is useful for us to be able to recreate specific +states. We do so using Wiremock. Wiremock +is a system which allows us to mock external APIs. It can be configured or +instrumented to return predefined responses given predefined requests.
+We use Wiremock through Docker based on the following setup:
+.env
docker compose up -d
dory up
.env
file with hostnames (and ports if necessary) for
+ Wiremock Docker containers e.g.:PUBLIZON_BASEURL=http://publizon-mock.docker`
+FBS_BASEURL=http://fbs-mock.docker`
+CMS_BASEURL=http://cms-mock.docker
+
yarn run dev
To set up a mocked response for a new request to FBS do the following:
+.docker/wiremock/fbs/mappings
The following example shows how one might create a mock which returns an error +if the client tries to delete a specific reservation.
+ + + + + + + + + + + + + + +In order to improve both UX and the performance score you can choose to use +skeleton screens in situation where you need to fill the interface +with data from a requests to an external service.
The skeleton screens are shown instantly in order to deliver some content to the end user fast while loading data. When the data arrives the skeleton screens are replaced with the real data.
+The skeleton screens are rendered with help from the +skeleton-screen-css library. + By using ssc classes + you can easily compose screens + that simulate the look of a "real" rendering with real data.
In this example we are showing a search result item as a skeleton screen. The skeleton screen consists of a cover, a headline and two lines of text. In this case we wanted to maintain the styling of the .card-list-item wrapper and show the skeleton screen elements by using ssc classes.
```tsx
import React from "react";
const SearchResultListItemSkeleton: React.FC = () => {
  return (
    // The markup below is a reconstruction: a cover, a headline and two
    // lines of text rendered with ssc utility classes inside the
    // .card-list-item wrapper (class names assumed from skeleton-screen-css).
    <div className="card-list-item">
      <div className="ssc-square" />
      <div className="ssc-head-line" />
      <div className="ssc-line" />
      <div className="ssc-line" />
    </div>
  );
};

export default SearchResultListItemSkeleton;
```
This document describes how to use the text functionality that is partly defined in src/core/utils/text.tsx and in src/core/text.slice.ts.
+The main purpose of the functionality is to be able to access strings defined +at app level inside of sub components +without passing them all the way down via props. +You can read more about the decision +and considerations here.
In order to use the system, the component that has the text props needs to be wrapped with the withText higher-order function. The texts can hereafter be accessed by using the useText hook.
In this example we have a HelloWorld app with three text props attached:
+import React from "react";
+import { withText } from "../../core/utils/text";
+import HelloWorld from "./hello-world";
+
+export interface HelloWorldEntryProps {
+ titleText: string;
+ introductionText: string;
+ whatText: string;
+}
+
+const HelloWorldEntry: React.FC<HelloWorldEntryProps> = (
+ props: HelloWorldEntryProps
+) => <HelloWorld />;
+
+export default withText(HelloWorldEntry);
+
Now it is possible to access the strings like this:
+import * as React from "react";
+import { Hello } from "../../components/hello/hello";
+import { useText } from "../../core/utils/text";
+
+const HelloWorld: React.FC = () => {
+ const t = useText();
+ return (
+ <article>
+ <h2>{t("titleText")}</h2>
+ <p>{t("introductionText")}</p>
+ <p>
+ <Hello shouldBeEmphasized />
+ </p>
+ </article>
+ );
+};
+export default HelloWorld;
+
It is also possible to use placeholders in the text strings. +They can be handy when you want dynamic values embedded in the text.
+A classic example is the welcome message to the authenticated user.
Let's say you have a text with the key: welcomeMessageText. The value from the data prop is: Welcome @user, today is @date. You would then need to reference it like this:
import * as React from "react";
+import { useText } from "../../core/utils/text";
+
+const HelloUser: React.FC = () => {
+ const t = useText();
+ const username = getUsername();
+ const currentDate = getCurrentDate();
+
+ const message = t("welcomeMessageText", {
+ placeholders: {
+ "@user": username,
+ "@date": currentDate
+ }
+ });
+
+ return (
+ <div>{message}</div>
+ );
+};
+export default HelloUser;
+
Sometimes you want two versions of a text to be shown depending on whether you have one or multiple items being referenced in the text.
+That can be accommodated by using the plural text definition.
+Let's say that an authenticated user has a list of unread messages in an inbox.
You could have a text key called: inboxStatusText. The value from the data prop is:
{"type":"plural","text":["You have 1 message in the inbox",
+"You have @count messages in the inbox"]}.
+
You would then need to reference it like this:
+import * as React from "react";
+import { useText } from "../../core/utils/text";
+
+const InboxStatus: React.FC = () => {
+ const t = useText();
+ const user = getUser();
+ const inboxMessageCount = getUserInboxMessageCount(user);
+
+ const status = t("inboxStatusText", {
+ count: inboxMessageCount,
+ placeholders: {
+ "@count": inboxMessageCount
+ }
+ });
+
+ return (
+ <div>{status}</div>
+ // If count == 1 the texts will be:
+ // "You have 1 message in the inbox"
+
+ // If count == 5 the texts will be:
+ // "You have 5 messages in the inbox"
+ );
+};
+export default InboxStatus;
+