
(feat) Allow image to use different frontend setup by modifying spa-build-config.json #777

Closed

Conversation


@ibacher (Member) commented Jan 5, 2024

This PR attempts to add a feature that makes the frontend image reusable outside of the RefApp context without interfering with the RefApp image. The main idea here is that the spa-build-config.json file is a relatively well-understood mechanism for describing a frontend, so if the user supplies a new spa-build-config.json, we simply process it and replace the frontend with the result. Changes are detected by comparing the SHA-512 checksum of the spa-build-config.json.
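Conceptually, the change detection works something like the sketch below (this is not the actual entrypoint code; the paths and rebuild commands are assumptions based on the rest of this thread):

#!/bin/sh
# minimal sketch of the checksum-based rebuild check; paths are assumptions
CONFIG=/openmrs/spa-build-config.json
SUM_FILE=/openmrs/spa-build-config.json.sha512

new_sum=$(sha512sum "$CONFIG" | cut -d ' ' -f 1)
old_sum=$(cat "$SUM_FILE" 2>/dev/null || true)

if [ "$new_sum" != "$old_sum" ]; then
  # the config changed: reassemble and rebuild the frontend, then record the new checksum
  npx openmrs@next assemble --manifest --mode config --config "$CONFIG" --target /usr/share/nginx/html
  npx openmrs@next build --build-config "$CONFIG" --target /usr/share/nginx/html
  printf '%s\n' "$new_sum" > "$SUM_FILE"
fi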

I'm less than happy with the story around how this checksum is persisted. Specifically, the "base" checksum is part of the image (to avoid needing a rebuild when you're not modifying spa-build-config.json), but if spa-build-config.json is modified, the checksum file is updated, and that update may not be persisted in the same way.

I'm also unsure whether this method of extending the image is suitable for most use cases that involve modifying the frontend.


@rbuisson (Contributor) commented Jan 5, 2024

@ibacher IIUC, if one provides a custom spa-build-config.json, then the Docker image will be rebuilt upon running the container for the first time, right? Does this process require internet access? So possibly, does one need to run the frontend once first (during a CI/CD process) to get the new frontend built and commit/publish that new image, or can it be safely done when deploying the application for actual use?


@ibacher (Member, Author) commented Jan 5, 2024

@rbuisson So the solution here actually adds a rebuild process to the container that runs the web server, so there isn't a Docker build involved. Essentially, it overwrites the frontend with the new configuration; that requires a few Docker volumes to make things persist correctly, but there's no additional CI/CD process required.

Does this process require internet access?

Currently, yes, but only when the shell needs to be rebuilt (so when the spa-build-config.json file changes). Otherwise no internet access is required. There is a way to do this without internet access at all, but it involves a spa-build-config.json file that looks something like:

{
  "frontendModules": {
     "@openmrs/esm-login-app": "file:///openmrs/frontend_modules/openmrs-esm-login-app-1.2.3.tgz",
     "@openmrs/esm-primary-navigation-app": "file:///openmrs/frontend_modules/openmrs-esm-primary-navigation-app-1.2.3.tgz"
  }
}

You then have to mount each of those tgz files into the /openmrs/frontend_modules directory, as shown below. (Fully supporting no network access also means a slight adjustment so that the openmrs CLI tool is installed, but that's trivial to do.)
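For illustration, the mounts might look something like this (the host paths are assumptions; the image tag matches the FRONTEND_VERSION default used later in this thread):

docker run \
  -v ./spa-build-config.json:/openmrs/spa-build-config.json \
  -v ./frontend_modules/openmrs-esm-login-app-1.2.3.tgz:/openmrs/frontend_modules/openmrs-esm-login-app-1.2.3.tgz \
  -v ./frontend_modules/openmrs-esm-primary-navigation-app-1.2.3.tgz:/openmrs/frontend_modules/openmrs-esm-primary-navigation-app-1.2.3.tgz \
  openmrs/openmrs-reference-application-3-frontend:nightly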

This is pretty feasible, but it might be best to have a simple script that could create that spa-build-config.json and download those tgz files.
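A minimal sketch of such a helper, assuming the package list from the example above (npm pack downloads each package's tarball from the registry; scoped packages produce filenames like openmrs-esm-login-app-1.2.3.tgz):

#!/bin/sh
# hypothetical helper (not part of this PR): download the tarballs and write
# the matching offline spa-build-config.json
set -e

mkdir -p frontend_modules
(cd frontend_modules && \
  npm pack @openmrs/esm-login-app@1.2.3 && \
  npm pack @openmrs/esm-primary-navigation-app@1.2.3)

cat > spa-build-config.json <<'EOF'
{
  "frontendModules": {
    "@openmrs/esm-login-app": "file:///openmrs/frontend_modules/openmrs-esm-login-app-1.2.3.tgz",
    "@openmrs/esm-primary-navigation-app": "file:///openmrs/frontend_modules/openmrs-esm-primary-navigation-app-1.2.3.tgz"
  }
}
EOF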

So possibly, does one need to run the frontend once first (during a CI/CD process) to get the new frontend built and commit/publish that new image, or can it be safely done when deploying the application for actual use?

I was trying to avoid the need to commit or publish an image, although doing that is much cleaner, e.g.:

# syntax=docker/dockerfile:1.3
FROM --platform=$BUILDPLATFORM node:18-alpine as dev

ARG APP_SHELL_VERSION=next
ARG FRONTEND_VERSION=nightly

RUN mkdir -p /app
WORKDIR /app

COPY spa-build-config.json .

# changing CACHE_BUST invalidates the Docker layer cache so the assemble and build steps re-run
ARG CACHE_BUST
RUN npx --legacy-peer-deps openmrs@${APP_SHELL_VERSION:-next} assemble --manifest --mode config --config spa-build-config.json --target ./spa
RUN npx --legacy-peer-deps openmrs@${APP_SHELL_VERSION:-next} build --build-config spa-build-config.json --target ./spa
RUN if [ ! -f ./spa/index.html ]; then echo 'Build failed. Please check the logs above for details. This may have happened because of an update to a library that OpenMRS depends on.'; exit 1; fi

FROM --platform=$BUILDPLATFORM openmrs/openmrs-reference-application-3-frontend:${FRONTEND_VERSION}

# clear the default files installed by nginx
RUN rm -rf /usr/share/nginx/html/*

COPY --from=dev /app/spa /usr/share/nginx/html

But obviously this then requires a container registry to store it, it needs to be updated at appropriate intervals, etc. (Technically, though, this is my preferred method of customization.)
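For reference, building and publishing that Dockerfile might look like this (the image tag and registry are placeholders):

# the changing CACHE_BUST value forces the assemble/build layers to re-run
docker build --build-arg CACHE_BUST=$(date +%s) -t my-registry/my-frontend:latest .
docker push my-registry/my-frontend:latest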

The idea here was for this to be doable on an already-built image. There's no need to run the frontend once beforehand; it just requires the following volumes:

volumes:
  - path/to/spa-build-config.json:/openmrs/spa-build-config.json
  - <volume>:/openmrs/spa-build-config.json.sha512 # I don't actually know if this is possible; this might need to be a bind mount
  - <volume>:/usr/share/nginx/html
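As a concrete docker run equivalent (the host paths and named volume are assumptions; the checksum uses a bind mount because named volumes can only mount directories):

# the checksum file should exist on the host first, or Docker will create a directory in its place
docker run \
  -v ./spa-build-config.json:/openmrs/spa-build-config.json \
  -v ./spa-build-config.json.sha512:/openmrs/spa-build-config.json.sha512 \
  -v frontend:/usr/share/nginx/html \
  openmrs/openmrs-reference-application-3-frontend:nightly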


@rbuisson (Contributor) commented Jan 19, 2024

From yesterday's discussion, we might rather try to:

  • make the spa-build-config.json a published artifact (Maven likely)
  • make the assemble command able to use an override file which would specify includes + excludes.

That way, groups wanting to override the Ref App list of ESMs would be able to depend on the published spa-build-config.json and provide their own overriding config, reassemble and build the frontend (likely via CI/CD), then mount that into the O3 community Docker image.
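As a rough sketch of what such a CI/CD step could look like (the Maven coordinates are placeholders, and the jq merge merely stands in for the proposed override mechanism, which doesn't exist yet):

#!/bin/sh
# hypothetical pipeline sketch; coordinates and file names are assumptions
set -e

# fetch the published base spa-build-config.json from Maven
mvn dependency:copy -Dartifact=org.openmrs:spa-build-config:1.0.0:json -DoutputDirectory=.

# overlay our own includes/excludes on top of the base config (jq stands in
# for the override-file support proposed above)
jq -s '.[0] * .[1]' spa-build-config-1.0.0.json overrides.json > spa-build-config.json

# reassemble and build the frontend; ./spa is then mounted into the O3 image
npx openmrs@next assemble --manifest --mode config --config spa-build-config.json --target ./spa
npx openmrs@next build --build-config spa-build-config.json --target ./spa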

@ibacher does the above correctly reflect what we said?


@ibacher (Member, Author) commented Jan 19, 2024

@rbuisson Yes. It turns out this will be slightly more complicated than I originally anticipated, because we were doing some funky but inconsequential things in the assemble command.

@ibacher closed this Jan 26, 2024