Add static asset build and s3-deploy deploy make task #1744

Merged: 7 commits, Nov 28, 2023
1 change: 1 addition & 0 deletions .gitignore
@@ -1,2 +1,3 @@
 .env
 prod.env
+build/
12 changes: 11 additions & 1 deletion Makefile
@@ -72,6 +72,14 @@ start-FULL-REBUILD: echo_vars stop rm-ALL ## Remove and restart all Docker containers
docker compose ${COMPOSE_FILE_ARGS} --env-file ${ENV_FILE} down
docker compose ${COMPOSE_FILE_ARGS} --env-file ${ENV_FILE} up --build

+build-web-assets: ## Build and extract static web assets for cloud deployment
+	docker compose ${COMPOSE_FILE_ARGS} --env-file ${ENV_FILE} create --build --force-recreate file-server
+	$(MAKE) extract-web-assets
+
+extract-web-assets: ## Extract static web assets from file-server for cloud deployment
+	/bin/rm -rf build
+	docker compose ${COMPOSE_FILE_ARGS} --env-file ${ENV_FILE} cp file-server:/app/build/ build

e2e-install: e2e/node_modules ## Install Cypress E2E testing tools
$(E2E_RUN) npm install

@@ -91,7 +99,9 @@ rbs: start-rebuild
@true

.PHONY: help pull start stop rm-containers rm-volumes rm-images rm-ALL hash build-no-cache start-rebuild \
-	start-recreate restart-FULL-REBUILD e2e-install e2e-run e2e-run-all e2e-run-some
+	start-recreate restart-FULL-REBUILD e2e-install e2e-prepare e2e-run-minimal e2e-run-standalone e2e-run-secret \
+	e2e-run-subset e2e-run-all build-web-assets extract-web-assets upload-web-assets


help:
@echo 'Usage: make <command>'
109 changes: 40 additions & 69 deletions bin/deploy-static-assets.clj
@@ -1,23 +1,37 @@
#!/usr/bin/env bb

-;; To use this script, you will need to install babashka: https://github.com/babashka/babashka#installation
-;; If you have homebrew/linuxbrew installed, you can use:
-;;
-;; brew install borkdude/brew/babashka
-;;
-;; Before deploying, use `make PROD start-rebuild` to get the system running, then from another shell, run
-;;
-;; docker cp polis-prod-file-server-1:/app/build build
-;;
-;; to copy over all of the static assets from the container to local directory.
-;; Next you will have to make sure that you have the AWS environment variables set.
-;;
-;; Then you should be able to run:
-;;
-;; ./bin/deploy-static-assets.clj --bucket preprod.pol.is --dist-path build
-;;
-;; This deploys to the `preprod.pol.is` bucket.
-;; To deploy to the production `pol.is` bucket, use instead `--bucket pol.is`.
+;; This script is a utility for deploying static web assets to AWS S3, as an alternative to the `file-server`
+;; container.
+;;
+;; To use this script, you will need to [install babashka](https://github.com/babashka/babashka#installation)
+;; and the AWS CLI. If you have homebrew/linuxbrew installed, you can accomplish both with:
+;;
+;; brew install borkdude/brew/babashka awscli
+;;
+;; Before deploying, use
+;;
+;; make build-web-assets
+;;
+;; to build and extract the web assets into the `build` directory.
+;;
+;; You may choose to run either with `PROD` settings specified in your `prod.env` file
+;; (`make PROD build-web-assets`), or with custom settings explicitly for deploying web assets
+;; (e.g. a `prod-web-assets.env` file) with `make ENV_FILE=prod-web-assets.env extract-web-assets`.
+;;
+;; Next you will have to make sure that you have the AWS environment variables set to authenticate the AWS
+;; CLI. There are quite a few ways to do this, and we recommend following AWS documentation for this. Possible
+;; routes include using `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables (not
+;; recommended, since non-privileged processes can read these environment variables), setting these values in
+;; your `~/.aws/config` file under a profile (either `default` or a custom profile if you set the
+;; `AWS_PROFILE` environment variable), with a combination of the `~/.aws/config` file and the
+;; `~/.aws/credentials` file, or with `aws sso login` if you are using AWS SSO (a.k.a. IAM Identity Center,
+;; the pathway recommended by AWS for organizational human user authentication). This script just calls out
+;; to the `aws` CLI, so as long as that CLI is properly authenticated/authorized and functioning, this
+;; script should work.
+;;
+;; Once all this is set up, you should be able to run (e.g.):
+;;
+;; ./bin/deploy-static-assets.clj --bucket my-aws-s3-bucket-name --dist-path build
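As one concrete illustration of the profile-based route the comments describe (the profile name and region here are placeholders, not part of this repository):

```ini
; ~/.aws/config -- example profile; "polis-deploy" and the region are illustrative
[profile polis-deploy]
region = us-east-1
output = json
```

With this in place, running the script as `AWS_PROFILE=polis-deploy ./bin/deploy-static-assets.clj --bucket my-aws-s3-bucket-name --dist-path build` would authenticate the `aws` CLI via that profile; the credentials themselves live in `~/.aws/credentials` or come from SSO.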


(require '[babashka.pods :as pods]
'[babashka.deps :as deps]
@@ -29,39 +43,7 @@
'[clojure.string :as string]
'[cheshire.core :as json])

-(pods/load-pod 'org.babashka/aws "0.0.6")
-(deps/add-deps '{:deps {honeysql/honeysql {:mvn/version "1.0.444"}}})
-
-(require '[pod.babashka.aws :as aws]
-         '[pod.babashka.aws.credentials :as aws-creds])
-
-;; Should move this to arg parsing if and when available
-(def region (or (System/getenv "AWS_REGION")
-                "us-east-1"))
-
-(def creds-provider
-  (aws-creds/basic-credentials-provider
-    {:access-key-id (System/getenv "AWS_ACCESS_KEY")
-     :secret-access-key (System/getenv "AWS_SECRET_KEY")}))
-
-(def s3-client
-  "The s3 client for this process"
-  (aws/client {:api :s3 :region region :credentials-provider creds-provider}))
-
-;; list available s3 actions
-;(map first (aws/ops s3-client))
-
-;; docs for specific action
-;(aws/doc s3-client :ListObjects)
-;(aws/doc s3-client :PutObject)
-
-;; basic listing contents example
-;(aws/invoke s3-client {:op :ListObjects :request {:Bucket "pol.is"}})
-;(->> (:Contents (aws/invoke s3-client {:op :ListObjects :request {:Bucket "preprod.pol.is"}}))
-     ;(map :Key)
-     ;(filter #(re-matches #".*\.headersJson" %)))
-;(->> (:Contents (aws/invoke s3-client {:op :ListObjects :request {:Bucket "preprod.pol.is"}}))
-     ;(filter #(re-matches #".*/fonts/.*" (:Key %))))

(defn file-extension [file]
(keyword (second (re-find #"\.([a-zA-Z0-9]+)$" (str file)))))
@@ -83,9 +65,6 @@
(def cache-buster-seconds 31536000);
(def cache-buster (format "no-transform,public,max-age=%s,s-maxage=%s" cache-buster-seconds cache-buster-seconds))

-;(json/decode (slurp (io/file "build/embed.html.headersJson"))
-             ;(comp keyword #(clojure.string/replace % #"-" "")))

(defn headers-json-data
[file]
(let [data (json/decode (slurp file)
@@ -105,7 +84,8 @@
[bucket base-path file]
(let [headers-file (io/file (str file ".headersJson"))]
(merge
-    {:Bucket bucket
+    {:file file
+     :Bucket bucket
:Body (io/input-stream (io/file file))
:Key (relative-path base-path file)
:ACL "public-read"}
@@ -124,32 +104,21 @@
(not (re-matches #".*\.headersJson" (str %))))) ;; omit, headersJson, since processed separately
(map (partial file-upload-request bucket path))))

-; Inspect how this parses to AWS S3 requests
-;(pp/pprint
-  ;(mapcat (partial spec-requests "preprod.pol.is") deploy-specs))
-
-;; Check content type mappings
-;(doseq [request
-        ;(mapcat (partial spec-requests "preprod.pol.is") deploy-specs)]
-  ;(println (:Key request) (:ContentType request)))
-
-;; test individual request
-;(spec-requests "preprod.pol.is" (nth deploy-specs 5))



;; synchronous execution

(defn process-deploy
"Execute AWS S3 request, and return result"
-  [request]
+  [{:as request :keys [Bucket Key ACL ContentType CacheControl ContentEncoding file]}]
   (println "Processing request:" request)
-  [request (aws/invoke s3-client {:op :PutObject :request request})])
-
-;(doseq [request (mapcat (partial spec-requests "preprod.pol.is") deploy-specs)]
-  ;(println "processing request for" (:Key request))
-  ;(let [response (aws/invoke s3-client {:op :PutObject :request request})]
-    ;(println response))))
+  [request
+   (process/sh "aws" "s3" "cp"
+               ;"--metadata" (json/encode (dissoc request :file :Bucket :Body :Key))
+               ;"--acl" ACL
+               "--content-type" ContentType
+               "--content-encoding" ContentEncoding
+               "--metadata-directive" "REPLACE"
+               (str file)
+               (str "s3://" Bucket "/" Key))])


;; process the aws requests asynchronously with parallelism 12
@@ -166,14 +135,16 @@
(defn responses [bucket path]
(let [requests (upload-requests bucket path)
output-chan (async/chan concurrent-requests)]
+    ;; pipeline pushes the request objects through the (map process-deploy) transducer in parallel, and
+    ;; collects results in the output chan
(async/pipeline-blocking concurrent-requests
output-chan
(map process-deploy)
(async/to-chan requests))
(async/<!! (async/into [] output-chan))))

(defn errors [responses]
-  (remove (comp :ETag second)
+  (remove (comp (partial = 0) :exit second) ; remove 0 exit status (0 is success)
responses))
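The pipeline-plus-error-filter pattern above (bounded parallelism of 12, pairing each request with its result, then keeping nonzero exit statuses) is analogous to a thread-pool map. A rough Python sketch, where `run_upload` is a stand-in for the `aws s3 cp` invocation and all names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

CONCURRENT_REQUESTS = 12  # same parallelism as the script's pipeline

def process_all(requests, run_upload):
    """Map run_upload over requests with bounded parallelism, returning
    (request, result) pairs, analogous to the core.async pipeline-blocking."""
    with ThreadPoolExecutor(max_workers=CONCURRENT_REQUESTS) as pool:
        results = list(pool.map(run_upload, requests))  # order-preserving
    return list(zip(requests, results))

def errors(responses):
    """Keep only pairs whose shell result has a nonzero exit status."""
    return [pair for pair in responses if pair[1].get("exit", 1) != 0]
```

A bounded pool matters here for the same reason `pipeline-blocking` does: it caps concurrent `aws` subprocesses so a large `build` directory does not spawn hundreds of uploads at once.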

(defn -main [& {:as opts-map :strs [--bucket --dist-path]}]
2 changes: 1 addition & 1 deletion docs/configuration.md
@@ -110,7 +110,7 @@ If you are deploying to a custom domain (not `pol.is`) then you need to update b
- **`DOMAIN_WHITELIST_ITEM_01`** - **`08`** up to 8 possible additional whitelisted domains for client applications to make API requests from. Typical setups that use the same URL for the API service as for the public-facing web sites do not need to configure these.
- **`EMBED_SERVICE_HOSTNAME`** should match **`API_PROD_HOSTNAME`** in production, or **`API_DEV_HOSTNAME`** in development. Embedded conversations make API requests to this host.
- **`SERVICE_URL`** used by client-report to make API calls. Only necessary if client-report is hosted separately from the API service. Can be left blank.
-- **`STATIC_FILES_HOST`** Used by the API service to fetch static assets (the compiled client applications) from a static file server. Within the docker compose setup this is `file-server`, but could be an external hostname, such as a CDN.
+- **`STATIC_FILES_HOST`** Used by the API service to fetch static assets (the compiled client applications) from a static file server. Within the docker compose setup this is `file-server`, but could be an external hostname, such as a CDN or S3 bucket.
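For instance, a deployment serving the compiled assets from an external host might set something like the following in its env file (the hostname is a placeholder, not a value from this repository):

```ini
STATIC_FILES_HOST=my-aws-s3-bucket-name.s3.amazonaws.com
```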

### Third Party API Credentials

Expand Down