diff --git a/docker-vars.yaml b/docker-vars.yaml
index f090ebe..770e385 100644
--- a/docker-vars.yaml
+++ b/docker-vars.yaml
@@ -11,7 +11,7 @@ tag: latest
minerva_tag: v7
# Ubuntu based image
-noctua_tag: v5
+noctua_tag: v6
#docker_hub_user: "{{ lookup('env', 'USER')|lower }}"
docker_hub_user: geneontology
diff --git a/production/PRODUCTION_README.md b/production/PRODUCTION_README.md
index c3828fa..2f7898b 100644
--- a/production/PRODUCTION_README.md
+++ b/production/PRODUCTION_README.md
@@ -1,63 +1,349 @@
# Noctua Production Deployment
+This repository enables the deployment of the Noctua stack to AWS. It includes minerva, barista, and noctua, and points to an external amigo instance. The architecture is designed so that sub-components can easily be provisioned, instantiated, and deployed. When it is time for the system to be destroyed, all subsystems and artifacts should be removed.
-This repository enables the deployment of the noctua stack to AWS. It includes
-minerva, barista, and noctua and it points to an external amigo instance.
-
-## Deploy a version of the Noctua editor (including minerva, barista, noctua):
- - Important ansible files:
- - vars.yaml
- - docker-vars.yaml
- - s3-vars.yaml
- - ssl-vars.yaml
- - stage.yaml
- - qos-vars.yaml
- - start_services.yaml
-
-## Artifacts Deployed To Staging directory On AWS:
- - blazegraph.jnl
- - Cloned repositories:
- - noctua-form, noctua-landing-page, noctua-models, go-site and noctua-visual-pathway-editor.
- - s3 credentials used to push apache logs to s3 buckets and to download ssl credentials from s3 bucket
- - qos.conf and robots.txt for apache mitigation
- - github OAUTH client id and secret
- - docker-production-compose and various configuration files from template directory
-
-## Requirements
-- Terraform. Tested using v1.1.4
-- Ansible. Tested using version 2.10.7
+## Conventions
+
+Creating and deploying instances on AWS involves the creation of multiple artifacts in Route 53, EC2, S3, and elsewhere. In order to easily identify all artifacts associated with a workspace instance, a naming convention is used; this allows for easy deletion when a workspace needs to be taken down. Gene Ontology devops follows a namespace pattern for workspaces: `go-workspace-_______`. For noctua, it is `go-workspace-noctua`. Similarly, a specific instance in the workspace follows the pattern `_____-production-YYYY-MM-DD`; e.g. `graphstore-production-2024-08-26` or `go-api-production-2023-01-30`. For noctua, it will be `noctua-production-YYYY-MM-DD`. For test instances, we use `___-noctua-test-YYYY-MM-DD`, where `___` should be replaced with your initials. The details about the instances will be stored in "folders" under the `go-workspace-noctua` S3 bucket.
+
+When an EC2 instance is created for a workspace stack instance, the instance name and the workspace name are the same.
+
+Log in to AWS and view the S3 bucket information: drill down by selecting 'go-workspace-graphstore' -> 'env:/' -> 'production-YYYY-MM-DD' -> graphstore -> terraform.tfstate. The Terraform state information can be downloaded. Specific EC2 instance details can be viewed by selecting EC2, clicking on Instances, and searching for entries with names containing 'production'; there should be an entry for `graphstore-production-YYYY-MM-DD`. DNS information will be under Route 53. The hosted zones section will have an entry for `geneontology.org` and `geneontology.io`. For production we use the `.org` domain and for testing we use the `.io` domain.
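+
+The same workspace "folders" can also be listed from the command line with the AWS CLI, assuming credentials are configured as described in Workflow step 2 below (illustrative, for the noctua workspace bucket):
+
+```
+aws s3 ls s3://go-workspace-noctua/env:/
+```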
+
+## Prerequisites
-## Development Environment
+Before starting, ensure the following are available:
-We have a docker based dev environment with all these tools installed. See last section of this README (Appendix I: Development Environment).
+1. AWS credentials (aws\_access\_key\_id and aws\_secret\_access\_key)
+2. SSH keys - Refer to on-boarding instructions
+3. github\_client\_id and github\_client\_secret - Github OAuth
+4. Docker. Docker commands are executed from a terminal window or Windows command prompt
+5. Blazegraph journal file. `production/gen_journal.sh` has instructions on creating one. Or download a test journal from the last release: http://current.geneontology.org/products/blazegraph/blazegraph-production.jnl.gz.
+6. Determine the workspace namespace pattern. If this is for testing purposes, the workspace name should include your initials and the label 'test'; for example, aes-noctua-test-2024-10-02 for testing, or noctua-production-2024-10-02 for production. Since the namespace will be used multiple times, and more importantly to prevent creating actual instances with labels containing the literal `YYYY-MM-DD`, open up a text editor and enter the namespace.
+
+In addition to noctua, other instances and artifacts will also be created. These should also follow the namespace pattern:
+
+|Item | Artifact | production | test |
+|---- | ------- | ------------------------------------------------ | ------------------------------------------------------|
+|6a | noctua bucket | go-workspace-noctua | go-workspace-noctua |
+|6b | certificate url | s3://go-service-lockbox/geneontology.org.tar.gz | s3://go-service-lockbox/geneontology.io.tar.gz |
+|6c | workspace name | noctua-production-YYYY-MM-DD | ___-noctua-test-YYYY-MM-DD |
+|6d | noctua | noctua-production-YYYY-MM-DD.geneontology.org | ___-noctua-test-YYYY-MM-DD.geneontology.io |
+|6e | barista | barista-production-YYYY-MM-DD.geneontology.org | ___-barista-test-YYYY-MM-DD.geneontology.io |
+|6f | current noctua url | http://noctua.geneontology.org | http://noctua.geneontology.io |
+|6g.1 | golr lookup | https://golr-aux.geneontology.org/solr/ | https://golr-aux.geneontology.io/solr/ |
+|6g.2 | neo lookup | https://noctua-golr.berkeleybop.org/ | https://noctua-golr.berkeleybop.org/ |
+|6h | barista url | https://barista-production-YYYY-MM-DD.geneontology.org | https://___-barista-test-YYYY-MM-DD.geneontology.io |
+
+Both production and testing will execute a command to initialize the workspace:
-The instructions in this document are run from the POV that we're working with this developement environment; i.e.:
```
-docker run --name go-dev -it geneontology/go-devops-base:tools-jammy-0.4.2 /bin/bash
+go-deploy -init --working-directory aws -verbose
+```
+
+The workspace name will be used to instantiate, deploy and destroy. The 3 commands are as follows:
+
+For production:
+
+- `go-deploy --workspace noctua-production-YYYY-MM-DD --working-directory aws -verbose --conf config-instance.yaml`
+- `go-deploy --workspace noctua-production-YYYY-MM-DD --working-directory aws -verbose --conf config-stack.yaml`
+- `go-deploy --workspace noctua-production-YYYY-MM-DD --working-directory aws -verbose -destroy`
+
+The instantiate and deploy commands can be tested first with the `-dry-run` option:
+
+- `go-deploy --workspace noctua-production-YYYY-MM-DD --working-directory aws -verbose -dry-run --conf config-stack.yaml`
+
+The instance name of the server will be `noctua-production-YYYY-MM-DD.geneontology.org`.
+
+A barista instance will be created on `barista-production-YYYY-MM-DD.geneontology.org`.
+
+For testing:
+
+- `go-deploy --workspace ___-noctua-test-YYYY-MM-DD --working-directory aws -verbose --conf config-instance.yaml`
+- `go-deploy --workspace ___-noctua-test-YYYY-MM-DD --working-directory aws -verbose --conf config-stack.yaml`
+- `go-deploy --workspace ___-noctua-test-YYYY-MM-DD --working-directory aws -verbose -destroy`
+
+The deploy command can be tested first with the `-dry-run` option:
+
+- `go-deploy --workspace ___-noctua-test-YYYY-MM-DD --working-directory aws -verbose -dry-run --conf config-stack.yaml`
+
+The instance name of the server will be `___-noctua-test-YYYY-MM-DD.geneontology.io`.
+
+A barista instance will be created on `___-barista-test-YYYY-MM-DD.geneontology.io`.
+
+Copy the above commands into a text editor and update the workspace names.
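+
+For example, with a hypothetical test workspace named `aes-noctua-test-2024-10-02`, the three commands would look like:
+
+```
+go-deploy --workspace aes-noctua-test-2024-10-02 --working-directory aws -verbose --conf config-instance.yaml
+go-deploy --workspace aes-noctua-test-2024-10-02 --working-directory aws -verbose --conf config-stack.yaml
+go-deploy --workspace aes-noctua-test-2024-10-02 --working-directory aws -verbose -destroy
+```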
+
+## Workflow
+
+In order to ensure reproducibility, a Docker development environment is created locally with the required tools for deployment. Configuration files are updated and then used by Terraform and Ansible to instantiate and configure the instances on AWS. Once an instance is created on AWS, artifacts are deployed to a staging directory on the instance, followed by deployment of the stack.
+
+1. Create a Docker development environment and clone repository from Github
+2. Update credentials for accessing and provisioning resources on AWS
+3. Add an entry in the AWS S3 for storing information about the workspace being created and initialize
+4. Update configuration file to instantiate instance on AWS
+5. Instantiate instance on AWS
+6. Update configuration files to deploy on AWS
+7. Deploy instance
+
+### 1. Create a Docker development environment and clone repository from Github
+
+We have a docker based dev environment with all these tools installed. See last section of this README (Appendix I: Development Environment).
+See Prerequisites 4 for docker.
+The instructions in this document are written from the point of view that they are executed within this development environment; i.e.:
+
+```
+docker run --name go-dev -it geneontology/go-devops-base:tools-jammy-0.4.4 /bin/bash
git clone https://github.com/geneontology/noctua_app_stack.git
+cd noctua_app_stack
+go-deploy -h
+```
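+
+If the `go-dev` container already exists from an earlier session, it can usually be re-entered rather than re-created; a sketch using standard Docker commands (not part of the original instructions):
+
+```
+docker start go-dev                  # restart the stopped container
+docker exec -it go-dev /bin/bash     # open a shell inside it
+```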
+
+### 2. Update credentials for accessing and provisioning resources on AWS
+
+#### Copy the ssh keys from your docker host into the running docker image, in `/tmp`:
+
+These commands may have to be executed from a separate terminal that can run Docker commands.
+See Prerequisites 2 for keys. Copy keys to docker image:
+
+```
+docker cp go-ssh go-dev:/tmp
+docker cp go-ssh.pub go-dev:/tmp
+```
+
+You should now have the following in your image:
+
+```
+ls -latr /tmp
+/tmp/go-ssh
+/tmp/go-ssh.pub
+```
+
+Make sure they have the right permissions to be used:
+
+```
+chmod 600 /tmp/go-ssh*
+```
+
+#### Establish the AWS credential file
+
+Within the running image, copy and modify the AWS credential file to the default location `/tmp/go-aws-credentials`.
+
+```
+cp production/go-aws-credentials.sample /tmp/go-aws-credentials
+```
+
+Add your personal dev keys into the file (Prerequisites 1); update the `aws_access_key_id` and `aws_secret_access_key`:
+
+```
+emacs /tmp/go-aws-credentials
+export AWS_SHARED_CREDENTIALS_FILE=/tmp/go-aws-credentials
+```
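+
+Once the credentials file is in place and `AWS_SHARED_CREDENTIALS_FILE` is exported, access can be sanity-checked against the noctua workspace bucket; an illustrative check (the same pattern is used later for the terraform backend):
+
+```
+aws s3 ls s3://go-workspace-noctua
+```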
+
+### 3. Add an entry in the AWS S3 for storing information about the workspace instance and initialize
+
+Update the entry for bucket = `REPLACE_ME_NOCTUA_TERRAFORM_BUCKET` to "go-workspace-noctua" (Item 6a):
+
+```
+cp ./production/backend.tf.sample ./aws/backend.tf
+emacs ./aws/backend.tf
+```
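+
+After editing, a quick check (illustrative) should show the noctua workspace bucket configured as the backend:
+
+```
+grep bucket ./aws/backend.tf
+# expected (roughly): bucket = "go-workspace-noctua"
+```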
+
+Initialize working directory for workspace and list workspace buckets
+
+```
+go-deploy -init --working-directory aws -verbose
+go-deploy --working-directory aws -list-workspaces -verbose
+```
+
+### 4. Update configuration file to instantiate instance on AWS
+
+Name: REPLACE_ME should be `noctua-production-YYYY-MM-DD` or `___-noctua-test-YYYY-MM-DD` - see Prerequisites 6 (Item 6c) for exact text.
+
+dns\_record\_name: should be `["noctua-production-YYYY-MM-DD.geneontology.org", "barista-production-YYYY-MM-DD.geneontology.org"]` or `["___-noctua-test-YYYY-MM-DD.geneontology.io", "___-barista-test-YYYY-MM-DD.geneontology.io"]` - see Prerequisites 6 (Item 6d, 6e) for exact text.
+
+dns\_zone\_id: should be `Z04640331A23NHVPCC784` for geneontology.org (production) or `Z1SMAYFNVK75BZ` for geneontology.io (testing).
+
+```
+cp ./production/config-instance.yaml.sample config-instance.yaml
+emacs config-instance.yaml
+```
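+
+As an illustrative check after editing (the values below assume a hypothetical production workspace dated 2024-10-02; yours will differ):
+
+```
+grep -E "Name|dns_record_name|dns_zone_id" config-instance.yaml
+# Name: noctua-production-2024-10-02
+# dns_record_name: ["noctua-production-2024-10-02.geneontology.org", "barista-production-2024-10-02.geneontology.org"]
+# dns_zone_id: Z04640331A23NHVPCC784
+```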
+
+### 5. Test and Instantiate instance on AWS
+
+Test the deployment with the `-dry-run` parameter. In the command given below, update `REPLACE_ME_WITH_S3_WORKSPACE_NAME` to something of the form `noctua-production-YYYY-MM-DD` (or the "test" version) to match the actual workspace name from Prerequisites 6 (Item 6c) and run:
+
+```
+go-deploy --workspace REPLACE_ME_WITH_S3_WORKSPACE_NAME --working-directory aws -verbose -dry-run --conf config-instance.yaml
+```
+
+Then, in the command given below, update `REPLACE_ME_WITH_S3_WORKSPACE_NAME` to something of the form `noctua-production-YYYY-MM-DD` (or the "test" version) to match the actual workspace name from Prerequisites 6 (Item 6c) and run:
+
+```
+go-deploy --workspace REPLACE_ME_WITH_S3_WORKSPACE_NAME --working-directory aws -verbose --conf config-instance.yaml
+```
+
+Note the IP address of the EC2 instance.
+
+The following commands are optional but useful.
+
+List workspaces to ensure one has been created:
+
+```
+go-deploy --working-directory aws -list-workspaces -verbose
```
-## Install Python deployment Script (skip if using dev environment)
-Note the script has a -dry-run option. You can always copy the command and execute manually
-Useful to run the ansible playbooks.
+Display the terraform state. Replace as specified in Prerequisites 6 (Item 6c)
```
->pip install go-deploy==0.4.2 # requires python >=3.8
->go-deploy -h
+go-deploy --workspace REPLACE_ME_WITH_S3_WORKSPACE_NAME --working-directory aws -verbose -show
```
-## S3 Terraform Backend
+Display the public IP address of the AWS instance. Update the command as specified in Prerequisites 6 (Item 6c):
+
+```
+go-deploy --workspace REPLACE_ME_WITH_S3_WORKSPACE_NAME --working-directory aws -verbose -output
+```
+
+Useful Information When Debugging. Replace as specified in Prerequisites 6 (Item 6c)
+
+The deploy command creates a terraform tfvars file. These variables override the variables in `aws/main.tf`:
+
+```
+cat REPLACE_ME_WITH_S3_WORKSPACE_NAME.tfvars.json
+```
+
+The deploy command also creates an Ansible inventory file. Replace as specified in Prerequisites 6 (Item 6c):
+
+```
+cat REPLACE_ME_WITH_S3_WORKSPACE_NAME-inventory.cfg
+```
+
+This can also be validated by logging into AWS and viewing the S3 bucket for 'go-workspace-noctua'. There will be an entry for the workspace instance name; drill down to view the Terraform details. There will be a running instance under EC2 instances, and the Route 53 hosted zones for 'geneontology.org' or 'geneontology.io' will have 'A' records for both noctua and barista.
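+
+The Route 53 records can also be checked from the command line; an illustrative check for a production workspace, using the geneontology.org hosted zone id from Workflow step 4:
+
+```
+aws route53 list-resource-record-sets --hosted-zone-id Z04640331A23NHVPCC784 | grep production
+```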
+
+Now that an instance has been created, you can login into the instance with the instance name or IP address. Replace host name as specified in Prerequisites 6 (Item 6d):
+
+```
+ssh -i /tmp/go-ssh ubuntu@noctua-production-YYYY-MM-DD.geneontology.org
+# or, for a test instance:
+ssh -i /tmp/go-ssh ubuntu@___-noctua-test-YYYY-MM-DD.geneontology.io
+logout
+```
+
+### 6. Update configuration files to deploy on AWS
+
+Modify the stack configuration file as follows:
+
+- `S3_BUCKET: REPLACE_ME_APACHE_LOG_BUCKET` This should be `S3_BUCKET: go-service-logs-noctua-production` for production or `S3_BUCKET: go-service-logs-noctua-test` for test instance
+- `USE_QOS: 0` should be `USE_QOS: 1`
+- `S3_SSL_CERTS_LOCATION: s3://REPLACE_ME_CERT_BUCKET/REPLACE_ME_DOMAIN.tar.gz` should be `S3_SSL_CERTS_LOCATION: s3://go-service-lockbox/geneontology.org.tar.gz` for production or `S3_SSL_CERTS_LOCATION: s3://go-service-lockbox/geneontology.io.tar.gz` for test instance. Replace as specified in Prerequisites 6 (Item 6b)
+
+Refer to Prerequisites 5. Copy the Blazegraph journal file into the `/tmp` directory and update `REPLACE_ME_FILE_PATH` with the full path; one way to do this is sketched below. Alternatively, retrieve a journal directly with something similar to `cd /tmp && wget http://skyhook.berkeleybop.org/blazegraph-20230611.jnl`.
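+
+The copy may have to be done from a separate terminal that can run Docker commands; a typical sequence using the commands from the original instructions (the file name assumes the journal downloaded in Prerequisites 5):
+
+```
+# from a host terminal that can run Docker
+docker cp blazegraph-production.jnl.gz go-dev:/tmp
+# inside the go-dev container
+gunzip /tmp/blazegraph-production.jnl.gz
+```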
+
+- `BLAZEGRAPH_JOURNAL: REPLACE_ME_FILE_PATH # /tmp/blazegraph-20230611.jnl` should be `BLAZEGRAPH_JOURNAL: /tmp/blazegraph-production.jnl`
+- `noctua_host: REPLACE_ME # noctua.geneontology.org or noctua.geneontology.io`. For production, update to the current production system, `noctua_host: http://noctua.geneontology.org`, or to `noctua_host: http://noctua.geneontology.io` for testing. Replace as specified in Prerequisites 6 (Item 6f).
+
+Replace as specified in Prerequisites 6 (Item 6c); update the year, month, and date for the current workspace. This is the same value used for `dns_record_name` in Workflow Step 4:
+
+- `noctua_host_alias: REPLACE_ME` should be `noctua_host_alias: noctua-production-YYYY-MM-DD.geneontology.org` or `noctua_host_alias: ___-noctua-test-YYYY-MM-DD.geneontology.io`
+
+- `noctua_lookup_url: REPLACE_ME # https://noctua-production-2024-10-15.geneontology.org or https://aes-noctua-test-2024-10-15.geneontology.io`. For production, should be `noctua_lookup_url: noctua-production-YYYY-MM-DD.geneontology.org`, or for testing, `noctua_lookup_url: ___-noctua-test-YYYY-MM-DD.geneontology.io`. Replace as specified in Prerequisites 6 (Item 6d).
+- `golr_neo_lookup_url: REPLACE_ME # https://golr-aux.geneontology.org/solr/ or https://golr-aux.geneontology.io/solr/`. Should be `golr_neo_lookup_url: https://noctua-golr.berkeleybop.org/` for both production and testing. Replace as specified in Prerequisites 6 (Item 6g.2).
+
+Refer to Prerequisites 3 and update the GitHub client id and GitHub client secret:
+
+- `github_client_id: 'REPLACE_ME'` should be `github_client_id: 'github client id'`
+- `github_client_secret: 'REPLACE_ME'` should be `github_client_secret: 'github client secret'`
+
+Refer to Prerequisites 3 and 6 (Item 6e) - update the year, month, and date of the current workspace for the barista instance:
+
+- `github_callback_url: REPLACE_ME # barista-production-2024-10-15.geneontology.org/auth/github/callback or https://aes-barista-test-2024-10-15.geneontology.io/auth/github/callback`. For production, update to `github_callback_url: barista-production-YYYY-MM-DD.geneontology.org/auth/github/callback` or for testing, `github_callback_url: ___-barista-test-YYYY-MM-DD.geneontology.io/auth/github/callback`
+
+Replace as specified in Prerequisites 6 (Item 6g.1) - update the year, month, and date of the current workspace for the golr instance:
+
+- `golr_lookup_url: REPLACE_ME # https://golr-aux.geneontology.org/solr/ or https://golr-aux.geneontology.io/solr/`. For production, should be `golr_lookup_url: https://golr-production-YYYY-MM-DD.geneontology.org/solr`, or for testing, `golr_lookup_url: https://___-golr-test-YYYY-MM-DD.geneontology.io/solr`.
+
+Refer to Prerequisites 6 - update the year, month, and date of the current workspace for the barista instance:
+
+- `barista_lookup_host: REPLACE_ME # barista-production-2024-10-15.geneontology.org or aes-barista-test-2024-10-15.geneontology.io`. For production, should be `barista_lookup_host: barista-production-YYYY-MM-DD.geneontology.org`, or for testing, `barista_lookup_host: ___-barista-test-YYYY-MM-DD.geneontology.io`. Replace as specified in Prerequisites 6 (Item 6e).
+- `barista_lookup_host_alias: REPLACE_ME # barista-production-2024-10-15.geneontology.org or am-barista-test-2024-10-15.geneontology.io`. For production, should be `barista_lookup_host_alias: barista-production-YYYY-MM-DD.geneontology.org`, or for testing, `barista_lookup_host_alias: ___-barista-test-YYYY-MM-DD.geneontology.io`. Replace as specified in Prerequisites 6 (Item 6e).
+- `barista_lookup_url: REPLACE_ME # https://barista-production-2024-10-15.geneontology.org or https://am-barista-test-2024-10-15.geneontology.io`. For production, should be `barista_lookup_url: https://barista-production-YYYY-MM-DD.geneontology.org`, or for testing, `barista_lookup_url: https://___-barista-test-YYYY-MM-DD.geneontology.io`. Replace as specified in Prerequisites 6 (Item 6h).
+
+```
+cp ./production/config-stack.yaml.sample ./config-stack.yaml
+emacs ./config-stack.yaml
+```
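+
+As a quick sanity check before deploying, something like the following (illustrative) confirms that no placeholders remain in the stack configuration:
+
+```
+grep -n "REPLACE_ME" config-stack.yaml || echo "all placeholders replaced"
+```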
+
+### 7. Deploy
+
+Test the deployment with the `-dry-run` parameter. Refer to Prerequisites 6 (Item 6c) and run:
+
+```
+go-deploy --workspace REPLACE_ME_WITH_S3_WORKSPACE_NAME --working-directory aws -verbose -dry-run --conf config-stack.yaml
+```
+
+Update the workspace name in the command below. Refer to Prerequisites 6 (Item 6c) and run:
+
+```
+go-deploy --workspace REPLACE_ME_WITH_S3_WORKSPACE_NAME --working-directory aws -verbose --conf config-stack.yaml
+```
+
+If the system prompts about host authenticity, reply yes:
+
+```
+The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xx)' can't be established.
+ED25519 key fingerprint is SHA256:------------------------.
+This key is not known by any other names
+Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
+```
+
+## Destroy Instance and Delete Workspace
+
+```sh
+# Make sure you are deleting the correct workspace. Refer to Prerequisites 6 (Item 6c) and run
+go-deploy --workspace REPLACE_ME_WITH_S3_WORKSPACE_NAME --working-directory aws -verbose -show
+
+# Destroy. Refer to Prerequisites 6 (Item 6c) and run
+go-deploy --workspace REPLACE_ME_WITH_S3_WORKSPACE_NAME --working-directory aws -verbose -destroy
+```
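+
+After the destroy completes, the workspace list can be re-checked with the same command used in step 3 to verify the result:
+
+```
+go-deploy --working-directory aws -list-workspaces -verbose
+```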
+
+## Additional information
+
+### Deploy a version of the Noctua editor (including minerva, barista, noctua):
+
+- Important ansible files:
+ - vars.yaml
+ - docker-vars.yaml
+ - s3-vars.yaml
+ - ssl-vars.yaml
+ - stage.yaml
+ - qos-vars.yaml
+ - start_services.yaml
+
+### Artifacts Deployed To Staging directory On AWS:
+
+- blazegraph.jnl
+- Cloned repositories:
+ - noctua-form, noctua-landing-page, noctua-models, go-site and noctua-visual-pathway-editor.
+- S3 credentials used to push apache logs to S3 buckets and to download ssl credentials from S3 bucket
+- qos.conf and robots.txt for apache mitigation
+- github OAuth client id and secret
+- docker-production-compose and various configuration files from template directory
+
+
+### Environment
+
+- Note: these requirements are met by the Docker-based development environment, where these tools are already installed
+- Terraform. Tested using v1.1.4
+- Ansible. Tested using version 2.10.7
+
+### S3 Terraform Backend
We use S3 terraform backend to store terraform's state. See production/backend.tf.sample
-## Github OAUTH
-Noctua uses OAUTH for authentication. See templates/github.yaml
+### GitHub OAuth
+
+Noctua uses OAuth for authentication. See templates/github.yaml
-## Prepare Blazegraph journal locally
+### Prepare Blazegraph journal locally
-if you do not have a journal see production/gen_journal.sh.sample to generate one
+If you do not have a journal, see production/gen_journal.sh.sample to generate one.
-## DNS
+### DNS
Note: DNS records are used for noctua and barista. The tool would create them during create phase and destroy them during destroy phase. See `dns_record_name` in the instance config file, ` noctua_host` and `barista_lookup_host` in the stack config file.
@@ -65,101 +351,122 @@ The aliases `noctua_host_alias` and `barista_lookup_host_alias` should be FQDN o
Once the instance has been provisioned and tested, this DNS record would need to be updated manually to point to the public ip address of the vm.
-## Golr/Amigo
-Use the dns name of the external golr instance running alongside amigo. For testing pourposes you can just use aes-test-golr.geneontology if you have deployed the amigo/golr stack or noctua-golr.berkeleybop.org if it is up and running.
+### GOlr/AmiGO
+
+Use the DNS name of the external golr instance running alongside amigo. For testing purposes you can just use aes-test-golr.geneontology if you have deployed the amigo/golr stack, or noctua-golr.berkeleybop.org if it is up and running.
+
+### SSH Keys
-## SSH Keys
For testing purposes you can use your own ssh keys. But for production please ask for the go ssh keys.
-## Prepare The AWS Credentials
+### Prepare The AWS Credentials
-The credentials are need by terraform to provision the AWS instance and are used by the provisioned instance to access the s3 bucket used as a certificate store and push aapache logs. One could also copy in from an existing credential set, see Appendix I at the end for more details.
+The credentials are needed by terraform to provision the AWS instance and are used by the provisioned instance to access the S3 bucket used as a certificate store and to push Apache logs. One could also copy in from an existing credential set; see Appendix I at the end for more details.
-- [ ] Copy and modify the aws credential file to the default location `/tmp/go-aws-credentials`
`cp production/go-aws-credentials.sample /tmp/go-aws-credentials`
+- [ ] Copy and modify the AWS credential file to the default location `/tmp/go-aws-credentials`
`cp production/go-aws-credentials.sample /tmp/go-aws-credentials`
- [ ] You will need to supply an `aws_access_key_id` and `aws_secret_access_key`. These will be marked with `REPLACE_ME`.
-## Prepare And Initialize The S3 Terraform Backend
+### Prepare And Initialize The S3 Terraform Backend
The S3 backend is used to store the terraform state.
Check list:
-- [ ] Assumes you have prepared the aws credentials above.
+
+- [ ] Assumes you have prepared the AWS credentials above.
- [ ] Copy the backend sample file
`cp ./production/backend.tf.sample ./aws/backend.tf`
-- [ ] Make sure you have the correct s3 bucket configured in the bakend file
`cat ./aws/backend.tf `
+- [ ] Make sure you have the correct S3 bucket configured in the backend file
`cat ./aws/backend.tf `
- [ ] Execute the command set right below in "Command set".
Command set:
```
-# Use the aws cli to make sure you have access to the terraform s3 backend bucket
+# Use the AWS CLI to make sure you have access to the terraform S3 backend bucket
export AWS_SHARED_CREDENTIALS_FILE=/tmp/go-aws-credentials
aws s3 ls s3://REPLACE_ME_WITH_TERRAFORM_BACKEND_BUCKET # S3 bucket
go-deploy -init --working-directory aws -verbose
```
-## Workspace Name
+### Workspace Name
Use these commands to figure out the name of an existing workspace if any. The name should have a pattern `production-YYYY-MM-DD`
Check list:
- [ ] Assumes you have initialized the backend. See above
+
```
go-deploy --working-directory aws -list-workspaces -verbose
```
-## Provision Instance on AWS
+### Provision Instance on AWS
Use the terraform commands shown above to figure out the name of an existing
workspace. If such workspace exists, then you can skip the
-provisionning of the aws instance. Or you can destroy the aws instance
+provisioning of the AWS instance. Or you can destroy the AWS instance
and re-provision if that is the intent.
Check list:
-- [ ] Choose your workspace name. We use the following pattern `production-YYYY-MM-DD`. For example: `production-2023-01-30`.
+
+- [ ] Choose your workspace name. We use the following pattern `noctua-production-YYYY-MM-DD`. For example: `noctua-production-2023-01-30`.
- [ ] Copy `production/config-instance.yaml.sample` to another location and modify using vim or emacs.
- [ ] Verify the location of the ssh keys for your AWS instance in your copy of `config-instance.yaml` under `ssh_keys`.
- [ ] Verify location of the public ssh key in `aws/main.tf`
- [ ] Remember you can use the -dry-run and the -verbose options to test "go-deploy"
- [ ] Execute the command set right below in "Command set".
-- [ ] Note down the ip address of the aws instance that is created. This can also be found in production-YYYY-MM-DD.cfg
+- [ ] Note down the IP address of the AWS instance that is created. This can also be found in noctua-production-YYYY-MM-DD-inventory.cfg
Command set:
+
```
cp ./production/config-instance.yaml.sample config-instance.yaml
cat ./config-instance.yaml # Verify contents and modify as needed.
+```
+
+### Deploy command.
+
+```
+go-deploy --workspace noctua-production-YYYY-MM-DD --working-directory aws -verbose --conf config-instance.yaml
+```
+
+### Display the terraform state
-# Deploy command.
-go-deploy --workspace production-YYYY-MM-DD --working-directory aws -verbose --conf config-instance.yaml
+```
+go-deploy --workspace noctua-production-YYYY-MM-DD --working-directory aws -verbose -show
+```
-# Display the terraform state
-go-deploy --workspace production-YYYY-MM-DD --working-directory aws -verbose -show
+### Display the public ip address of the AWS instance.
-# Display the public ip address of the aws instance.
-go-deploy --workspace production-YYYY-MM-DD --working-directory aws -verbose -output
+```
+go-deploy --workspace noctua-production-YYYY-MM-DD --working-directory aws -verbose -output
+```
-#Useful Information When Debugging.
-# The deploy command creates a terraform tfvars. These variables override the variables in `aws/main.tf`
-cat production-YYYY-MM-DD.tfvars.json
+### Useful Information When Debugging.
-# The Deploy command creates a ansible inventory file.
-cat production-YYYY-MM-DD-inventory.cfg
+The deploy command creates a terraform tfvars file. These variables override the variables in `aws/main.tf`:
+```
+cat noctua-production-YYYY-MM-DD.tfvars.json
```
-## Deploy Stack to AWS
+The deploy command also creates an Ansible inventory file:
+
+```
+cat noctua-production-YYYY-MM-DD-inventory.cfg
+```
+
+### Deploy Stack to AWS
Check list:
- [ ] Check that DNS names for noctua and barista map point to public ip address on AWS Route 53.
- [ ] Location of SSH keys may need to be replaced after copying config-stack.yaml.sample
- [ ] Github credentials will need to be replaced in config-stack.yaml.sample
-- [ ] s3 credentials are placed in a file using format described above
-- [ ] s3 uri if ssl is enabled. Location of ssl certs/key
-- [ ] qos mitigation if qos is enabled
+- [ ] S3 credentials are placed in a file using format described above
+- [ ] S3 URI if SSL is enabled. Location of SSL certs/key
+- [ ] QoS mitigation if QoS is enabled
- [ ] Location of blazegraph.jnl. This assumes you have generated the journal using steps above
- [ ] Use same workspace name as in previous step
- [ ] Remember you can use the -dry-run and the -verbose options
-- [ ] Optional When Testing: change dns names in the config file for noctua, barista, and golr.
+- [ ] Optional When Testing: change dns names in the config file for noctua, barista, and golr.
- [ ] Execute the command set right below in "Command set".
Command set:
@@ -168,32 +475,33 @@ Check list:
cp ./production/config-stack.yaml.sample ./config-stack.yaml
cat ./config-stack.yaml # Verify contents and modify if needed.
export ANSIBLE_HOST_KEY_CHECKING=False
-go-deploy --workspace production-YYYY-MM-DD --working-directory aws -verbose --conf config-stack.yaml
+go-deploy --workspace noctua-production-YYYY-MM-DD --working-directory aws -verbose --conf config-stack.yaml
```
-## Access noctua from a browser
+### Access noctua from a browser
Check list:
+
- [ ] noctua is up and healthy. We use health checks in docker compose file
-- [ ] Use noctua dns name. http://{noctua_host} or https://{noctua_host} if ssl is enabled.
+- [ ] Use the noctua DNS name: http://{noctua_host}, or https://{noctua_host} if SSL is enabled.
-## Debugging
+### Debugging
- ssh to machine. username is ubuntu. Try using dns names to make sure they are fine
- docker-compose -f stage_dir/docker-compose.yaml ps
-- docker-compose -f stage_dir/docker-compose.yaml down # whenever you make any changes
+- docker-compose -f stage_dir/docker-compose.yaml down # whenever you make any changes
- docker-compose -f stage_dir/docker-compose.yaml up -d
-- docker-compose -f stage_dir/docker-compose.yaml logs -f
+- docker-compose -f stage_dir/docker-compose.yaml logs -f
- Use -dry-run and copy and paste the command and execute it manually
-## Destroy Instance and Delete Workspace.
+### Destroy Instance and Delete Workspace.
```sh
Make sure you are deleting the correct workspace.
-go-deploy --workspace production-YYYY-MM-DD --working-directory aws -verbose -show
+go-deploy --workspace noctua-production-YYYY-MM-DD --working-directory aws -verbose -show
# Destroy.
-go-deploy --workspace production-YYYY-MM-DD --working-directory aws -verbose -destroy
+go-deploy --workspace noctua-production-YYYY-MM-DD --working-directory aws -verbose -destroy
```
## Appendix I: Development Environment
@@ -237,3 +545,31 @@ chown root /tmp/go-*
chgrp root /tmp/go-*
```
+## Appendix II: Integrating changes to the workbench
+
+The versions of minerva and noctua used by the application stack are based on what is specified in docker-vars.yaml. If there are updates that can be released to production, then a build has to be created with the changes and pushed to the Docker Hub Container Image Library. The minerva version can be updated via minerva_tag and the noctua version via noctua_tag.
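+
+For example, the tags currently set in `docker-vars.yaml` can be checked like this (the expected output matches the change at the top of this diff):
+
+```
+grep -E "minerva_tag|noctua_tag" docker-vars.yaml
+# minerva_tag: v7
+# noctua_tag: v6
+```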
+
+### Build Noctua
+
+```
+#Log in to Docker Hub with username and password, and build noctua version v6
+docker login
+git clone https://github.com/geneontology/noctua.git
+docker build -f docker/Dockerfile.noctua -t 'geneontology/noctua:v6' -t 'geneontology/noctua:latest' noctua
+
+#Ensure the build works
+docker run --name mv6 -it geneontology/noctua:v6 /bin/bash
+exit
+
+#Push to Dockerhub
+docker push geneontology/noctua:v6
+docker push geneontology/noctua:latest
+```
+
+### Build Minerva
+
+```
+#Clone and build minerva version v7
+git clone https://github.com/geneontology/minerva.git
+docker build -f docker/Dockerfile.minerva -t 'geneontology/minerva:v7' -t 'geneontology/minerva:latest' minerva
+
+#Ensure the build works, then push to Dockerhub
+docker run --name mv7 -it geneontology/minerva:v7 /bin/bash
+docker push geneontology/minerva:v7
+docker push geneontology/minerva:latest
+```
diff --git a/production/config-stack.yaml.sample b/production/config-stack.yaml.sample
index bfea1db..3f42921 100644
--- a/production/config-stack.yaml.sample
+++ b/production/config-stack.yaml.sample
@@ -25,27 +25,28 @@ stack:
# download or create journal locally and specify full path here.
# cd /tmp && wget http://skyhook.berkeleybop.org/blazegraph-20230611.jnl
+ # see production/gen_journal.sh.sample to generate one
BLAZEGRAPH_JOURNAL: REPLACE_ME_FILE_PATH # /tmp/blazegraph-20230611.jnl
- # HTTP OR HTTPS
- noctua_host: REPLACE_ME # aes-test-noctua.geneontology.io
- noctua_host_alias: REPLACE_ME
- noctua_lookup_url: REPLACE_ME # https://golr-aux.geneontology.io/solr/
- golr_neo_lookup_url: REPLACE_ME # https://aes-test-noctua.geneontology.io
+ # HTTPS
+ noctua_host: REPLACE_ME # noctua.geneontology.org or noctua.geneontology.io
+ noctua_host_alias: REPLACE_ME # noctua-production-2024-10-15.geneontology.org or aes-noctua-test-2024-10-15.geneontology.io
+ noctua_lookup_url: REPLACE_ME # https://noctua-production-2024-10-15.geneontology.org or https://aes-noctua-test-2024-10-15.geneontology.io
+ golr_neo_lookup_url: REPLACE_ME # https://golr-aux.geneontology.org/solr/ or https://golr-aux.geneontology.io/solr/
- # HTTP OR HTTPS
+ # HTTPS
github_client_id: 'REPLACE_ME'
github_client_secret: 'REPLACE_ME'
- github_callback_url: REPLACE_ME # https://aes-test-barista.geneontology.io/auth/github/callback
+ github_callback_url: REPLACE_ME # barista-production-2024-10-15.geneontology.org/auth/github/callback or https://aes-barista-test-2024-10-15.geneontology.io/auth/github/callback
- # HTTP OR HTTPS
- golr_lookup_url: REPLACE_ME # https://aes-test-golr.geneontology.io/solr
+ # HTTPS
+ golr_lookup_url: REPLACE_ME # https://golr-aux.geneontology.org/solr/ or https://golr-aux.geneontology.io/solr/
# HTTP OR HTTPS
- barista_lookup_host: REPLACE_ME # aes-test-barista.geneontology.io
- barista_lookup_host_alias: REPLACE_ME
- barista_lookup_url: REPLACE_ME # https://aes-test-barista.geneontology.io
+ barista_lookup_host: REPLACE_ME # barista-production-2024-10-15.geneontology.org or aes-barista-test-2024-10-15.geneontology.io
+  barista_lookup_host_alias: REPLACE_ME # barista-production-2024-10-15.geneontology.org or am-barista-test-2024-10-15.geneontology.io
+ barista_lookup_url: REPLACE_ME # https://barista-production-2024-10-15.geneontology.org or https://am-barista-test-2024-10-15.geneontology.io
USE_CLOUDFLARE: 0
scripts: [ stage.yaml, start_services.yaml]
diff --git a/production/gen_journal.sh.sample b/production/gen_journal.sh.sample
index 478b50d..e7c5fad 100755
--- a/production/gen_journal.sh.sample
+++ b/production/gen_journal.sh.sample
@@ -1,9 +1,17 @@
-# Modify java maximum heap based on your machine
+# Modify the Java maximum heap based on your machine, and update sdir so that it matches the path where the git clone is executed. Note: sdir has to be modified in both the export and docker run commands.
+# If necessary, modify the minerva version, e.g. geneontology/minerva:v7
export CMD="java -Xmx4G -jar minerva-cli.jar --import-owl-models -j /sdir/blazegraph.jnl -f /sdir/noctua-models/models"
git clone https://github.com/geneontology/noctua-models.git
-docker pull geneontology/minerva:v2
-echo docker run --rm -v $PWD:/sdir -t geneontology/minerva:v2 $CMD
-docker run --rm -v $PWD:/sdir -t geneontology/minerva:v2 $CMD
+# If the above command takes too long to run or is unsuccessful, retrieve a shallow copy instead
+# For testing purposes, only a subset of the models in the 'noctua-models/models' directory is required; roughly 1000 .ttl files in 'noctua-models/models' are enough
+#git clone https://github.com/geneontology/noctua-models.git --depth 1
+docker pull geneontology/minerva:v7
+
+# Modify sdir
+echo docker run --rm -v $PWD:/sdir -t geneontology/minerva:v7 $CMD
+
+# Modify sdir
+docker run --rm -v $PWD:/sdir -t geneontology/minerva:v7 $CMD
exit 1
diff --git a/templates/httpd-vhosts-prod-barista.conf b/templates/httpd-vhosts-prod-barista.conf
index 76f2b14..d01f466 100644
--- a/templates/httpd-vhosts-prod-barista.conf
+++ b/templates/httpd-vhosts-prod-barista.conf
@@ -3,10 +3,9 @@
ServerName {{ barista_lookup_host }}
ServerAlias {{ barista_lookup_host_alias }}
- Alias /robots.txt /var/www/html/robots.txt
- RewriteEngine On
- RewriteRule ^/robots.txt /robots.txt
-
+ #Do not use permanent for testing Redirect permanent / https://{{barista_lookup_host_alias}}/
+ Redirect / https://{{barista_lookup_host_alias}}/
+
## Get aggressive with badly behaving bots.
RewriteCond %{HTTP_USER_AGENT} ^.*Adsbot.*$ [OR]
RewriteCond %{HTTP_USER_AGENT} ^.*AhrefsBot.*$ [OR]
@@ -21,14 +20,9 @@
RewriteCond %{HTTP_USER_AGENT} ^.*semrush.*$ [OR]
RewriteCond %{HTTP_USER_AGENT} ^.*WhatWeb.*$ [OR]
RewriteCond %{HTTP_USER_AGENT} ^.*WhatWeb.*$
- RewriteRule . - [R=403,L]
+ RewriteRule . - [R=403,L]
+
+
- ErrorLog "/var/log/apache2/barista-error.log"
- CustomLog "/var/log/apache2/barista-access.log" common
- ## Proxy.
- ProxyPreserveHost On
- ProxyRequests Off
- ProxyPass / http://barista:3400/
- ProxyPassReverse / http://barista:3400/
diff --git a/templates/httpd-vhosts-prod-noctua.conf b/templates/httpd-vhosts-prod-noctua.conf
index 201c2d4..43f72bb 100644
--- a/templates/httpd-vhosts-prod-noctua.conf
+++ b/templates/httpd-vhosts-prod-noctua.conf
@@ -3,11 +3,8 @@
ServerName {{ noctua_host }}
ServerAlias {{ noctua_host_alias }}
- ## Setup robots.txt.
- DocumentRoot /var/www/html
- Alias /robots.txt /var/www/html/robots.txt
- RewriteEngine On
- RewriteRule ^/robots.txt /robots.txt
+ #Do not use permanent for testing Redirect permanent / https://{{noctua_host_alias}}/
+ Redirect / https://{{noctua_host_alias}}/
## Get aggressive with badly behaving bots.
RewriteCond %{HTTP_USER_AGENT} ^.*Adsbot.*$ [OR]
@@ -23,15 +20,6 @@
RewriteCond %{HTTP_USER_AGENT} ^.*semrush.*$ [OR]
RewriteCond %{HTTP_USER_AGENT} ^.*WhatWeb.*$ [OR]
RewriteCond %{HTTP_USER_AGENT} ^.*WhatWeb.*$
- RewriteRule . - [R=403,L]
+ RewriteRule . - [R=403,L]
-
- ErrorLog "/var/log/apache2/noctua-error.log"
- CustomLog "/var/log/apache2/noctua-access.log" common
-
- ## Proxy.
- ProxyPreserveHost On
- ProxyRequests Off
- ProxyPass / http://noctua:8910/
- ProxyPassReverse / http://noctua:8910/