diff --git a/docs/developer/develop-vscode.md b/docs/developer/develop-vscode.md index c74b5887d..a8c58ac3b 100644 --- a/docs/developer/develop-vscode.md +++ b/docs/developer/develop-vscode.md @@ -4,7 +4,7 @@ This procedure will get you up and running with a Visual Studio Code environment for Turbinia development. The provided configuration files will create a development container containing all dependencies, pylint and yapf correctly setup and launch configurations for both client, server and workers. With this setup it is possible to initiate full Turbinia debug sessions including breakpoints, watches and stepping. -You can set Visual Studio Code up to run a local stack (using redis and celery) or use a hybrid GCP stack (using pubsub, datastore and cloud functions). We advice you to run a local stack if you don't need to debug or develop Turbinia GCP functionality. +You can set Visual Studio Code up to run a local stack (using redis and celery). ## Before you start @@ -39,8 +39,6 @@ _Note_: If vscode does not ask you to reopen in a container you need to verify y _Note_: The instructions contain shell commands to execute, please execute those commands in the vscode terminal (which runs in the development container) and not in a terminal on your host! -Continue with Step 4 for a local Turbinia setup or Step 5 for a GCP hybrid setup. - #### Step 4 - Local Turbinia setup The local turbinia setup will use redis and celery. Let's create the configuration file for this setup. @@ -51,53 +49,11 @@ _Note_: This command needs to be executed in the vscode terminal! $ sed -f ./docker/vscode/redis-config.sed ./turbinia/config/turbinia_config_tmpl.py > ~/.turbiniarc ``` -Let's verify the installation in Step 6. - -#### Step 5 - GCP hybrid Turbinia setup - -Follow the ‘GCP Setup’ section [here](../user/install-gcp-pubsub.md) and setup Cloud Functions, a GCE bucket, Datastore and PubSub. - -- Create a pubsub topic, eg ‘turbinia-dev’ -- Create a GCE storage bucket with a unique name - -Create the Turbinia hybrid configuration file. - -_Note_: This command needs to be executed in the vscode terminal! - -``` -$ sed -f ./docker/vscode/psq-config.sed ./turbinia/config/turbinia_config_tmpl.py > ~/.turbiniarc -``` - -Edit the configuration file `~/.turbiniarc` and set below variables according to the GCP project you are using. Make sure all values are between quotes! - -``` -TURBINIA_PROJECT = '[your_gcp_project_name]' -TURBINIA_REGION = '[your_preferred_region]' eg 'us-central1' -TURBINIA_ZONE = '[your_preferred_zone]' eg 'us-central1-f' -PUBSUB_TOPIC = '[your_gcp_pubsub_topic_name]' eg 'turbinia-dev' -BUCKET_NAME = '[your_gcp_bucket_name]' -``` - -Setup authentication for the GCP project. +Let's verify the installation in Step 5. -_Note_: These commands need to be executed in the vscode terminal! - -``` -$ gcloud auth login -$ gcloud auth application-default login -``` +#### Step 5 - Turbinia installation verification -Deploy the Google Cloud Functions - -_Note_: This command needs to be executed in the vscode terminal! - -``` -$ PYTHONPATH=. python3 tools/gcf_init/deploy_gcf.py -``` - -#### Step 6 - Turbinia installation verification - -Let's verify that the GCP hybrid setup is working before we start developing and debugging. We are going to start a server and worker in separate vscode terminals and create a Turbinia request in a third. Open up 3 vscode terminals and execute below commands. +Let's verify that the local setup is working before we start developing and debugging. 
We are going to start a server and worker in separate vscode terminals and create a Turbinia request in a third. Open up 3 vscode terminals and execute the commands below. _Note_: These commands need to be executed in the vscode terminal! @@ -114,12 +70,6 @@ For a local setup $ python3 turbinia/turbiniactl.py celeryworker ``` -For a GCP hybrid setup - -``` -$ python3 turbinia/turbiniactl.py psqworker -``` - Terminal 3 - Fetch and process some evidence ``` @@ -129,7 +79,7 @@ $ python3 turbinia/turbiniactl.py compresseddirectory -l /evidence/history.tgz $ python3 turbinia/turbiniactl.py -a status -r [request_id] ``` -This should process the evidence and show output in each terminal for server and worker. Results will be stored in `/evidence` and in the GCS bucket. +This should process the evidence and show output in each terminal for server and worker. Results will be stored in `/evidence`. #### Step 8 - Debugging example diff --git a/docs/user/api-server.md b/docs/user/api-server.md index 789e1d50f..eae37a5c8 100644 --- a/docs/user/api-server.md +++ b/docs/user/api-server.md @@ -4,17 +4,17 @@ Turbinia's API server provides a RESTful interface to Turbinia's functionality. It allows users to create and manage logical jobs, which are used to schedule forensic processing tasks. The API server also provides a way for users to monitor the status of their jobs and view the results of their processing tasks. ## Getting started -The following sections describe how to get the Turbinia API server up and running. Please note that The API server is only compatible with Turbinia deployments that use Redis as a datastore and Celery workers. If your deployment uses GCP PubSub and/or GCP PSQ workers you will not be able to use the API server. GCP PubSub/PSQ dependencies will be deprecated in the near future so it is recommended to redeploy Turbinia and use Redis and Celery. +The following sections describe how to get the Turbinia API server up and running. Please note that the API server is only compatible with Turbinia deployments that use Redis as a datastore and Celery workers. If your deployment uses the old GCP PubSub and/or GCP PSQ workers, you will not be able to use the API server. It is recommended to redeploy Turbinia and use Redis and Celery. ### Installation -To use the Turbinia API server you will need to deploy Turbinia in your environment with a configuration that uses Redis and Celery instead of GCP PubSub and PSQ. +To use the Turbinia API server you will need to deploy Turbinia in your environment with a configuration that uses Redis and Celery. -Please follow the instructions for deploying a [Turbinia GKE Celery cluster](install-gke-celery.md) or [local stack using Celery](turbinia-local-stack.md) +Please follow the instructions for deploying a [Turbinia GKE Celery cluster](https://github.com/google/osdfir-infrastructure/tree/main/charts/turbinia) or a [local stack using Celery](turbinia-local-stack.md). Note that the Turbinia API server requires access to the Turbinia output directory (```OUTPUT_DIR```) ### Configuration and UI -If you plan on making the Turbinia API Server and Web UI externally accessible (e.g. internet access), follow the instructions for [external access and authentication](install-gke-external.md) +If you plan on making the Turbinia API Server and Web UI externally accessible (e.g.
internet access), follow the instructions for [external access and authentication](https://github.com/google/osdfir-infrastructure/tree/main/charts/turbinia) ### Usage You may access the API server at ```http://:```, or via https if you deployed Turbinia for external access using a domain and HTTPS certificate. @@ -24,6 +24,6 @@ Because the Turbinia API Server is built using the FastAPI framework, it provide We also provide a [command-line tool](https://github.com/google/turbinia/tree/master/turbinia/api/cli) and a [Python library](https://github.com/google/turbinia/tree/master/turbinia/api/client) to interact with the API server. ### Authentication -Turbinia API Server uses OAuth2-proxy to provide OpenID Connect and OAuth2 authentication support. If you deployed Turbinia using GCP and GKE cluster instructions, follow the guide for [external access and authentication](install-gke-external.md) to complete the authentication configuration. +Turbinia API Server uses OAuth2-proxy to provide OpenID Connect and OAuth2 authentication support. If you deployed Turbinia using GCP and GKE cluster instructions, follow the guide for [external access and authentication](https://github.com/google/osdfir-infrastructure/tree/main/charts/turbinia) to complete the authentication configuration. For Turbinia deployments using the [local stack](turbinia-local-stack.md), or a non-Google identity provider, make sure to edit the ```oauth2_proxy.cfg``` configuration file in ```docker/oauth2_proxy``` with the appropriate identity provider information such as ```client_id``` and ```client_secret``` prior to deploying the Docker containers in the local stack. If your deployment will use an identity provider other than Google, you will also need to change the ```provider``` and related settings. For more information and how to configure OAuth2-proxy for different providers, refer to the [OAuth2-Proxy Documentation](https://oauth2-proxy.github.io/oauth2-proxy/docs/configuration/oauth_provider). \ No newline at end of file diff --git a/docs/user/gke-sre.md b/docs/user/gke-sre.md deleted file mode 100644 index 501924690..000000000 --- a/docs/user/gke-sre.md +++ /dev/null @@ -1,383 +0,0 @@ -# **GKE SRE Guide to Turbinia** - -## Introduction - -This document covers the Turbinia SRE guide for Google Cloud Kubernetes. It will -cover topics to manage the Turbinia infrastructure in the Kubernetes environment -and includes the Prometheus/Grafana monitoring stack. - -## Debugging Task Failures - -At times, Turbinia may report back some failures after processing some Evidence. -Given that Turbinia Jobs and Tasks can be created to run third party tools, -Turbinia can not anticipate all failures that may occur, especially with a third -party tool. Here are some debugging steps you can take to further investigate -these failures. - -- Refer to the [debugging documentation](debugging.md) - for steps on grabbing the status of a Request or Task that has failed. -- If the debugging documentation doesn’t provide enough information to the Task - failure, you may also grab and review stderr logs for the Task that has failed. - - stderr logs can be found in the path specified in the Turbinia `OUTPUT_DIR`. - The directory containing all Task output can be identified in the directory format - `--`. - - Turbinia logs can be found in the path specified at `LOG_DIR`. -- Determine whether the failure has occurred before by checking the Error - Reporting console, if `STACKDRIVER_TRACEBACK` was enabled in the Turbinia config. 
- All Turbinia exceptions will be logged to the console and can be helpful to - check to see if the Error has been seen before and whether or not it has been - acknowledged/tracked in an issue. -- Determine whether the Task failure is being tracked in a Github issue. If the - failure occurred from a third party tool, then we’ll likely NOT have tracked - this since the issue would have to be raised with the third party tool rather - than Turbinia. -- If the issue seems to be related to the third party tool, file a bug to the - associated repo else file one for the Turbinia team. - -### Turbinia Controller - -In addition to the troubleshooting steps above, you may also consider deploying -the Turbinia controller to the GKE cluster for further troubleshooting. The -controller pod has the Turbinia client installed and is configured to use your -Turbinia GKE instance. You may create Turbinia requests from this pod to process -GCP disks within your project as well as have access to all Turbinia logs and output -stored in the Filestore path. To deploy the Turbinia controller, please take the following steps. - -If using Turbinia Pubsub - -``` -./k8s/tools/deploy-pubsub-gke.sh --deploy-controller -``` - -If using Turbinia Celery/Redis - -``` -./k8s/tools/deploy-celery-gke.sh --deploy-controller -``` - -Please note that the commands above will also deploy the rest of the infrastructure so -if you'd like to deploy the pod to an existing infrastructure, you can run -`kubectl create -f k8s/common/turbinia-controller.yaml`. Please ensure that you -have the correct `turbiniavolume` filestore path prior to deploying. - -## GKE Infrastructure - -### Preparation - -The GKE stack is managed with the [update-gke-infra.sh](https://github.com/google/turbinia/raw/master/k8s/tools/update-gke-infra.sh) management script. This script can be run from any workstation or cloud shell. -Please follow the steps below on a workstation or cloud shell prior to running -the script. - -- Clone the Turbinia repo or the update-gke-infra.sh script directly. -- Install [Google Cloud SDK](https://cloud.google.com/sdk/docs/install), which - installs the gcloud and kubectl cli tool. -- Authenticate with the Turbinia cloud project: - - `gcloud auth application-default login` -- Connect to the cluster - - `gcloud container clusters get-credentials [cluster] --zone [zone] --project [project]` - -## Updating the Turbinia infrastructure - -The following section will cover how to make updates to the Turbinia -configuration file, environment variables, and updating the Turbinia Docker -image. - -### Update the Turbinia configuration - -The Turbinia configuration is base64 encoded as a ConfigMap value named -`TURBINIA_CONF`. This is then read by the Turbinia Server and Workers as an -environment variable. Any changes made to the configuration do NOT require a -Server/Worker restart if using the `update-gke-infra.sh` as the script will -automatically restart the pods through a `kubectl rollout` - -Please ensure you have the latest version of the configuration file before -making any changes. The new configuration can be loaded into the Turbinia stack -through the following command - -- `$ ./update-gke-infra.sh -c update-config -f [path-to-cleartext-config]` -- Note: the script will automatically encode the config file passed in as base64 - -### Update an environment variable - -The Turbinia stack sets some configuration parameters through Deployment files, -one for the Turbinia Server and one for Workers. 
In order to update an -environment variable, run the following command. - -- `$ ./update-gke-infra.sh -c update-config -k [env-variable-name] -v [env-variable-value]` - -### Updating the Turbinia Docker image - -Turbinia is currently built as a Docker image which runs in a containerd -environment. - -#### Updating to latest - -When a new version of Turbinia is released, a production Docker image will be -built for both the Server and Worker and tagged with the `latest` tag or a tag -specifying the [release date](https://github.com/google/turbinia/releases). -It is recommended to specify the latest release date tag (e.g. `20220701`) instead -of the `latest` tag to prevent Worker pods from picking up a newer version than the rest of the -environment as they get removed and re-created through auto scaling. Additionaly, -an older release date can be specified if you'd like to rollback to a different -version of Turbinia. These updates can be done through the commands below. - -- `$ ./update-gke-infra.sh -c change-image -t [tag]` - -## Scaling Turbinia - -### Scaling Turbinia Worker Pods - -Turbinia GKE automatically scales the number of Worker pods based on processing -demand determined by the CPU utilization average across all pods. As demand -increases, the number of pods scale up until the CPU utilization is below a -determined threshold. Once processing is complete, the number of Worker pods -will scale down. The current autoscaling policy is configured in the -[turbinia-autoscale-cpu.yaml](https://github.com/google/turbinia/blob/master/k8s/common/turbinia-autoscale-cpu.yaml) -file. - -There is a default setting of 3 Worker pods to run at any given time with the -ability to scale up to 50 Worker pods across all nodes in the GKE cluster. -In order to update the minimum number of Worker pods running at a given time, -update the `minReplicas` value with the desired number of pods. In order to update -the max number of pods to scale, update the `maxReplicas` value with the desired -number. These changes should be updated in the [turbinia-autoscale-cpu.yaml](https://github.com/google/turbinia/blob/master/k8s/common/turbinia-autoscale-cpu.yaml) -file then applied through the following command. - -- `$ kubectl replace -f turbinia-autoscale-cpu.yaml` - -### Scaling Turbinia Nodes - -Currently, Turbinia does not currently support the autoscaling of nodes in GKE. -There is a default setting of 1 node to run in the GKE cluster. In order to -update the minimum number of nodes running, update the `CLUSTER_NODE_SIZE` value -in [.clusterconfig](https://github.com/google/turbinia/blob/master/k8s/tools/.clusterconfig) -with the desired number of nodes. - -## Helpful K8s Commands - -In addition to using the update-gke-infra.sh script to manage the cluster, the -kubectl CLI can come useful for running administrative commands against the -cluster, to which you can find some useful commands below. -A verbose cheatsheet can also be found [here](https://kubernetes.io/docs/reference/kubectl/cheatsheet/). 
- -- Authenticating to the cluster (run this before any other kubectl commands) - - - `$ gcloud container clusters get-credentials [cluster-name] --zone [zone] --project [project-name]` - -- Get cluster events - - - `$ kubectl get events` - -- Get Turbinia pods - - - `$ kubectl get pods` - -- Get all pods (includes monitoring pods) - - - `$ kubectl get pods -A` - -- Get all pods and associated nodes - - - `$ kubectl get pods -A -o wide` - -- Get verbose related pod deployment status - - - `$ kubectl describe pod [pod-name]` - -- Get all nodes - - - `$ kubectl get nodes` - -* Get logs from specific pod - - - `$ kubectl logs [pod-name]` - -- SSH into specific pod - - - `$ kubectl exec —-stdin —-tty [pod-name] —- bash` - -- Execute command into specific pod - - - `$ kubectl exec [pod-name] —- [command]` - -- Get Turbinia ConfigMap - - - `$ kubectl get configmap turbinia-config -o json | jq '.data.TURBINIA_CONF' | xargs | base64 -d` - -- Apply k8s yaml file - - - $ `kubectl apply -f [path-to-file]` - -- Replace a k8s yaml file (updates appropriate pods) - - - $ `kubectl replace -f [path-to-file]` - -- Delete a pod - - - $ `kubectl delete pod [pod-name]` - -- Force delete all pods - - - `$ kubectl delete pods —-all —-force —-grace-period=0` - -- Get horizontal scaling numbers (hpa) - - - `$ kubectl get hpa` - -- See how busy (cpu/mem) pods are - - - `$ kubectl top pods` - -- See how busy (cpu/mem) nodes are - - - `$ kubectl top nodes` - -## GKE Load Testing - -If you'd like to perform some performance testing, troubleshooting GKE related issues, -or would like to test out a new features capability within GKE, a load test script is -available for use within `k8s/tools/load-test.sh`. Prior to running, please ensure you -review the script and update any variables for your test. Most importantly, the load test -script does not currently support the creation of test GCP disks and would need to be created -prior to running the script. By default, the script will look for GCP disks with the naming -convention of ``, `i` being a range of `1` and `MAX_DISKS`. Once test data has -been created, you can run the script on any machine or pod that has the Turbinia client -installed and configured to the correct Turbinia GKE instance. Please run the following -command to execute the load test, passing in a path to store the load test results. - -``` -./k8s/tools/load-test.sh /OUTPUT/LOADTEST/RESULTS -``` - -To check for any failed Tasks once the load test is complete. - -``` -turbinia@turbinia-controller-6bfcc5db99-sdpvg:/$ grep "Failed" -A 1 /mnt/turbiniavolume/loadtests/test-disk-25gb-* -/mnt/turbiniavolume/loadtests/test-disk-25gb-1.log:# Failed Tasks -/mnt/turbiniavolume/loadtests/test-disk-25gb-1.log-* None --- -/mnt/turbiniavolume/loadtests/test-disk-25gb-2.log:# Failed Tasks -/mnt/turbiniavolume/loadtests/test-disk-25gb-2.log-* None -``` - -To check for average run times of each request once the load test is complete. - -``` -turbinia@turbinia-controller-6bfcc5db99-sdpvg:/$ tail -n 3 /mnt/turbiniavolume/loadtests/test-disk-25gb-* -==> /mnt/turbiniavolume/loadtests/test-disk-25gb-1.log <== -real 12m7.661s -user 0m5.069s -sys 0m1.253s - -==> /mnt/turbiniavolume/loadtests/test-disk-25gb-2.log <== -real 12m7.489s -user 0m5.069s -sys 0m1.249s -``` - -To check for any issues with disks not properly mounting, within the Turbinia controller, -please trying running `losetup -a` to check attached loop devices, `lsof | grep ` -to check for any remaining file handles left on a loop device or disk. 
- -## GKE Metrics and Monitoring - -In order to monitor the Turbinia infrastructure within Kubernetes, -we are using the helm chart [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) -to deploy the Prometheus stack to the cluster. This simplifies the setup required -and automatically deploys Prometheus, Grafana, and Alert Manager to the cluster -through manifest files. - -The Turbinia Server and Workers are instrumented with Prometheus code and expose -application metrics. - -- Service manifest files were created for both the Turbinia [Server](https://github.com/google/turbinia/blob/master/k8s/common/turbinia-server-metrics-service.yaml) and [Worker](https://github.com/google/turbinia/blob/master/k8s/common/turbinia-worker-metrics-service.yaml). -- The files create two services named `turbinia-server-metrics` and `turbinia-worker-metrics` which expose port 9200 to - poll application metrics. -- The Prometheus service, which is listening on port 9090 scrapes these services - for metrics. -- Grafana pulls system and application metrics from Prometheus and displays - dashboards for both os and application metrics. Grafana is listening on port 3000. - -### Connecting to Prometheus instance - -In order to connect to the Prometheus instance, go to the cloud console and -connect to the cluster using cloud shell. Then run the following command to port -forward the Prometheus service. - -- `$ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090` - -Once port forwarding, on the top right of the cloud shell console next to -“Open Editor” there is an option for “Web Preview”. Click on that then change -the port to 9090. This should then connect you to the Prometheus instance. - -### Connecting to Grafana instance - -In order to connect to the Grafana instance, go to the cloud console and connect -to the cluster using cloud shell. Then run the following command to port forward -the Grafana service. - -- `$ kubectl --namespace monitoring port-forward svc/grafana 11111:3000` - -Once port forwarding, on the top right of the cloud shell console next to -“Open Editor” there is an option for “Web Preview”. Click on that then change -the port to 11111. This should then connect you to the Grafana instance. - -## Grafana and Prometheus config - -This section covers how to update and manage the Grafana and Prometheus instances -for adding new rules and updating the dashboard. - -### Importing a new dashboard into Grafana - -- Login to the Grafana instance -- Click the “+” sign on the left sidebar and then select “import”. -- Then copy/paste the json file from the dashboard you want to import and click “Load”. - -### Exporting a dashboard from Grafana - -- Login to Grafana -- Navigate to the dashboard you’d like to export -- From the dashboard, select the “dashboard Setting” on the upper right corner -- Click on “JSON Model” and copy the contents of the textbox. -- To import this to another dashboard, follow the steps outlined in importing a new dashboard. - -### Updating the Prometheus Config - -To update Prometheus with any additional configuration options, take the -following steps. - -- Clone the github repo [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) - locally. -- Once cloned, navigate to the [manifests/prometheus-prometheus.yaml](https://github.com/prometheus-operator/kube-prometheus/blob/main/manifests/prometheus-prometheus.yaml) file and make any necessary changes. 
-- Also ensure that the additional scrape config is added back into the bottom of the file as it’s required for Prometheus to query for Turbinia metrics. - - ``` - additionalScrapeConfigs: - name: additional-scrape-configs - key: prometheus-additional.yaml - ``` - -* Once done, replace the Prometheus config file by running - - `$ kubectl --namespace monitoring replace -f manifests/prometheus-prometheus.yaml` - - Note: The updates should automatically take place - -### Updating Prometheus Rules - -To update the Prometheus rules, take the following steps. - -- Create or update an existing rule file. Please see [here](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) for great tips on writing recording rules. -- Once your rule has been created, append the rule to the - turbinia-custom-rules.yaml file following a similar format as the other rules. - ``` - - name: [rule-name] - rules: - # Comment describing rule - - record: [record-value] - expr: [expr-value] - ``` -- Once added into the file, update the monitoring rules by running the following - - `$ kubectl --namespace monitoring replace -f turbinia-custom-rules.yaml` - -* Verify that the changes have taken place by navigating to the Prometheus - instance after a few minutes then going to Status -> Rules and searching for the - name of your newly created rule. diff --git a/docs/user/index.rst b/docs/user/index.rst index 7fd53d8cd..7ba3a715d 100644 --- a/docs/user/index.rst +++ b/docs/user/index.rst @@ -6,13 +6,7 @@ User documentation :maxdepth: 3 install - install-gke-celery - install-gke-pubsub - install-gke-monitoring - install-gke-external - install-gcp-pubsub turbinia-local-stack - gke-sre how-it-works api-server turbinia-web-ui diff --git a/docs/user/install-gcp-pubsub.md b/docs/user/install-gcp-pubsub.md deleted file mode 100644 index 98072025c..000000000 --- a/docs/user/install-gcp-pubsub.md +++ /dev/null @@ -1,157 +0,0 @@ -# **Turbinia GCP Installation Instructions** - -## Overview - -These instructions cover the PubSub installation of Turbinia using -[Google Cloud Platform](https://cloud.google.com). This uses -[terraform configs](https://github.com/forseti-security/osdfir-infrastructure) -that are part of the [Forseti Security repository](https://github.com/forseti-security) -to automate deployment of Turbinia into an existing GCP Project. - -## Installation - -The following steps can be performed on any Linux machine (Ubuntu 20.04 -recommended), and [Cloud Shell](https://cloud.google.com/shell/) is one easy way -to get a shell with access to your GCP resources. - -### GCP Project Setup - -- Create or select a Google Cloud Platform project in the - [Google Cloud Console](https://console.cloud.google.com). -- Determine which GCP zone and region that you wish to deploy Turbinia into. - Note that one of the GCP dependencies is Cloud Functions, and that only - works in certain regions, so you will need to deploy in one of - [the supported regions](https://cloud.google.com/functions/docs/locations). -- Install - [google-cloud-sdk](https://cloud.google.com/sdk/docs/quickstart-linux). - - Note: If you are doing this from cloud shell you shouldn't need this - step. -- Run `gcloud auth login` to authenticate. This may require you to copy/paste - url to browser. -- Run `gcloud auth application-default login` - -### Deploy Turbinia - -- Download the - [Terraform CLI from here](https://www.terraform.io/downloads.html). 
-- Clone the Forseti Security repository and change to the path containing the - configs - - `git clone https://github.com/forseti-security/osdfir-infrastructure/` - - `cd osdfir-infrastructure` -- Configuration - - - By default this will create one Turbinia server instance and one worker - instance. If you want to change the number of workers, edit the - `modules/turbinia/variables.tf` file and set the `turbinia_worker_count` - variable to the number of workers you want to deploy. - - To adjust the GCP zone and region you want to run Turbinia in, edit the - `modules/turbinia/variables.tf` file and change the `gcp_zone` and - `gcp_region` variables as appropriate to reflect your GCP project's - zone and region. - - If you want to use Docker to run Turbinia tasks, please follow the - instructions [here](using-docker.md) to enable Docker. - - Running the following commands will leave some state information under - the current directory, so if you wish to continue to manage the number - of workers via Terraform you should keep this directory for later use. - Alternatively, if you wish to store this information in GCS instead, you - can edit `main.tf` and change the `bucket` parameter to the GCS bucket - you wish to keep this state information in. See the - [Terraform documentation](https://www.terraform.io/docs/commands/index.html) - for more information. - - The current configuration does not enable alert notifications by default. - Please see [here](#grafana-smtp-setup) for instructions. - - If you are running multiple workers on a given host and within containers, ensure - that you are mapping the host `OUTPUT_DIR` path specified in the configuration file - `.turbiniarc` to the containers so that they can properly update the `RESOURCE_STATE_FILE`. - -- Initialize terraform and apply the configuration - - `./deploy.sh --no-timesketch` - - If the `--no-timesketch` parameter is not supplied, Terraform will also - create a [Timesketch](http://timesketch.org) instance in the same - project, and this can be configured to ingest Turbinia timeline - output and report data. See the - [Documentation on this](https://github.com/forseti-security/osdfir-infrastructure) - for more details. - - When prompted for the project name, enter the project you selected - during setup. - -This should result in the appropriate cloud services being enabled and -configured and GCE instances for the server and the worker(s) being started and -configured. The Turbinia configuration file will be deployed on these instances -as `etc/turbinia/turbinia.conf`. If you later want to increase the number of -workers, you can edit the `turbinia/variables.tf` file mentioned above and -re-run `terraform apply` -To use Turbinia you can use the virtual environment that was setup by -the `deploy.sh` script.To activate the virtual environment, run the following -command `source ~/turbinia/bin/activate` and then use `turbiniactl`. For more -information on how to use Turbinia please visit [the user manual](https://github.com/google/turbinia). - -### Client configuration (optional) - -If you want to use the command line tool, you can SSH into the server and run -`turbiniactl` from there. The `turbiniactl` command can be used to submit -Evidence for processing or see the status of existing and previous processing -requests. If you'd prefer to use turbiniactl on a different machine, follow the -following instructions to configure the client. 
The instructions are based on -using Ubuntu 20.04, though other versions of Linux should be compatible. - -- Follow the steps from GCP Project setup above to install the SDK and - authenticate with gcloud. -- Install some python tooling: - - `apt-get install python3-pip python3-wheel` -- Install the Turbinia client. - - Note: You may want to install this into a virtual-environment with - [venv](https://docs.python.org/3.7/library/venv.html) or - [pipenv](https://pipenv.pypa.io/en/latest/)) to reduce potential - dependency conflicts and isolate these packages into their own - environment. - - `pip3 --user install turbinia` -- If running on the same machine you deployed Turbinia from, you can generate - the config with terraform - - `terraform output turbinia-config > ~/.turbiniarc` -- Otherwise, if you are running from a different machine you'll need to copy - the Turbinia config from the original machine, or from the Turbinia server - from `/etc/turbinia/turbinia.conf`. - -### Grafana SMTP Setup - -If you want to receive alert notifications from Grafana, you'll need to setup a SMTP server for Grafana. To configure a SMTP server, you need to add the following environment variables to `Grafana` `env` section in `osdfir-infrastructure/modules/monitoring/main.tf` - -``` - { - name = "GF_SMTP_ENABLED" - value = "true" - }, { - name = "GF_SMTP_HOST" - value = "smtp.gmail.com:465" # Replace this if you're not using gmail - }, { - name = "GF_SMTP_USER" - value = "" - }, { - name = "GF_SMTP_PASSWORD" - value = "" - }, { - name = "GF_SMTP_SKIP_VERIFY" - value = "true" - }, { - name = "GF_SMTP_FROM_ADDRESS" - value = "" - } - -``` - ---- - -> **NOTE** - -> By default Gmail does not allow [less secure apps](https://support.google.com/accounts/answer/6010255) to authenticate and send emails. For that reason, you'll need to allow less secure apps to access the provided Gmail account. - ---- - -Once completed: - -- login to the Grafana Dashboard. -- Select Alerting and choose "Notification channels". -- Fill the required fields and add the email addresses that will receive notification. -- Click "Test" to test your SMTP setup. -- Once everything is working, click "Save" to save the notification channel. diff --git a/docs/user/install-gke-celery.md b/docs/user/install-gke-celery.md deleted file mode 100644 index 3b48a9d50..000000000 --- a/docs/user/install-gke-celery.md +++ /dev/null @@ -1,107 +0,0 @@ -# Turbinia GKE Celery Installation Instructions - -## Introduction - -In this guide, you will learn how to deploy the Redis implementation of Turbinia using [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine). - -GKE allows Turbinia workers to scale based on processing demand. Currently by scaling based on CPU utilization of Turbinia workers. The GKE architecture closely resembles the [cloud architecture](how-it-works.md). - -At the end of this guide, you will have a newly provisioned GKE cluster, a GCP Filestore instance to store logs -centrally to, a Turbinia GCP service account for metric collection and attaching GCP Disks, and lastly Turbinia -locally running within the cluster. - -### Prerequisites - -- A Google Cloud Account and a project to work from -- The ability to create GCP resources and service accounts -- `gcloud` and `kubectl` locally installed - -## Deployment - -This section covers the steps for deploying a Turbinia GKE environment. 
- -### Deploying the Turbinia cluster - -- Create or select a Google Cloud Platform project in the - [Google Cloud Console](https://console.cloud.google.com). -- Determine which GCP zone and region that you wish to deploy Turbinia into. -- Review the `.clusterconfig` config file located in `k8s/tools` and please update any of the default values if necessary based on cluster requirements. -- Deploy through the following command: - - `./k8s/tools/deploy-celery-gke.sh` - - **Note this script will create a GKE cluster and GCP resources then deploy Turbinia to the cluster** -- Congrats, you have successfully deployed Turbinia into GKE! In order to make requests into Turbinia at this stage see Making requests locally section below or if you'd like to set up external access to Turbinia via a URL see [install-gke-external](install-gke-external.md). - -### Destroying the Turbinia cluster - -- Run the following command if you'd like to destroy the Turbinia GKE environment: - - `./k8s/tools/destroy-celery-gke.sh` - - **Note this will delete the Turbinia cluster including all processed output and log files as well as associated GCP resources** - -### Networks listed - -The following ports will be exposed as part of deployment: - -- 9200 - To collect Prometheus metrics from the Turbinia endpoints. -- 8000 - the Turbinia API Service and Web UI. -- 8080 - the Oauth2 Proxy Service. - -## Making requests local to the cluster - -If you have not set up external access to Turbinia, you can make a request through the following steps. - -- Connect to the cluster: - -``` -gcloud container clusters get-credentials --zone --project -``` - -- Forward the Turbinia service port locally to your machine: - -``` -kubectl port-forward service/turbinia-api-service 8000:8000 -``` - -- Install the Turbinia client locally on your machine or in a cloud shell console: - -``` -pip3 install turbinia-api-lib -``` - -- Create a processing request via: - -``` -turbinia-client submit GoogleCloudDisk --project --disk_name --zone -``` - -- You can access the Turbinia Web UI via: - -``` -http://localhost:8000 -``` - -## Making requests within a pod in the cluster - -You may also make requests directly from a pod running within the cluster through -the following steps. - -- Connect to the cluster: - -``` -gcloud container clusters get-credentials --zone --project -``` - -- Get a list of running pods: - -``` -kubectl get pods -``` - -- Identify the pod named `turbinia-server-*` or `turbinia-controller-*` and exec into it via: - -``` -kubectl exec --stdin --tty [CONTAINER-NAME] -- bash -``` - -## Monitoring Installation - -Turbinia GKE has the capability to be monitored through Prometheus and Grafana. Please follow the steps outlined under the Monitoring Installation section [here](install-gke-monitoring.md). diff --git a/docs/user/install-gke-external.md b/docs/user/install-gke-external.md deleted file mode 100644 index 5b3d19d7b..000000000 --- a/docs/user/install-gke-external.md +++ /dev/null @@ -1,252 +0,0 @@ -# Turbinia External Access and Authentication Instructions - -## Introduction - -In this guide you will learn how to externally expose the Turbinia API Server and -Web UI. This guide is recommended for users who have already [deployed Turbinia to -a GKE cluster](install-gke-external.md), but would like to access the API Server and -Web UI through an externally available URL instead of port forwarding the -Turbinia service from the cluster. 
- -### Prerequisites - -- A Google Cloud Account and a GKE cluster with Turbinia deployed -- The ability to create GCP resources -- `gcloud` and `kubectl` locally installed on your machine - -## Deployment - -Please follow the steps below for configuring Turbinia to be externally accessible. - -### 1. Create a static external IP address - -- Create a global static IP address as follows: - -``` -gcloud compute addresses create turbinia-webapps --global -``` - -- You should see the new IP address listed: - -``` -gcloud compute addresses list -``` - -Please see [Configuring an ipv6 address](#configuring-an-ipv6-address) if you -need an ipv6 address instead. - -### 2. Set up domain and DNS - -You will need a domain to host Turbinia on. You can either register a new domain in a registrar -of your choice or use a pre-existing one. - -#### Registration through GCP - -To do so through GCP, search for a domain that you want to register: - -``` -gcloud domains registrations search-domains SEARCH_TERMS -``` - -If the domain is available, register the domain: - -``` -gcloud domains registrations register -``` - -#### Registration through Google Domains - -If you would like to register a domain and update its DNS record through Google -Domains instead, follow the instructions provided [here](https://cert-manager.io/docs/tutorials/getting-started-with-cert-manager-on-google-kubernetes-engine-using-lets-encrypt-for-ingress-ssl/#4-create-a-domain-name-for-your-website). - -#### External Registrar and DNS provider - -You will need to create a DNS `A` record pointing to the external IP address created -above, either through the external provider you registered the domain from -or through GCP as shown below. - -First create the managed DNS zone, replacing the `--dns-name` flag with the domain you registered: - -``` - gcloud dns managed-zones create turbinia-dns --dns-name --description "Turbinia managed DNS" -``` - -Then add the DNS `A` record pointing to the external IP address: - -``` -gcloud dns record-sets create --zone="turbinia-dns" --type="A" --ttl="300" --rrdatas="EXTERNAL_IP" -``` - -DNS can instead be managed through [ExternalDNS](https://github.com/kubernetes-sigs/external-dns), however setup is outside the scope of this guide. - -### 3. Create Oauth2 Application IDs - -Authentication is handled by a proxy utility named [Oauth2 Proxy](https://oauth2-proxy.github.io/oauth2-proxy/). This guide will walk through configuring the Oauth2 Proxy with Google Oauth, -however there are alternative [providers](https://oauth2-proxy.github.io/oauth2-proxy/docs/configuration/oauth_provider) that you may configure instead. - -Two sets of Oauth credentials will be configured as part of this deployment. -One that will be for the Web client and one for the API/Desktop client. - -To create the Web Oauth credentials, take the following steps: - -1. Go to the [Credentials page](https://console.developers.google.com/apis/credentials). -2. Click Create credentials > OAuth client ID. -3. Select the `Web application` application type. -4. Fill in an appropriate Application name. -5. Fill in Authorized JavaScript origins with your domain as `https://` -6. Fill in Authorized redirect URIs with `https:///oauth2/callback` -7. Please make a note of the generated `Client ID` and `Client Secret` for later use. - -To create the API/Desktop Oauth credentials, take the following steps: - -1. Go to the [Credentials page](https://console.developers.google.com/apis/credentials). -2. Click Create credentials > OAuth client ID. -3. 
Select the `Desktop or Native application` application type. -4. Fill in an appropriate application name. -5. Please make a note of the generated `Client ID` and `Client Secret` for later use. - -You will then need to generate a cookie secret for later use: - -``` -python3 -c 'import os,base64; print(base64.urlsafe_b64encode(os.urandom(32)).decode())' -``` - -With the Turbinia repository cloned to your local machine, cd into the directory we'll be working from: - -``` -wyassine@wyassine:~/turbinia$ cd k8s/tools/ -``` - -Then make a copy of the `oauth2_proxy.cfg` template: - -``` -wyassine@wyassine:~/turbinia/k8s/tools$ cp ../../docker/oauth2_proxy/oauth2_proxy.cfg . -``` - -Edit the `oauth2_proxy.cfg` file and replace the following: - -- `CLIENT_ID`: The web client id -- `CLIENT_SECRET`: The web client secret -- `OIDC_EXTRA_AUDIENCES`: The native client id -- `UPSTREAMS`: The domain name registered, ex: `upstreams = ['https://]` -- `REDIRECT_URL`: The redirect URI you registered ex: `https:///oauth2/callback` -- `COOKIE_SECRET`: The cookie secret you generated above -- `EMAIl_DOMAINS`: The email domain name you'd allow to authenticate ex: `yourcompany.com` - -Now base64 encode the config file: - -``` -base64 -w0 oauth2_native.cfg > oauth2_native.b64 -``` - -Then to deploy the config to the cluster - -``` -kubectl create configmap oauth2-config --from-file=OAUTH2_CONF=oauth2_native.b64 -``` - -Create a file named `auth.txt` in your working directory and append a list of emails -you'd like to allow access to the Turbinia app, one email per line. Once complete base64 encode: - -``` -base64 -w0 auth.txt > auth.b64 -``` - -Then deploy the config to the cluster: - -``` -kubectl create configmap auth-config --from-file=OAUTH2_AUTH_EMAILS=auth.b64 -``` - -Lastly, deploy the Oauth2 Proxy to the cluster: - -``` -kubectl create -f ../celery/turbinia-oauth2-proxy.yaml -``` - -### 4. Deploy the Load Balancer and Managed SSL - -In the final step, edit `turbinia-ingress.yaml` located in the `k8s/celery` directory -and replace the two placeholders `` with the domain you configured. Save -the file then deploy it to the cluster: - -``` -kubectl create -f ../celery/turbinia-ingress.yaml -``` - -Within 10 minutes all the load balancer components should be ready and you should -be able to externally connect to the domain name you configured. Additionally, you can check on the status of the load balancer via: - -``` -kubectl describe ingress turbinia-ingress -``` - -Congrats, you have now successfully configured Turbinia to be externally accessible! - -## Making Turbinia processing requests - -Once Turbinia is externally accessible, download the Oauth Desktop credentials -created above to your machine and install the command-line Turbinia client: - -``` -pip3 install turbinia-client -``` - -or Python client library: - -``` -pip3 install turbinia-api-lib -``` - -- To create a processing request for evidence run the following: - -``` -turbinia-client submit googleclouddisk --project --disk_name --zone -``` - -- To access the Turbinia Web UI, point your browser to: - -``` -https:// -``` - -## Additional networking topics - -### Configuring an ipv6 address - -Please follow these steps if your environment requires an ipv6 address to be -configured instead. 
- -- Create a ipv6 global static IP address as follows: - -``` -gcloud compute addresses create turbinia-webapps --global --ip-version ipv6 -``` - -- Then add the DNS `AAAA` record pointing to the ipv6 address as follows: - -``` -gcloud dns record-sets create --zone="turbinia-dns" --type="AAAA" --ttl="300" --rrdatas="IPV6_ADDRESS" -``` - -- In the final step, edit `turbinia-ingress.yaml` located in the `k8s/celery` directory - and replace the two placeholders `` with the domain you configured. Save - the file then deploy it to the cluster: - -``` -kubectl create -f ../celery/turbinia-ingress.yaml -``` - -### Egress Connectivity for Nodes - -By default, the deployment script will bootstrap a private GKE cluster. This prevents -nodes from having an external IP address to send and receive external traffic from and -traffic will only be allowed through the deployed load balancer. - -In cases where nodes require external network connectivity or egress to retrieve external -helm and software packages, you'll need to create a [GCP NAT router](https://cloud.google.com/nat/docs/gke-example#create-nat). This allows traffic to be routed externally from the cluster -nodes to the NAT router and then externally while denying inbound traffic, allowing the cluster -nodes to stay private. - -One use case where this may come up is if you choose to deploy ExternalDNS or Certmanager -to the cluster instead of the GCP equivalent for DNS and certificate management. diff --git a/docs/user/install-gke-monitoring.md b/docs/user/install-gke-monitoring.md deleted file mode 100644 index f77af7521..000000000 --- a/docs/user/install-gke-monitoring.md +++ /dev/null @@ -1,119 +0,0 @@ -## **Monitoring Installation** - -Turbinia GKE has the capability to be monitored through Prometheus and Grafana. Please follow these steps for configuring Turbinia for monitoring and ensure that the `.turbiniarc` config file has been updated appropriately. - -### Application Metrics - -In order to receive Turbinia application metrics, you'll need to adjust the following variables in the `.turbinarc` config file. - -``` -PROMETHEUS_ENABLED = True -PROMETHEUS_ADDR = '0.0.0.0' -PROMETHEUS_PORT = 9200 -``` - -Please ensure `PROMETHEUS_ENABLED` is set to `True` and that the `PROMETHEUS_PORT` matches the `prometheus.io/port` section in the `turbinia-worker.yaml` and `turbinia-server.yaml` as well as matching ports in the `turbinia-server-metrics-service.yaml` and `turbinia-worker-metrics-service.yaml` GKE deployment files. - -### Deploying Prometheus - -In this deployment method, we are using [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) to deploy the Prometheus stack to the cluster. This simplifies the setup required and automatically deploys Prometheus, Grafana, and Alert Manager to the cluster through manifest files. Before proceeding with the setup, please ensure you are connected to the cluster with Turbinia deployed and can run commands via `kubectl`, then proceed to the following steps to configure Prometheus with Turbinia. - -- Clone the github repo [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) locally. Please ensure that the branch cloned is compatible with your Kubernetes cluster version else you may run into issues. Please see the [Compatibility Matrix](https://github.com/prometheus-operator/kube-prometheus) section of the repo for more details. 
-- Once cloned, run the following commands to deploy the stack - - `kubectl create -f manifests/setup` - - `kubectl create -f manifests/` -- Create a secret from the file `prometheus-additional.yaml` located in the Turbinia folder. - - `kubectl create secret generic additional-scrape-configs --from-file=monitoring/k8s/prometheus/prometheus-additional.yaml --dry-run=client -oyaml > additional-scrape-configs.yaml` -- You will then need to update the `prometheus-prometheus.yaml` file located in the `kube-prometheus/manifests` folder with this extra scrape config - ``` - additionalScrapeConfigs: - name: additional-scrape-configs - key: prometheus-additional.yaml - ``` -- Once complete apply the changes made through the following commands - - `kubectl -n monitoring apply -f additional-scrape-configs.yaml` - - `kubectl -n monitoring apply -f manifests/prometheus-prometheus.yaml` -- To import Turbinia custom rules, run the `gen-yaml.sh` script from the same directory its located - - `cd monitoring/k8s && ./gen-yaml.sh` -- Then apply the `turbinia-custom-rules.yaml` file - - `kubectl -n monitoring apply -f monitoring/k8s/prometheus/turbinia-custom-rules.yaml` - -### Testing Prometheus Deployment - -- Test that the changes were properly made by connecting to the Prometheus console and searching for `turbinia`. If related metrics pop up in the search bar, then Turbinia metrics are properly being ingested by Prometheus. You can also check to see if the Turbinia custom rules have been applied by navigating to Status -> Rules then searching for one of the custom rule names. To connect to the Prometheus console, run the following command - - - `kubectl -n monitoring port-forward svc/prometheus-k8s 9090` - -- To delete the monitoring stack, cd into the `kube-prometheus` directory and run the following command. - - `kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup` - -### Deploying Grafana - -Before proceeding to the Grafana setup, please ensure that you have followed all the steps outlined in the **Testing Prometheus Deployment** section. - -- Clone the github repo [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) locally. -- You will then need to update `manifests/grafana-deployment.yaml` file, first by updating the `volumeMounts` section with the following `mountPaths` - ``` - - mountPath: /grafana-dashboard-definitions/0/turbinia-healthcheck-metrics - name: turbinia-healthcheck-metrics - readOnly: false - - mountPath: /grafana-dashboard-definitions/0/turbinia-application-metrics - name: turbinia-application-metrics - readOnly: false - ``` -- Then by updating the `volumes` section with the following `configMaps` - ``` - - configMap: - name: turbinia-application-metrics - name: turbinia-application-metrics - - configMap: - name: turbinia-healthcheck-metrics - name: turbinia-healthcheck-metrics - ``` -- Once complete, apply the changes through - - `kubectl -n monitoring apply -f manifests/grafana-deployment.yaml` -- To get the Turbinia Application & Healthcheck dashboard to show, first run the `gen.yaml.sh` if haven't done so already in the setting up Prometheus section. - - `cd monitoring/k8s && ./gen-yaml.sh` -- Then apply the dashboards to the monitoring namespace. 
- - `kubectl -n monitoring apply -f monitoring/k8s/grafana` -- To connect to the Grafana dashboard, run the following command - - `kubectl -n monitoring port-forward svc/grafana 11111:3000` - -### Email Notifications - -If you want to receive alert notifications from Grafana, you'll need to setup a SMTP server for Grafana. To configure a SMTP server, you need to add the following environment variables to the `env` section of the `manifests/grafana-deployment.yaml` file. - -``` -- name: GF_SMTP_ENABLED - value: "true" -- name: GF_SMTP_HOST - value: "smtp.gmail.com:465" #Replace this if you're not using gmail -- name: GF_SMTP_USER - value: "" -- name: GF_SMTP_PASSWORD - value: "" -- name: GF_SMTP_SKIP_VERIFY - value: "true" -- name: GF_SMTP_FROM_ADDRESS - value: "" -``` - -Then apply the changes through the following command - -- `kubectl -n monitoring apply -f manifests/grafana-deployment.yaml` - ---- - -> **NOTE** - -> By default Gmail does not allow [less secure apps](https://support.google.com/accounts/answer/6010255) to authenticate and send emails. For that reason, you'll need to allow less secure apps to access the provided Gmail account. - ---- - -Once completed: - -- login to the Grafana Dashboard. -- Select Alerting and choose "Notification channels". -- Fill the required fields and add the email addresses that will receive notification. -- Click "Test" to test your SMTP setup. -- Once everything is working, click "Save" to save the notification channel. diff --git a/docs/user/install-gke-pubsub.md b/docs/user/install-gke-pubsub.md deleted file mode 100644 index db871b599..000000000 --- a/docs/user/install-gke-pubsub.md +++ /dev/null @@ -1,47 +0,0 @@ -# Turbinia GKE Quick Installation Instructions - -## **Introduction** - -These instructions cover the PubSub installation Turbinia using [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine). - -Installing into GKE allows Turbinia Workers to scale based on processing demand. Currently, this is done through scaling on CPU utilization, which is determined when available Turbinia Workers process Tasks and reach a pre-defined CPU threshold. The GKE architecture closely resembles the [cloud architecture](how-it-works.md) with GKE being used to scale Turbinia Woker pods. - -All steps in this document are required for getting Turbinia running on GKE. - -### **Prerequisites** - -GKE is only supported for Google Cloud so a Google Cloud Project is required to work from. - -## **Installation** - -Please follow these steps for deploying Turbinia to GKE. Ensure that the `.clusterconfig` config file has been updated appropriately. - -### **Turbinia GKE Deployment** - -**Follow these steps to deploy Turbinia to GKE.** - -- Create or select a Google Cloud Platform project in the - [Google Cloud Console](https://console.cloud.google.com). -- Determine which GCP zone and region that you wish to deploy Turbinia into. - Note that one of the GCP dependencies is Cloud Functions, and that only - works in certain regions, so you will need to deploy in one of - [the supported regions](https://cloud.google.com/functions/docs/locations). -- Review the `.clusterconfig` config file and please update any of the default values if necessary based on requirements. -- Deploy Turbinia through the following command - - `./k8s/tools/deploy-pubsub-gke.sh` -- The deployment script will automatically enable GCP APIs, create the cluster and GCP resources then deploy Turbinia to the cluster. 
At the end of the run, you should have a fully functioning Turbinia environment within GKE to use. -- Run the following command if you'd like to cleanup the newly created Turbinia environment - - `./k8s/tools/destroy-pubsub-gke.sh` - - **Note this will delete the Turbinia cluster including all processed output and log files as well as associated GCP resources** - -### **Making processing requests in GKE** - -- You can either make requests via setting up a local `turbiniactl` client or through connecting to the server through the following steps. -- Connect to cluster through `gcloud container clusters get-credentials --zone --project `. -- Use `kubectl get pods` to get a list of running pods. -- Identify the pod named `turbinia-server-*` and exec into it via `kubectl exec --stdin --tty [CONTAINER-NAME] -- bash` -- Use `turbiniactl` to kick off a request to process evidence. - -## **Monitoring Installation** - -Turbinia GKE has the capability to be monitored through Prometheus and Grafana. Please follow the steps outlined under the Monitoring Installation section [here](install-gke-monitoring.md). diff --git a/docs/user/install.md b/docs/user/install.md index e1e4f3177..631112c00 100644 --- a/docs/user/install.md +++ b/docs/user/install.md @@ -1,6 +1,6 @@ **Note**: **_This installation method will be deprecated by the end of 2022. The current recommended method for installing Turbinia is -[here](install-gke-pubsub.md)._** +[here](https://github.com/google/osdfir-infrastructure)._** # **Turbinia Quick Installation Instructions** diff --git a/k8s/celery/destroy-celery.sh b/k8s/celery/destroy-celery.sh deleted file mode 100755 index e26540a48..000000000 --- a/k8s/celery/destroy-celery.sh +++ /dev/null @@ -1,21 +0,0 @@ -#!/bin/sh -# Turbinia GKE Celery/Redis destroy script. -# This script can be used to destroy the Turbinia Celery/Redis deployment in GKE. -# Please use the destroy-celery-gke.sh script if you'd like to also delete -# the cluster and other GCP resources created as part of the deployment. -# Requirements: -# - have 'gcloud' and 'kubectl' installed. 
-# - authenticate against your GKE cluster with "gcloud container clusters get-credentials" - -kubectl delete configmap turbinia-config -kubectl delete -f redis-server.yaml -kubectl delete -f redis-service.yaml -kubectl delete -f turbinia-autoscale-cpu.yaml -kubectl delete -f turbinia-server-metrics-service.yaml -kubectl delete -f turbinia-worker-metrics-service.yaml -kubectl delete -f turbinia-worker.yaml -kubectl delete -f turbinia-server.yaml -kubectl delete -f turbinia-api-service.yaml -kubectl delete -f turbinia-api-server.yaml -kubectl delete -f turbinia-volume-claim-filestore.yaml -kubectl delete -f turbinia-volume-filestore.yaml \ No newline at end of file diff --git a/k8s/celery/redis-server.yaml b/k8s/celery/redis-server.yaml deleted file mode 100644 index 48c3a09fa..000000000 --- a/k8s/celery/redis-server.yaml +++ /dev/null @@ -1,53 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: redis-server - labels: - app: redis-server -spec: - replicas: 1 - selector: - matchLabels: - app: redis-server - template: - metadata: - labels: - app: redis-server - spec: - automountServiceAccountToken: false - securityContext: - seccompProfile: - type: RuntimeDefault - initContainers: - - name: init-filestore - image: busybox:1.28 - command: ["sh", "-c", "echo never > /host-sys/kernel/mm/transparent_hugepage/enabled"] - volumeMounts: - - mountPath: /mnt/turbiniavolume - name: turbiniavolume - - mountPath: /host-sys - name: host-sys - containers: - - name: redis - image: "docker.io/redis:latest" - args: ["--appendonly", "yes", "--save", "30", "1", "--client-output-buffer-limit", "pubsub", "268435456", "67108864", "0"] - workingDir: /mnt/turbiniavolume/redis - resources: - requests: - cpu: 2000m - memory: 2000Mi - ports: - - containerPort: 6379 - volumeMounts: - - mountPath: /mnt/turbiniavolume - name: turbiniavolume - securityContext: - readOnlyRootFilesystem: true - volumes: - - name: turbiniavolume - persistentVolumeClaim: - claimName: turbiniavolume-claim - readOnly: false - - name: host-sys - hostPath: - path: /sys \ No newline at end of file diff --git a/k8s/celery/redis-service.yaml b/k8s/celery/redis-service.yaml deleted file mode 100644 index c5e46b3ef..000000000 --- a/k8s/celery/redis-service.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: redis - labels: - app: redis-server -spec: - ports: - - port: 6379 - targetPort: 6379 - selector: - app: redis-server \ No newline at end of file diff --git a/k8s/celery/setup-celery.sh b/k8s/celery/setup-celery.sh deleted file mode 100755 index c4c7a3cfa..000000000 --- a/k8s/celery/setup-celery.sh +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/sh -# Turbinia GKE Celery/Redis deployment script. -# This script can be used to deploy Turbinia configured with Celery/Redis to GKE. -# Please use the deploy-celery-gke.sh script if you'd also like to create -# the GKE cluster and associated GCP resources required by Turbinia. -# Requirements: -# - have 'gcloud' and 'kubectl' installed. -# - authenticate against your GKE cluster with "gcloud container clusters get-credentials" - -TURBINIA_CONF=$1 -if [ -z $1 ]; then - echo "No config found as parameter, please specify a Turbinia config file." 
- exit 0 -fi - -base64 -w0 $TURBINIA_CONF > turbinia-config.b64 -kubectl create configmap turbinia-config --from-file=TURBINIA_CONF=turbinia-config.b64 -kubectl create -f turbinia-volume-filestore.yaml -kubectl create -f turbinia-volume-claim-filestore.yaml -kubectl create -f redis-server.yaml -kubectl create -f redis-service.yaml -kubectl rollout status -w deployment/redis-server -kubectl create -f turbinia-server.yaml -kubectl create -f turbinia-worker.yaml -kubectl create -f turbinia-api-server.yaml -kubectl create -f turbinia-api-service.yaml -kubectl create -f turbinia-server-metrics-service.yaml -kubectl create -f turbinia-worker-metrics-service.yaml -kubectl create -f turbinia-autoscale-cpu.yaml - -echo "Turbinia deployment complete" diff --git a/k8s/celery/turbinia-api-server.yaml b/k8s/celery/turbinia-api-server.yaml deleted file mode 100644 index be088d5d6..000000000 --- a/k8s/celery/turbinia-api-server.yaml +++ /dev/null @@ -1,72 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: turbinia-api-server - labels: - app: turbinia-api-server -spec: - replicas: 1 - selector: - matchLabels: - app: turbinia-api-server - template: - metadata: - annotations: - prometheus.io/port: "9200" - prometheus.io/scrape: "true" - labels: - app: turbinia-api-server - spec: - serviceAccountName: turbinia - automountServiceAccountToken: false - securityContext: - runAsNonRoot: true - seccompProfile: - type: RuntimeDefault - runAsUser: 999 - fsGroup: 999 - fsGroupChangePolicy: "OnRootMismatch" - containers: - - name: api - image: us-docker.pkg.dev/osdfir-registry/turbinia/release/turbinia-api-server:latest - env: - - name: TURBINIA_CONF - valueFrom: - configMapKeyRef: - name: turbinia-config - key: TURBINIA_CONF - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - volumeMounts: - - mountPath: /mnt/turbiniavolume - name: turbiniavolume - - mountPath: /etc/turbinia - name: conf - - mountPath: /var/log - name: logs - ports: - - containerPort: 9200 - - containerPort: 8000 - resources: - requests: - memory: "4096Mi" - cpu: "2000m" - limits: - memory: "16384Mi" - cpu: "4000m" - securityContext: - readOnlyRootFilesystem: true - runAsNonRoot: true - allowPrivilegeEscalation: false - runAsUser: 999 - volumes: - - name: turbiniavolume - persistentVolumeClaim: - claimName: turbiniavolume-claim - readOnly: false - - name: conf - emptyDir: {} - - name: logs - emptyDir: {} \ No newline at end of file diff --git a/k8s/celery/turbinia-api-service.yaml b/k8s/celery/turbinia-api-service.yaml deleted file mode 100644 index 4ae6f7ac4..000000000 --- a/k8s/celery/turbinia-api-service.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: turbinia-api-service - labels: - app: turbinia-api-service -spec: - ports: - - name: svc - port: 8000 - targetPort: 8000 - - name: metrics - port: 9200 - targetPort: 9200 - selector: - app: turbinia-api-server \ No newline at end of file diff --git a/k8s/celery/turbinia-iap.yaml b/k8s/celery/turbinia-iap.yaml deleted file mode 100644 index f019f4fda..000000000 --- a/k8s/celery/turbinia-iap.yaml +++ /dev/null @@ -1,9 +0,0 @@ -apiVersion: cloud.google.com/v1 -kind: BackendConfig -metadata: - name: turbinia-iap -spec: - iap: - enabled: true - oauthclientCredentials: - secretName: oauth-secret \ No newline at end of file diff --git a/k8s/celery/turbinia-ingress.yaml b/k8s/celery/turbinia-ingress.yaml deleted file mode 100644 index 895a2aafb..000000000 --- a/k8s/celery/turbinia-ingress.yaml +++ /dev/null @@ -1,57 +0,0 @@ 
-apiVersion: cloud.google.com/v1 -kind: BackendConfig -metadata: - name: turbinia-neg-healthcheck -spec: - timeoutSec: 300 - healthCheck: - checkIntervalSec: 5 - timeoutSec: 5 - healthyThreshold: 2 - unhealthyThreshold: 2 - type: HTTP - requestPath: /ping - port: 8080 ---- -apiVersion: networking.gke.io/v1beta1 -kind: FrontendConfig -metadata: - name: turbinia-loadbalancer-frontend-config -spec: - redirectToHttps: - enabled: true ---- -apiVersion: networking.gke.io/v1 -kind: ManagedCertificate -metadata: - name: turbinia-loadbalancer-managed-ssl -spec: - domains: - - ---- -apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: turbinia-ingress - annotations: - kubernetes.io/ingress.global-static-ip-name: "turbinia-webapps" - networking.gke.io/managed-certificates: turbinia-loadbalancer-managed-ssl - networking.gke.io/v1beta1.FrontendConfig: turbinia-loadbalancer-frontend-config - kubernetes.io/ingress.class: "gce" -spec: - rules: - - host: - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: turbinia-oauth2-service - port: - number: 8080 - defaultBackend: - service: - name: turbinia-oauth2-service # Name of the Service targeted by the Ingress - port: - number: 8080 # Should match the port used by the Service \ No newline at end of file diff --git a/k8s/celery/turbinia-oauth2-proxy.yaml b/k8s/celery/turbinia-oauth2-proxy.yaml deleted file mode 100644 index 8237f7256..000000000 --- a/k8s/celery/turbinia-oauth2-proxy.yaml +++ /dev/null @@ -1,78 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: turbinia-oauth2-proxy - labels: - app: turbinia-oauth2-proxy -spec: - replicas: 1 - selector: - matchLabels: - app: turbinia-oauth2-proxy - template: - metadata: - labels: - app: turbinia-oauth2-proxy - spec: - automountServiceAccountToken: false - securityContext: - runAsNonRoot: true - runAsUser: 999 - seccompProfile: - type: RuntimeDefault - containers: - - name: oauth2-proxy - image: us-docker.pkg.dev/osdfir-registry/turbinia/release/turbinia-oauth2:latest - env: - - name: OAUTH2_CONF - valueFrom: - configMapKeyRef: - name: oauth2-config - key: OAUTH2_CONF - - name: OAUTH2_AUTH_EMAILS - valueFrom: - configMapKeyRef: - name: auth-config - key: OAUTH2_AUTH_EMAILS - ports: - - containerPort: 8080 - - containerPort: 9200 - resources: - requests: - memory: "256Mi" - cpu: "500m" - limits: - memory: "8192Mi" - cpu: "4000m" - volumeMounts: - - name: tmp - mountPath: /tmp - - name: conf - mountPath: /etc/turbinia - securityContext: - readOnlyRootFilesystem: true - runAsNonRoot: true - allowPrivilegeEscalation: false - runAsUser: 999 - volumes: - - name: tmp - emptyDir: {} - - name: conf - emptyDir: {} ---- -apiVersion: v1 -kind: Service -metadata: - name: turbinia-oauth2-service - annotations: - cloud.google.com/neg: '{"ingress": true}' - cloud.google.com/backend-config: '{"ports": {"8080":"turbinia-neg-healthcheck"}}' -spec: - type: ClusterIP - selector: - app: turbinia-oauth2-proxy - ports: - - name: oauth2-http - port: 8080 - protocol: TCP - targetPort: 8080 \ No newline at end of file diff --git a/k8s/common/turbinia-autoscale-cpu.yaml b/k8s/common/turbinia-autoscale-cpu.yaml deleted file mode 100644 index 4684688df..000000000 --- a/k8s/common/turbinia-autoscale-cpu.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: autoscaling/v1 -kind: HorizontalPodAutoscaler -metadata: - name: turbinia-worker-autoscaling -spec: - scaleTargetRef: - apiVersion: apps/v1 - kind: Deployment - name: turbinia-worker - minReplicas: 5 - maxReplicas: 400 - 
targetCPUUtilizationPercentage: 90 diff --git a/k8s/common/turbinia-controller.yaml b/k8s/common/turbinia-controller.yaml deleted file mode 100644 index 2ba2fb3f5..000000000 --- a/k8s/common/turbinia-controller.yaml +++ /dev/null @@ -1,69 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: turbinia-controller - labels: - app: turbinia-controller -spec: - replicas: 1 - selector: - matchLabels: - app: turbinia-controller - template: - metadata: - annotations: - prometheus.io/port: "9200" - prometheus.io/scrape: "true" - labels: - app: turbinia-controller - spec: - serviceAccountName: turbinia - # The grace period needs to be set to the largest task timeout as - # set in the turbinia configuration file. - initContainers: - - name: init-filestore - image: busybox:1.28 - command: ['sh', '-c', 'chmod go+w /mnt/turbiniavolume'] - volumeMounts: - - mountPath: "/mnt/turbiniavolume" - name: turbiniavolume - containers: - - name: controller - image: us-docker.pkg.dev/osdfir-registry/turbinia/release/turbinia-controller:latest - securityContext: - privileged: true - env: - - name: TURBINIA_CONF - valueFrom: - configMapKeyRef: - name: turbinia-config - key: TURBINIA_CONF - - name: TURBINIA_EXTRA_ARGS - value: "-d" - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - volumeMounts: - - mountPath: "/var/run/lock" - name: lockfolder - readOnly: false - - mountPath: /mnt/turbiniavolume - name: turbiniavolume - ports: - - containerPort: 9200 - resources: - requests: - memory: "256Mi" - cpu: "500m" - limits: - memory: "8192Mi" - cpu: "32000m" - volumes: - - name: lockfolder - hostPath: - path: /var/run/lock - - name: turbiniavolume - persistentVolumeClaim: - claimName: turbiniavolume-claim - readOnly: false diff --git a/k8s/common/turbinia-server-metrics-service.yaml b/k8s/common/turbinia-server-metrics-service.yaml deleted file mode 100644 index 8c58c62b9..000000000 --- a/k8s/common/turbinia-server-metrics-service.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: turbinia-server-metrics - labels: - app: turbinia-server-metrics -spec: - ports: - - port: 9200 - targetPort: 9200 - selector: - app: turbinia-server \ No newline at end of file diff --git a/k8s/common/turbinia-server.yaml b/k8s/common/turbinia-server.yaml deleted file mode 100644 index 7d83cb15d..000000000 --- a/k8s/common/turbinia-server.yaml +++ /dev/null @@ -1,59 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: turbinia-server - labels: - app: turbinia-server -spec: - replicas: 1 - selector: - matchLabels: - app: turbinia-server - template: - metadata: - annotations: - prometheus.io/port: "9200" - prometheus.io/scrape: "true" - labels: - app: turbinia-server - spec: - serviceAccountName: turbinia - initContainers: - - name: init-filestore - image: busybox:1.28 - command: ['sh', '-c', 'chmod go+w /mnt/turbiniavolume'] - volumeMounts: - - mountPath: "/mnt/turbiniavolume" - name: turbiniavolume - containers: - - name: server - image: us-docker.pkg.dev/osdfir-registry/turbinia/release/turbinia-server:latest - env: - - name: TURBINIA_CONF - valueFrom: - configMapKeyRef: - name: turbinia-config - key: TURBINIA_CONF - - name: TURBINIA_EXTRA_ARGS - value: "-d" - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - volumeMounts: - - mountPath: /mnt/turbiniavolume - name: turbiniavolume - ports: - - containerPort: 9200 - resources: - requests: - memory: "256Mi" - cpu: "500m" - limits: - memory: "8192Mi" - cpu: "4000m" - volumes: - - name: 
turbiniavolume - persistentVolumeClaim: - claimName: turbiniavolume-claim - readOnly: false \ No newline at end of file diff --git a/k8s/common/turbinia-volume-claim-filestore.yaml b/k8s/common/turbinia-volume-claim-filestore.yaml deleted file mode 100644 index 1fd326cc6..000000000 --- a/k8s/common/turbinia-volume-claim-filestore.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: turbiniavolume-claim -spec: - accessModes: - - ReadWriteMany - storageClassName: "" - volumeName: turbiniavolume - resources: - requests: - storage: 1T \ No newline at end of file diff --git a/k8s/common/turbinia-volume-filestore.yaml b/k8s/common/turbinia-volume-filestore.yaml deleted file mode 100644 index 1b95ef1ac..000000000 --- a/k8s/common/turbinia-volume-filestore.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: turbiniavolume -spec: - capacity: - storage: 1T - accessModes: - - ReadWriteMany - nfs: - path: /turbiniavolume - server: \ No newline at end of file diff --git a/k8s/common/turbinia-worker-metrics-service.yaml b/k8s/common/turbinia-worker-metrics-service.yaml deleted file mode 100644 index 6dd22adf9..000000000 --- a/k8s/common/turbinia-worker-metrics-service.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: turbinia-worker-metrics - labels: - app: turbinia-worker-metrics -spec: - ports: - - port: 9200 - targetPort: 9200 - selector: - app: turbinia-worker \ No newline at end of file diff --git a/k8s/common/turbinia-worker.yaml b/k8s/common/turbinia-worker.yaml deleted file mode 100644 index 962613f8a..000000000 --- a/k8s/common/turbinia-worker.yaml +++ /dev/null @@ -1,83 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: turbinia-worker - labels: - app: turbinia-worker -spec: - replicas: 5 - selector: - matchLabels: - app: turbinia-worker - template: - metadata: - annotations: - prometheus.io/port: "9200" - prometheus.io/scrape: "true" - labels: - app: turbinia-worker - spec: - serviceAccountName: turbinia - # The grace period needs to be set to the largest task timeout as - # set in the turbinia configuration file. 
- terminationGracePeriodSeconds: 86400 - initContainers: - - name: init-filestore - image: busybox:1.28 - command: ['sh', '-c', 'chmod go+w /mnt/turbiniavolume'] - volumeMounts: - - mountPath: "/mnt/turbiniavolume" - name: turbiniavolume - containers: - - name: worker - image: us-docker.pkg.dev/osdfir-registry/turbinia/release/turbinia-worker:latest - lifecycle: - preStop: - exec: - command: - - "/bin/sh" - - "-c" - - "touch /tmp/turbinia-to-scaledown.lock && sleep 5 && /usr/bin/python3 /home/turbinia/check-lockfile.py" - securityContext: - privileged: true - env: - - name: TURBINIA_CONF - valueFrom: - configMapKeyRef: - name: turbinia-config - key: TURBINIA_CONF - - name: TURBINIA_EXTRA_ARGS - value: "-d" - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - volumeMounts: - - mountPath: "/dev" - name: dev - readOnly: true - - mountPath: "/var/run/lock" - name: lockfolder - readOnly: false - - mountPath: /mnt/turbiniavolume - name: turbiniavolume - ports: - - containerPort: 9200 - resources: - requests: - memory: "2048Mi" - cpu: "1500m" - limits: - memory: "65536Mi" - cpu: "31000m" - volumes: - - name: dev - hostPath: - path: /dev - - name: lockfolder - hostPath: - path: /var/run/lock - - name: turbiniavolume - persistentVolumeClaim: - claimName: turbiniavolume-claim - readOnly: false diff --git a/k8s/dfdewey/dfdewey-volume-claim-filestore.yaml b/k8s/dfdewey/dfdewey-volume-claim-filestore.yaml deleted file mode 100644 index f3331a89c..000000000 --- a/k8s/dfdewey/dfdewey-volume-claim-filestore.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: dfdeweyvolume-claim -spec: - accessModes: - - ReadWriteMany - storageClassName: "" - volumeName: dfdeweyvolume - resources: - requests: - storage: diff --git a/k8s/dfdewey/dfdewey-volume-filestore.yaml b/k8s/dfdewey/dfdewey-volume-filestore.yaml deleted file mode 100644 index 0a4deed54..000000000 --- a/k8s/dfdewey/dfdewey-volume-filestore.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1 -kind: PersistentVolume -metadata: - name: dfdeweyvolume -spec: - capacity: - storage: - accessModes: - - ReadWriteMany - nfs: - path: / - server: diff --git a/k8s/dfdewey/opensearch-configmap.yaml b/k8s/dfdewey/opensearch-configmap.yaml deleted file mode 100644 index 158b31b47..000000000 --- a/k8s/dfdewey/opensearch-configmap.yaml +++ /dev/null @@ -1,11 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: opensearch-config - labels: - app: opensearch -data: - discovery.type: single-node - plugins.security.disabled: "true" - OPENSEARCH_JAVA_OPTS: -Xms32g -Xmx32g - network.host: 0.0.0.0 diff --git a/k8s/dfdewey/opensearch-server.yaml b/k8s/dfdewey/opensearch-server.yaml deleted file mode 100644 index 626414b0c..000000000 --- a/k8s/dfdewey/opensearch-server.yaml +++ /dev/null @@ -1,48 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: dfdewey-opensearch -spec: - replicas: 1 - selector: - matchLabels: - app: dfdewey-opensearch - template: - metadata: - labels: - app: dfdewey-opensearch - spec: - securityContext: - fsGroup: 1000 - containers: - - name: opensearch - image: opensearchproject/opensearch:latest - resources: - requests: - memory: "32Gi" - ports: - - containerPort: 9200 - envFrom: - - configMapRef: - name: opensearch-config - volumeMounts: - - mountPath: /usr/share/opensearch/data - name: dfdeweyvolume - subPath: - initContainers: - - name: opensearch-init - image: busybox:latest - volumeMounts: - - mountPath: /usr/share/opensearch/data - name: dfdeweyvolume - subPath: 
- securityContext: - privileged: true - command: ['sh', '-c', "chown -R 1000:1000 /usr/share/opensearch/data; sysctl -w vm.max_map_count=262144; sysctl -p"] - volumes: - - name: dfdeweyvolume - persistentVolumeClaim: - claimName: dfdeweyvolume-claim - readOnly: false - nodeSelector: - cloud.google.com/gke-nodepool: default-pool diff --git a/k8s/dfdewey/opensearch-service.yaml b/k8s/dfdewey/opensearch-service.yaml deleted file mode 100644 index 41cd80503..000000000 --- a/k8s/dfdewey/opensearch-service.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: dfdewey-opensearch - labels: - app: dfdewey-opensearch -spec: - type: NodePort - ports: - - port: 9200 - selector: - app: dfdewey-opensearch diff --git a/k8s/dfdewey/postgres-configmap.yaml b/k8s/dfdewey/postgres-configmap.yaml deleted file mode 100644 index 5bce99792..000000000 --- a/k8s/dfdewey/postgres-configmap.yaml +++ /dev/null @@ -1,10 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: postgres-config - labels: - app: postgres -data: - PGDATA: /var/lib/postgresql/data/dfdewey/ - POSTGRES_USER: dfdewey - POSTGRES_PASSWORD: password diff --git a/k8s/dfdewey/postgres-server.yaml b/k8s/dfdewey/postgres-server.yaml deleted file mode 100644 index 89ee4eb22..000000000 --- a/k8s/dfdewey/postgres-server.yaml +++ /dev/null @@ -1,33 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: dfdewey-postgres -spec: - replicas: 1 - selector: - matchLabels: - app: dfdewey-postgres - template: - metadata: - labels: - app: dfdewey-postgres - spec: - containers: - - name: postgres - image: postgres:latest - ports: - - containerPort: 5432 - envFrom: - - configMapRef: - name: postgres-config - volumeMounts: - - mountPath: /var/lib/postgresql/data - name: dfdeweyvolume - subPath: - volumes: - - name: dfdeweyvolume - persistentVolumeClaim: - claimName: dfdeweyvolume-claim - readOnly: false - nodeSelector: - cloud.google.com/gke-nodepool: default-pool diff --git a/k8s/dfdewey/postgres-service.yaml b/k8s/dfdewey/postgres-service.yaml deleted file mode 100644 index 396ad5759..000000000 --- a/k8s/dfdewey/postgres-service.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: dfdewey-postgres - labels: - app: dfdewey-postgres -spec: - type: NodePort - ports: - - port: 5432 - selector: - app: dfdewey-postgres diff --git a/k8s/dfdewey/setup-dfdewey.sh b/k8s/dfdewey/setup-dfdewey.sh deleted file mode 100755 index 50bddb234..000000000 --- a/k8s/dfdewey/setup-dfdewey.sh +++ /dev/null @@ -1,40 +0,0 @@ -#!/bin/sh -# Turbinia dfDewey GKE deployment script -# This script can be used to deploy dfDewey to Turbinia in GKE. -# Requirements: -# - have 'gcloud' installed. -# - authenticate against your GKE cluster with "gcloud container clusters get-credentials" - -echo "Deploying dfDewey datastores" -TURBINIA_CONF=$1 -if [ -z $1 ]; then - echo "No config found as parameter, please specify a Turbinia config file." 
- exit 0 -fi - -kubectl create -f dfdewey-volume-filestore.yaml -kubectl create -f dfdewey-volume-claim-filestore.yaml - -# PostgreSQL -kubectl create -f postgres-configmap.yaml -kubectl create -f postgres-server.yaml -kubectl create -f postgres-service.yaml - -# Opensearch -kubectl create -f opensearch-configmap.yaml -kubectl create -f opensearch-server.yaml -kubectl create -f opensearch-service.yaml - -# Update Turbinia config -DFDEWEY_PG_IP=$(kubectl get -o jsonpath='{.spec.clusterIP}' service dfdewey-postgres) -DFDEWEY_OS_IP=$(kubectl get -o jsonpath='{.spec.clusterIP}' service dfdewey-opensearch) -sed -i -e "s/^DFDEWEY_PG_HOST = .*$/DFDEWEY_PG_HOST = \'$DFDEWEY_PG_IP\'/g" $TURBINIA_CONF -sed -i -e "s/^DFDEWEY_OS_HOST = .*$/DFDEWEY_OS_HOST = \'$DFDEWEY_OS_IP\'/g" $TURBINIA_CONF -base64 -w0 $TURBINIA_CONF > turbinia-config.b64 -kubectl create configmap turbinia-config --from-file=TURBINIA_CONF=turbinia-config.b64 --dry-run=client -o yaml | kubectl apply -f - - -# Restart server and worker -kubectl rollout restart -f turbinia-server.yaml -kubectl rollout restart -f turbinia-worker.yaml - -echo "dfDewey datastore deployment complete" diff --git a/k8s/gcp-pubsub/destroy-pubsub.sh b/k8s/gcp-pubsub/destroy-pubsub.sh deleted file mode 100755 index 691621d6b..000000000 --- a/k8s/gcp-pubsub/destroy-pubsub.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh -# Turbinia GKE Pubsub destroy script. -# This script can be used to destroy the Turbinia Pubsub deployment in GKE. -# Please use the destroy-pubsub-gke.sh script if you'd like to also delete -# the cluster and other GCP resources created as part of the deployment. -# Requirements: -# - have 'gcloud' and 'kubectl' installed. -# - authenticate against your GKE cluster with "gcloud container clusters get-credentials" - -kubectl delete configmap turbinia-config -kubectl delete -f turbinia-autoscale-cpu.yaml -kubectl delete -f turbinia-server-metrics-service.yaml -kubectl delete -f turbinia-worker-metrics-service.yaml -kubectl delete -f turbinia-worker.yaml -kubectl delete -f turbinia-server.yaml -kubectl delete -f turbinia-volume-claim-filestore.yaml -kubectl delete -f turbinia-volume-filestore.yaml \ No newline at end of file diff --git a/k8s/gcp-pubsub/setup-pubsub.sh b/k8s/gcp-pubsub/setup-pubsub.sh deleted file mode 100755 index 6ec149b6b..000000000 --- a/k8s/gcp-pubsub/setup-pubsub.sh +++ /dev/null @@ -1,26 +0,0 @@ -#!/bin/sh -# Turbinia GKE Pubsub deployment script. -# This script can be used to deploy Turbinia configured with Pubsub to GKE. -# Please use the deploy-celery-gke.sh script if you'd also like to create -# the GKE cluster and associated GCP resources required by Turbinia. -# Requirements: -# - have 'gcloud' and 'kubectl' installed. -# - authenticate against your GKE cluster with "gcloud container clusters get-credentials" - -TURBINIA_CONF=$1 -if [ -z $1 ]; then - echo "No config found as parameter, please specify a Turbinia config file." 
- exit 0 -fi - -base64 -w0 $TURBINIA_CONF > turbinia-config.b64 -kubectl create configmap turbinia-config --from-file=TURBINIA_CONF=turbinia-config.b64 -kubectl create -f turbinia-volume-filestore.yaml -kubectl create -f turbinia-volume-claim-filestore.yaml -kubectl create -f turbinia-server-metrics-service.yaml -kubectl create -f turbinia-worker-metrics-service.yaml -kubectl create -f turbinia-server.yaml -kubectl create -f turbinia-worker.yaml -kubectl create -f turbinia-autoscale-cpu.yaml - -echo "Turbinia deployment complete" diff --git a/k8s/tools/.clusterconfig b/k8s/tools/.clusterconfig deleted file mode 100644 index 3a5cdf9e6..000000000 --- a/k8s/tools/.clusterconfig +++ /dev/null @@ -1,52 +0,0 @@ -#!/bin/bash -# Turbinia parameters for Google Cloud Kubernetes deployment. Please review -# the default configuration and update if necessary based on billing restrictions -# or quota requirements. - -# A unique ID per Turbinia instance. Used for keeping multiple Turbinia instances -# seperate and to name newly created GCP resources such as the cluster name. -INSTANCE_ID='turbinia-main' - -# The Turbinia config name -TURBINIA_CONFIG='.turbiniarc' - -# Folder to save configured Deployment files to. Please use an absolute path else -# the directory will default to within the root of the Turbinia k8s folder. -DEPLOYMENT_FOLDER="deployment/$INSTANCE_ID" - -# The region and zone where Turbinia will run. Note that Turbinia does -# not currently support multi-zone operation. -ZONE='us-central1-f' -REGION='us-central1' -DATASTORE_REGION='us-central' - -# VPC network to configure the cluster in. -VPC_NETWORK='default' - -# Control plane IP range for the control pane VPC. Due to the Turbinia -# cluster being private, this is required for the control pane and cluster -# to communicate privately. -VPC_CONTROL_PANE='172.16.0.0/28' # Set to default - -# The cluster name, number of minimum and maximum nodes, machine type and disk -# size of the deployed cluster and nodes within it. -CLUSTER_NAME=$INSTANCE_ID -CLUSTER_MIN_NODE_SIZE='1' -CLUSTER_MAX_NODE_SIZE='20' -CLUSTER_MACHINE_TYPE='e2-standard-32' -CLUSTER_MACHINE_SIZE='200' - -# The Filestore share name, and size. Filestore will be used to retain shared output -# from Turbinia, such as logs. Please specify size in terabytes(TB). -FILESTORE_NAME='turbiniavolume' -FILESTORE_CAPACITY='10T' - -# Filestore share names, and sizes for dfDewey datastores (if deployed). -FILESTORE_DFDEWEY_NAME='dfdeweyvolume' -FILESTORE_DFDEWEY_CAPACITY='6T' -FILESTORE_PG_PATH='postgres' -FILESTORE_OS_PATH='opensearch' - -# Any Jobs added to this list will be disabled by default at start-up. Job names -# entered here are case insensitive, but must be quoted. -DISABLED_JOBS="['BinaryExtractorJob', 'BulkExtractorJob', 'HindsightJob', 'PhotorecJob', 'VolatilityJob']" \ No newline at end of file diff --git a/k8s/tools/deploy-celery-gke.sh b/k8s/tools/deploy-celery-gke.sh deleted file mode 100755 index d56ed2ed7..000000000 --- a/k8s/tools/deploy-celery-gke.sh +++ /dev/null @@ -1,257 +0,0 @@ -#!/bin/bash -# Turbinia GKE deployment script. -# This script can be used to deploy the Turbinia Celery stack to GKE. -# Requirements: -# - have 'gcloud' and 'kubectl' installed. -# - authenticate against your GCP project with "gcloud auth login" -# - account being used to run script should have an IAM policy of instance.admin and container.admin used to create the necessary resources. 
-# - optionally have the GCP project set with "gcloud config set project [you-project-name]" -# -# Use --help to show you commands supported. - -set -o posix -set -e - -# Source cluster config to pull specs to create cluster from. Please review -# the config file and make any necessary changes prior to executing this script -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" -source $DIR/.clusterconfig -cd $DIR/.. - -if [[ "$*" == *--help || "$*" == *-h ]] ; then - echo "Turbinia deployment script for Kubernetes environment" - echo "Options:" - echo "--build-dev Deploy Turbinia development docker image" - echo "--build-experimental Deploy Turbinia experimental docker image" - echo "--no-cluster Do not create the cluster" - echo "--no-filestore Do not deploy Turbinia Filestore" - echo "--no-node-autoscale Do not enable Node autoscaling" - echo "--deploy-controller Deploy Turbinia controller for load testing and troubleshooting" - echo "--deploy-dfdewey Deploy dfDewey datastores" - exit 1 -fi - -# Check if gcloud is installed -if [[ -z "$( which gcloud )" ]] ; then - echo "gcloud CLI not found. Please follow the instructions at " - echo "https://cloud.google.com/sdk/docs/install to install the gcloud " - echo "package first." - exit 1 -fi - -# Check if kubectl is installed -if [[ -z "$( which kubectl )" ]] ; then - echo "kubectl CLI not found. Please follow the instructions at " - echo "https://kubernetes.io/docs/tasks/tools/ to install the kubectl " - echo "package first." - exit 1 -fi - -# Check configured gcloud project -if [[ -z "$DEVSHELL_PROJECT_ID" ]] ; then - DEVSHELL_PROJECT_ID=$(gcloud config get-value project) - ERRMSG="ERROR: Could not get configured project. Please either restart " - ERRMSG+="Google Cloudshell, or set configured project with " - ERRMSG+="'gcloud config set project PROJECT' when running outside of Cloudshell." - if [[ -z "$DEVSHELL_PROJECT_ID" ]] ; then - echo $ERRMSG - exit 1 - fi - echo "Environment variable \$DEVSHELL_PROJECT_ID was not set at start time " - echo "so attempting to get project config from gcloud config." - echo -n "Do you want to use $DEVSHELL_PROJECT_ID as the target project? (y / n) > " - read response - if [[ $response != "y" && $response != "Y" ]] ; then - echo $ERRMSG - exit 1 - fi -fi - -# TODO: Do real check to make sure credentials have adequate roles -if [[ $( gcloud -q --project $DEVSHELL_PROJECT_ID auth list --filter="status:ACTIVE" --format="value(account)" | wc -l ) -eq 0 ]] ; then - echo "No gcloud credentials found. Use 'gcloud auth login' and 'gcloud auth application-default login' to log in" - exit 1 -fi - -# Enable IAM services -gcloud -q --project $DEVSHELL_PROJECT_ID services enable iam.googleapis.com - -# Create Turbinia service account with necessary IAM roles. The service account will be used at -# container runtime in order to have the necessary permissions to attach and detach GCP disks as -# well as write to stackdriver logging and error reporting. 
-SA_NAME="turbinia" -SA_MEMBER="serviceAccount:$SA_NAME@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com" -if [[ -z "$(gcloud -q --project $DEVSHELL_PROJECT_ID iam service-accounts list --format='value(name)' --filter=name:/$SA_NAME@)" ]] ; then - gcloud --project $DEVSHELL_PROJECT_ID iam service-accounts create "${SA_NAME}" --display-name "${SA_NAME}" - # Grant IAM roles to the service account - echo "Grant permissions on service account" - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/compute.instanceAdmin' - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/logging.logWriter' - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/errorreporting.writer' - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/iam.serviceAccountUser' -fi - -echo "Enabling Compute API" -gcloud -q --project $DEVSHELL_PROJECT_ID services enable compute.googleapis.com - -# Check if the configured VPC network exists. -networks=$(gcloud -q --project $DEVSHELL_PROJECT_ID compute networks list --filter="name=$VPC_NETWORK" |wc -l) -if [[ "${networks}" -lt "2" ]]; then - echo "ERROR: VPC network $VPC_NETWORK not found, please create this first." - exit 1 -fi - -# Update Docker image if flag was provided else use default -if [[ "$*" == *--build-dev* ]] ; then - TURBINIA_SERVER_IMAGE="us-docker.pkg.dev\/osdfir-registry\/turbinia\/release\/turbinia-server-dev:latest" - TURBINIA_WORKER_IMAGE="us-docker.pkg.dev\/osdfir-registry\/turbinia\/release\/turbinia-worker-dev:latest" -elif [[ "$*" == *--build-experimental* ]] ; then - TURBINIA_SERVER_IMAGE="us-docker.pkg.dev\/osdfir-registry\/turbinia\/release\/turbinia-server-experimental:latest" - TURBINIA_WORKER_IMAGE="us-docker.pkg.dev\/osdfir-registry\/turbinia\/release\/turbinia-worker-experimental:latest" -fi - -echo "Setting docker image to $TURBINIA_SERVER_IMAGE and $TURBINIA_WORKER_IMAGE" -echo "Deploying cluster to project $DEVSHELL_PROJECT_ID" - -# Setup appropriate directories and copy of deployment templates and Turbinia config -echo "Copying over template deployment files to $DEPLOYMENT_FOLDER" -mkdir -p $DEPLOYMENT_FOLDER -cp common/* $DEPLOYMENT_FOLDER -cp celery/* $DEPLOYMENT_FOLDER -if [[ "$*" == *--deploy-dfdewey* ]] ; then - cp dfdewey/* $DEPLOYMENT_FOLDER -fi -cp ../turbinia/config/turbinia_config_tmpl.py $DEPLOYMENT_FOLDER/$TURBINIA_CONFIG - -# Create GKE cluster and authenticate to it -if [[ "$*" != *--no-cluster* ]] ; then - echo "Enabling Container API" - gcloud -q --project $DEVSHELL_PROJECT_ID services enable container.googleapis.com - if [[ "$*" != *--no-node-autoscale* ]] ; then - echo "Creating cluster $CLUSTER_NAME with a minimum node size of $CLUSTER_MIN_NODE_SIZE to scale up to a maximum node size of $CLUSTER_MAX_NODE_SIZE. 
Each node will be configured with a machine type $CLUSTER_MACHINE_TYPE and disk size of $CLUSTER_MACHINE_SIZE" - gcloud -q --project $DEVSHELL_PROJECT_ID container clusters create $CLUSTER_NAME --machine-type $CLUSTER_MACHINE_TYPE --disk-size $CLUSTER_MACHINE_SIZE --num-nodes $CLUSTER_MIN_NODE_SIZE --master-ipv4-cidr $VPC_CONTROL_PANE --network $VPC_NETWORK --zone $ZONE --shielded-secure-boot --shielded-integrity-monitoring --no-enable-master-authorized-networks --enable-private-nodes --enable-ip-alias --scopes "https://www.googleapis.com/auth/cloud-platform" --labels "turbinia-infra=true" --workload-pool=$DEVSHELL_PROJECT_ID.svc.id.goog --default-max-pods-per-node=20 --enable-autoscaling --min-nodes=$CLUSTER_MIN_NODE_SIZE --max-nodes=$CLUSTER_MAX_NODE_SIZE - else - echo "--no-node-autoscale specified. Node size will remain constant at $CLUSTER_MIN_NODE_SIZE node(s)" - echo "Creating cluster $CLUSTER_NAME with a node size of $CLUSTER_MIN_NODE_SIZE. Each node will be configured with a machine type $CLUSTER_MACHINE_TYPE and disk size of $CLUSTER_MACHINE_SIZE" - gcloud -q --project $DEVSHELL_PROJECT_ID container clusters create $CLUSTER_NAME --machine-type $CLUSTER_MACHINE_TYPE --disk-size $CLUSTER_MACHINE_SIZE --num-nodes $CLUSTER_MIN_NODE_SIZE --master-ipv4-cidr $VPC_CONTROL_PANE --network $VPC_NETWORK --zone $ZONE --shielded-secure-boot --shielded-integrity-monitoring --no-enable-master-authorized-networks --enable-private-nodes --enable-ip-alias --scopes "https://www.googleapis.com/auth/cloud-platform" --labels "turbinia-infra=true" --workload-pool=$DEVSHELL_PROJECT_ID.svc.id.goog --default-max-pods-per-node=20 - fi -else - echo "--no-cluster specified. Authenticating to pre-existing cluster $CLUSTER_NAME" -fi - -# Authenticate to cluster -gcloud -q --project $DEVSHELL_PROJECT_ID container clusters get-credentials $CLUSTER_NAME --zone $ZONE -# Create Kubernetes service account -kubectl get serviceaccounts $SA_NAME || kubectl create serviceaccount $SA_NAME --namespace default -gcloud iam service-accounts add-iam-policy-binding $SA_NAME@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member "serviceAccount:$DEVSHELL_PROJECT_ID.svc.id.goog[default/$SA_NAME]" -kubectl annotate serviceaccount $SA_NAME --overwrite --namespace default iam.gke.io/gcp-service-account=$SA_NAME@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com - -# Go to deployment folder to make changes files -cd $DEPLOYMENT_FOLDER - -# Add service account to deployments -sed -i -e "s/serviceAccountName: .*/serviceAccountName: $SA_NAME/g" turbinia-server.yaml turbinia-worker.yaml redis-server.yaml - -# Update Turbinia config with project info -echo "Updating $TURBINIA_CONFIG config with project info" -sed -i -e "s/^INSTANCE_ID = .*$/INSTANCE_ID = '$INSTANCE_ID'/g" $TURBINIA_CONFIG -sed -i -e "s/^TURBINIA_PROJECT = .*$/TURBINIA_PROJECT = '$DEVSHELL_PROJECT_ID'/g" $TURBINIA_CONFIG -sed -i -e "s/^TURBINIA_ZONE = .*$/TURBINIA_ZONE = '$ZONE'/g" $TURBINIA_CONFIG -sed -i -e "s/^TURBINIA_REGION = .*$/TURBINIA_REGION = '$REGION'/g" $TURBINIA_CONFIG -sed -i -e "s/^CLOUD_PROVIDER = .*$/CLOUD_PROVIDER = 'GCP'/g" $TURBINIA_CONFIG - -# Create File Store instance and update deployment files with created instance -if [[ "$*" != *--no-filestore* ]] ; then - echo "Enabling GCP Filestore API" - gcloud -q --project $DEVSHELL_PROJECT_ID services enable file.googleapis.com - echo "Creating Filestore instance $FILESTORE_NAME with capacity $FILESTORE_CAPACITY" - gcloud -q --project $DEVSHELL_PROJECT_ID filestore instances 
create $FILESTORE_NAME --file-share=name=$FILESTORE_NAME,capacity=$FILESTORE_CAPACITY --zone=$ZONE --network=name=$VPC_NETWORK -else - echo "Using pre existing Filestore instance $FILESTORE_NAME with capacity $FILESTORE_CAPACITY" -fi - -echo "Updating $TURBINIA_CONFIG config with Filestore configuration and setting output directories" -FILESTORE_IP=$(gcloud -q --project $DEVSHELL_PROJECT_ID filestore instances describe $FILESTORE_NAME --zone=$ZONE --format='value(networks.ipAddresses)' --flatten="networks[].ipAddresses[]") -FILESTORE_LOGS="'\/mnt\/$FILESTORE_NAME\/logs'" -FILESTORE_OUTPUT="'\/mnt\/$FILESTORE_NAME\/output'" -sed -i -e "s//$FILESTORE_IP/g" turbinia-volume-filestore.yaml -sed -i -e "s/turbiniavolume/$FILESTORE_NAME/g" turbinia-volume-filestore.yaml turbinia-volume-claim-filestore.yaml turbinia-server.yaml turbinia-worker.yaml redis-server.yaml -sed -i -e "s/storage: .*/storage: $FILESTORE_CAPACITY/g" turbinia-volume-filestore.yaml turbinia-volume-claim-filestore.yaml -sed -i -e "s/^LOG_DIR = .*$/LOG_DIR = $FILESTORE_LOGS/g" $TURBINIA_CONFIG -sed -i -e "s/^MOUNT_DIR_PREFIX = .*$/MOUNT_DIR_PREFIX = '\/mnt\/turbinia'/g" $TURBINIA_CONFIG -sed -i -e "s/^SHARED_FILESYSTEM = .*$/SHARED_FILESYSTEM = True/g" $TURBINIA_CONFIG -sed -i -e "s/^OUTPUT_DIR = .*$/OUTPUT_DIR = $FILESTORE_OUTPUT/g" $TURBINIA_CONFIG - -# Update Turbinia config with Redis/Celery parameters -echo "Updating $TURBINIA_CONFIG with Redis/Celery config" -sed -i -e "s/^TASK_MANAGER = .*$/TASK_MANAGER = 'Celery'/g" $TURBINIA_CONFIG -sed -i -e "s/^STATE_MANAGER = .*$/STATE_MANAGER = 'Redis'/g" $TURBINIA_CONFIG -sed -i -e "s/^REDIS_HOST = .*$/REDIS_HOST = 'redis.default.svc.cluster.local'/g" $TURBINIA_CONFIG -sed -i -e "s/^DEBUG_TASKS = .*$/DEBUG_TASKS = True/g" $TURBINIA_CONFIG - -# Enable Stackdriver Logging and Stackdriver Traceback -echo "Enabling Cloud Error Reporting and Logging APIs" -gcloud -q --project $DEVSHELL_PROJECT_ID services enable clouderrorreporting.googleapis.com -gcloud -q --project $DEVSHELL_PROJECT_ID services enable logging.googleapis.com -echo "Updating $TURBINIA_CONFIG to enable Stackdriver Traceback and Logging" -sed -i -e "s/^STACKDRIVER_LOGGING = .*$/STACKDRIVER_LOGGING = True/g" $TURBINIA_CONFIG -sed -i -e "s/^STACKDRIVER_TRACEBACK = .*$/STACKDRIVER_TRACEBACK = True/g" $TURBINIA_CONFIG - -# Enable Prometheus -echo "Updating $TURBINIA_CONFIG to enable Prometheus application metrics" -sed -i -e "s/^PROMETHEUS_ENABLED = .*$/PROMETHEUS_ENABLED = True/g" $TURBINIA_CONFIG - -# Disable some jobs -echo "Updating $TURBINIA_CONFIG with disabled jobs" -sed -i -e "s/^DISABLED_JOBS = .*$/DISABLED_JOBS = $DISABLED_JOBS/g" $TURBINIA_CONFIG - -# Set appropriate docker image in deployment file if user specified -if [[ ! -z "$TURBINIA_SERVER_IMAGE" && ! -z "$TURBINIA_WORKER_IMAGE" ]] ; then - echo "Updating deployment files with docker image $TURBINIA_SERVER_IMAGE and $TURBINIA_WORKER_IMAGE" - sed -i -e "s/us-docker.pkg.dev\/osdfir-registry\/turbinia\/release\/turbinia-server:latest$/$TURBINIA_SERVER_IMAGE/g" turbinia-server.yaml - sed -i -e "s/us-docker.pkg.dev\/osdfir-registry\/turbinia\/release\/turbinia-worker:latest$/$TURBINIA_WORKER_IMAGE/g" turbinia-worker.yaml -fi - -# Deploy to cluster -echo "Deploying Turbinia to $CLUSTER_NAME cluster" -./setup-celery.sh $TURBINIA_CONFIG - -# Deploy Turbinia Controller -if [[ "$*" == *--deploy-controller* ]] ; then - echo "--deploy-controller specified. Deploying Turbinia controller." 
- kubectl create -f turbinia-controller.yaml -fi - -# Deploy dfDewey -if [[ "$*" == *--deploy-dfdewey* ]] ; then - echo "Deploying dfDewey datastores to $CLUSTER_NAME cluster" - if [[ -z "$(gcloud -q --project $DEVSHELL_PROJECT_ID filestore instances list --format='value(name)' --filter=name:$FILESTORE_DFDEWEY_NAME)" ]] ; then - echo "Creating Filestore instance $FILESTORE_DFDEWEY_NAME with capacity $FILESTORE_DFDEWEY_CAPACITY" - gcloud -q --project $DEVSHELL_PROJECT_ID filestore instances create $FILESTORE_DFDEWEY_NAME --file-share=name=$FILESTORE_DFDEWEY_NAME,capacity=$FILESTORE_DFDEWEY_CAPACITY --zone=$ZONE --network=name=$VPC_NETWORK - else - echo "Using pre existing Filestore instance $FILESTORE_DFDEWEY_NAME with capacity $FILESTORE_DFDEWEY_CAPACITY" - fi - FILESTORE_DFDEWEY_IP=$(gcloud -q --project $DEVSHELL_PROJECT_ID filestore instances describe $FILESTORE_DFDEWEY_NAME --zone=$ZONE --format='value(networks.ipAddresses)' --flatten="networks[].ipAddresses[]") - sed -i -e "s//$FILESTORE_DFDEWEY_NAME/g" dfdewey-volume-filestore.yaml - sed -i -e "s//$FILESTORE_DFDEWEY_IP/g" dfdewey-volume-filestore.yaml - sed -i -e "s//$FILESTORE_DFDEWEY_CAPACITY/g" dfdewey-volume-filestore.yaml dfdewey-volume-claim-filestore.yaml - sed -i -e "s//$FILESTORE_PG_PATH/g" postgres-server.yaml - sed -i -e "s//$FILESTORE_OS_PATH/g" opensearch-server.yaml - - ./setup-dfdewey.sh $TURBINIA_CONFIG -fi - -# Create backup of turbinia config file if it exists -TURBINIA_OUT="$HOME/.turbiniarc" -if [[ -a $TURBINIA_OUT ]] ; then - backup_file="${TURBINIA_OUT}.$( date +%s )" - mv $TURBINIA_OUT $backup_file - echo "Backing up old Turbinia config $TURBINIA_CONFIG to $backup_file" -fi - -# Make a copy of Turbinia config in user home directory -echo "Creating a copy of Turbinia config in $TURBINIA_OUT" -cp $TURBINIA_CONFIG $TURBINIA_OUT - -echo "Turbinia GKE was succesfully deployed!" -echo "Authenticate via: gcloud container clusters get-credentials $CLUSTER_NAME --zone $ZONE" \ No newline at end of file diff --git a/k8s/tools/deploy-pubsub-gke.sh b/k8s/tools/deploy-pubsub-gke.sh deleted file mode 100755 index 875d4bcb6..000000000 --- a/k8s/tools/deploy-pubsub-gke.sh +++ /dev/null @@ -1,336 +0,0 @@ -#!/bin/bash -# Turbinia GKE deployment script -# This script can be used to deploy the Turbinia stack to GKE PubSub. -# Requirements: -# - have 'gcloud' and 'kubectl' installed. -# - authenticate against your GCP project with "gcloud auth login" -# - optionally have the GCP project set with "gcloud config set project [you-project-name]" -# -# Use --help to show you commands supported. - -set -o posix -set -e - -# Source cluster config to pull specs to create cluster from. Please review -# the config file and make any necessary changes prior to executing this script -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" -source $DIR/.clusterconfig -cd $DIR/.. 
- -if [[ "$*" == *--help ]] ; then - echo "Turbinia deployment script for Kubernetes environment" - echo "Options:" - echo "--build-dev Deploy Turbinia development docker image" - echo "--build-experimental Deploy Turbinia experimental docker image" - echo "--no-cloudfunctions Do not deploy Turbinia Cloud Functions" - echo "--no-appengine Do not enable App Engine" - echo "--no-datastore Do not configure Turbinia Datastore" - echo "--no-filestore Do not deploy Turbinia Filestore" - echo "--no-node-autoscale Do not enable Node autoscaling" - echo "--no-gcs Do not create a GCS bucket" - echo "--no-pubsub Do not create the PubSub and PSQ topic/subscription" - echo "--no-cluster Do not create the cluster" - echo "--deploy-controller Deploy Turbinia controller for load testing and troubleshooting" - echo "--deploy-dfdewey Deploy dfDewey datastores" - exit 1 -fi - -# Check if gcloud is installed -if [[ -z "$( which gcloud )" ]] ; then - echo "gcloud CLI not found. Please follow the instructions at " - echo "https://cloud.google.com/sdk/docs/install to install the gcloud " - echo "package first." - exit 1 -fi - -# Check if kubectl is installed -if [[ -z "$( which kubectl )" ]] ; then - echo "kubectl CLI not found. Please follow the instructions at " - echo "https://kubernetes.io/docs/tasks/tools/ to install the kubectl " - echo "package first." - exit 1 -fi - -# Check configured gcloud project -if [[ -z "$DEVSHELL_PROJECT_ID" ]] ; then - DEVSHELL_PROJECT_ID=$(gcloud config get-value project) - ERRMSG="ERROR: Could not get configured project. Please either restart " - ERRMSG+="Google Cloudshell, or set configured project with " - ERRMSG+="'gcloud config set project PROJECT' when running outside of Cloudshell." - if [[ -z "$DEVSHELL_PROJECT_ID" ]] ; then - echo $ERRMSG - exit 1 - fi - echo "Environment variable \$DEVSHELL_PROJECT_ID was not set at start time " - echo "so attempting to get project config from gcloud config." - echo -n "Do you want to use $DEVSHELL_PROJECT_ID as the target project? (y / n) > " - read response - if [[ $response != "y" && $response != "Y" ]] ; then - echo $ERRMSG - exit 1 - fi -fi - -# TODO: Do real check to make sure credentials have adequate roles -if [[ $( gcloud -q --project $DEVSHELL_PROJECT_ID auth list --filter="status:ACTIVE" --format="value(account)" | wc -l ) -eq 0 ]] ; then - echo "No gcloud credentials found. Use 'gcloud auth login' and 'gcloud auth application-default login' to log in" - exit 1 -fi - -# Enable IAM services -gcloud -q --project $DEVSHELL_PROJECT_ID services enable iam.googleapis.com - - -# Create Turbinia service account with necessary IAM roles. The service account will be used at -# container runtime in order to have the necessary permissions to attach and detach GCP disks, to -# access GCP Pubsub, Datastore, Cloud Functions, GCS, and to write logs to stackdriver logging and -# error reporting. 
-SA_NAME="turbinia" -SA_MEMBER="serviceAccount:$SA_NAME@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com" -if [[ -z "$(gcloud -q --project $DEVSHELL_PROJECT_ID iam service-accounts list --format='value(name)' --filter=name:/$SA_NAME@)" ]] ; then - gcloud --project $DEVSHELL_PROJECT_ID iam service-accounts create "${SA_NAME}" --display-name "${SA_NAME}" - # Grant IAM roles to the service account - echo "Grant permissions on service account" - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/cloudfunctions.admin' - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/editor' - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/cloudsql.admin' - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/datastore.indexAdmin' - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/logging.logWriter' - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/errorreporting.writer' - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/pubsub.admin' - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/servicemanagement.admin' - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/storage.admin' - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/compute.admin' -fi - -echo "Enabling Compute API" -gcloud -q --project $DEVSHELL_PROJECT_ID services enable compute.googleapis.com - -# Check if the configured VPC network exists. -networks=$(gcloud -q --project $DEVSHELL_PROJECT_ID compute networks list --filter="name=$VPC_NETWORK" |wc -l) -if [[ "${networks}" -lt "2" ]]; then - echo "ERROR: VPC network $VPC_NETWORK not found, please create this first." - exit 1 -fi - -# Update Docker image if flag was provided else use default -if [[ "$*" == *--build-dev* ]] ; then - TURBINIA_SERVER_IMAGE="us-docker.pkg.dev\/osdfir-registry\/turbinia\/release\/turbinia-server-dev:latest" - TURBINIA_WORKER_IMAGE="us-docker.pkg.dev\/osdfir-registry\/turbinia\/release\/turbinia-worker-dev:latest" -elif [[ "$*" == *--build-experimental* ]] ; then - TURBINIA_SERVER_IMAGE="us-docker.pkg.dev\/osdfir-registry\/turbinia\/release\/turbinia-server-experimental:latest" - TURBINIA_WORKER_IMAGE="us-docker.pkg.dev\/osdfir-registry\/turbinia\/release\/turbinia-worker-experimental:latest" -fi - -echo "Setting docker image to $TURBINIA_SERVER_IMAGE and $TURBINIA_WORKER_IMAGE" -echo "Deploying cluster to project $DEVSHELL_PROJECT_ID" - -# Setup appropriate directories and copy of deployment templates and Turbinia config -echo "Copying over template deployment files to $DEPLOYMENT_FOLDER" -mkdir -p $DEPLOYMENT_FOLDER -cp common/* $DEPLOYMENT_FOLDER -cp gcp-pubsub/* $DEPLOYMENT_FOLDER -if [[ "$*" == *--deploy-dfdewey* ]] ; then - cp dfdewey/* $DEPLOYMENT_FOLDER -fi -cp ../turbinia/config/turbinia_config_tmpl.py $DEPLOYMENT_FOLDER/$TURBINIA_CONFIG - -# Deploy cloud functions -if [[ "$*" != *--no-cloudfunctions* ]] ; then - echo "Deploying cloud functions" - gcloud -q --project $DEVSHELL_PROJECT_ID services enable cloudfunctions.googleapis.com - gcloud -q --project $DEVSHELL_PROJECT_ID services enable cloudbuild.googleapis.com - - # Deploying cloud functions is flaky. Retry until success. 
- while true; do - num_functions="$(gcloud -q --project $DEVSHELL_PROJECT_ID functions list | grep task | grep $REGION | wc -l)" - if [[ "${num_functions}" -eq "3" ]]; then - echo "All Cloud Functions deployed" - break - fi - gcloud -q --project $DEVSHELL_PROJECT_ID functions deploy gettasks --region $REGION --source ../tools/gcf_init/ --runtime nodejs14 --trigger-http --memory 256MB --timeout 60s - gcloud -q --project $DEVSHELL_PROJECT_ID functions deploy closetask --region $REGION --source ../tools/gcf_init/ --runtime nodejs14 --trigger-http --memory 256MB --timeout 60s - gcloud -q --project $DEVSHELL_PROJECT_ID functions deploy closetasks --region $REGION --source ../tools/gcf_init/ --runtime nodejs14 --trigger-http --memory 256MB --timeout 60s - done -fi - -# Deploy Datastore indexes -if [[ "$*" != *--no-datastore* ]] ; then - echo "Enabling Datastore API and deploying datastore index" - gcloud -q --project $DEVSHELL_PROJECT_ID services enable datastore.googleapis.com - # Enable App Engine - if [[ "$*" != *--no-appengine* ]] ; then - echo "Enabling App Engine" - gcloud -q --project $DEVSHELL_PROJECT_ID app create --region=$DATASTORE_REGION - fi - gcloud -q --project $DEVSHELL_PROJECT_ID datastore databases create --region=$DATASTORE_REGION - gcloud -q --project $DEVSHELL_PROJECT_ID datastore indexes create ../tools/gcf_init/index.yaml -fi - -# Create GKE cluster and authenticate to it -if [[ "$*" != *--no-cluster* ]] ; then - echo "Enabling Container API" - gcloud -q --project $DEVSHELL_PROJECT_ID services enable container.googleapis.com - if [[ "$*" != *--no-node-autoscale* ]] ; then - echo "Creating cluster $CLUSTER_NAME with a minimum node size of $CLUSTER_MIN_NODE_SIZE to scale up to a maximum node size of $CLUSTER_MAX_NODE_SIZE. Each node will be configured with a machine type $CLUSTER_MACHINE_TYPE and disk size of $CLUSTER_MACHINE_SIZE" - gcloud -q --project $DEVSHELL_PROJECT_ID container clusters create $CLUSTER_NAME --machine-type $CLUSTER_MACHINE_TYPE --disk-size $CLUSTER_MACHINE_SIZE --num-nodes $CLUSTER_MIN_NODE_SIZE --master-ipv4-cidr $VPC_CONTROL_PANE --network $VPC_NETWORK --zone $ZONE --shielded-secure-boot --shielded-integrity-monitoring --no-enable-master-authorized-networks --enable-private-nodes --enable-ip-alias --scopes "https://www.googleapis.com/auth/cloud-platform" --labels "turbinia-infra=true" --workload-pool=$DEVSHELL_PROJECT_ID.svc.id.goog --default-max-pods-per-node=20 --enable-autoscaling --min-nodes=$CLUSTER_MIN_NODE_SIZE --max-nodes=$CLUSTER_MAX_NODE_SIZE - else - echo "--no-node-autoscale specified. Node size will remain constant at $CLUSTER_MIN_NODE_SIZE node(s)" - echo "Creating cluster $CLUSTER_NAME with a node size of $CLUSTER_MIN_NODE_SIZE. Each node will be configured with a machine type $CLUSTER_MACHINE_TYPE and disk size of $CLUSTER_MACHINE_SIZE" - gcloud -q --project $DEVSHELL_PROJECT_ID container clusters create $CLUSTER_NAME --machine-type $CLUSTER_MACHINE_TYPE --disk-size $CLUSTER_MACHINE_SIZE --num-nodes $CLUSTER_MIN_NODE_SIZE --master-ipv4-cidr $VPC_CONTROL_PANE --network $VPC_NETWORK --zone $ZONE --shielded-secure-boot --shielded-integrity-monitoring --no-enable-master-authorized-networks --enable-private-nodes --enable-ip-alias --scopes "https://www.googleapis.com/auth/cloud-platform" --labels "turbinia-infra=true" --workload-pool=$DEVSHELL_PROJECT_ID.svc.id.goog --default-max-pods-per-node=20 - fi -else - echo "--no-cluster specified. 
Authenticating to pre-existing cluster $CLUSTER_NAME" -fi - -# Authenticate to cluster -gcloud -q --project $DEVSHELL_PROJECT_ID container clusters get-credentials $CLUSTER_NAME --zone $ZONE -# Create Kubernetes service account -kubectl get serviceaccounts $SA_NAME || kubectl create serviceaccount $SA_NAME --namespace default -gcloud iam service-accounts add-iam-policy-binding $SA_NAME@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member "serviceAccount:$DEVSHELL_PROJECT_ID.svc.id.goog[default/$SA_NAME]" -kubectl annotate serviceaccount $SA_NAME --overwrite --namespace default iam.gke.io/gcp-service-account=$SA_NAME@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com - -# Go to deployment folder to make changes files -cd $DEPLOYMENT_FOLDER - -# Add service account to deployments -sed -i -e "s//$SA_NAME/g" turbinia-server.yaml turbinia-worker.yaml - -# Disable some jobs -echo "Updating $TURBINIA_CONFIG with disabled jobs" -sed -i -e "s/^DISABLED_JOBS = .*$/DISABLED_JOBS = $DISABLED_JOBS/g" $TURBINIA_CONFIG - -# Update Turbinia config with project info -echo "Updating $TURBINIA_CONFIG config with project info" -sed -i -e "s/^INSTANCE_ID = .*$/INSTANCE_ID = '$INSTANCE_ID'/g" $TURBINIA_CONFIG -sed -i -e "s/^TURBINIA_PROJECT = .*$/TURBINIA_PROJECT = '$DEVSHELL_PROJECT_ID'/g" $TURBINIA_CONFIG -sed -i -e "s/^TURBINIA_ZONE = .*$/TURBINIA_ZONE = '$ZONE'/g" $TURBINIA_CONFIG -sed -i -e "s/^TURBINIA_REGION = .*$/TURBINIA_REGION = '$REGION'/g" $TURBINIA_CONFIG -sed -i -e "s/^CLOUD_PROVIDER = .*$/CLOUD_PROVIDER = 'GCP'/g" $TURBINIA_CONFIG - -# Create File Store instance and update deployment files with created instance -if [[ "$*" != *--no-filestore* ]] ; then - echo "Enabling GCP Filestore API" - gcloud -q --project $DEVSHELL_PROJECT_ID services enable file.googleapis.com - echo "Creating Filestore instance $FILESTORE_NAME with capacity $FILESTORE_CAPACITY" - gcloud -q --project $DEVSHELL_PROJECT_ID filestore instances create $FILESTORE_NAME --file-share=name=$FILESTORE_NAME,capacity=$FILESTORE_CAPACITY --zone=$ZONE --network=name=$VPC_NETWORK -else - echo "Using pre existing Filestore instance $FILESTORE_NAME with capacity $FILESTORE_CAPACITY" -fi - -echo "Updating $TURBINIA_CONFIG config with Filestore configuration" -FILESTORE_IP=$(gcloud -q --project $DEVSHELL_PROJECT_ID filestore instances describe $FILESTORE_NAME --zone=$ZONE --format='value(networks.ipAddresses)' --flatten="networks[].ipAddresses[]") -FILESTORE_LOGS="'\/mnt\/$FILESTORE_NAME\/logs'" -FILESTORE_OUTPUT="'\/mnt\/$FILESTORE_NAME\/output'" -sed -i -e "s//$FILESTORE_IP/g" turbinia-volume-filestore.yaml -sed -i -e "s/turbiniavolume/$FILESTORE_NAME/g" turbinia-volume-filestore.yaml turbinia-volume-claim-filestore.yaml turbinia-server.yaml turbinia-worker.yaml -sed -i -e "s/storage: .*/storage: $FILESTORE_CAPACITY/g" turbinia-volume-filestore.yaml turbinia-volume-claim-filestore.yaml -sed -i -e "s/^LOG_DIR = .*$/LOG_DIR = $FILESTORE_LOGS/g" $TURBINIA_CONFIG -sed -i -e "s/^MOUNT_DIR_PREFIX = .*$/MOUNT_DIR_PREFIX = '\/mnt\/turbinia'/g" $TURBINIA_CONFIG -sed -i -e "s/^SHARED_FILESYSTEM = .*$/SHARED_FILESYSTEM = True/g" $TURBINIA_CONFIG -sed -i -e "s/^OUTPUT_DIR = .*$/OUTPUT_DIR = $FILESTORE_OUTPUT/g" $TURBINIA_CONFIG - -#Create Google Cloud Storage Bucket -if [[ "$*" != *--no-gcs* ]] ; then - echo "Enabling GCS cloud storage" - gcloud -q --project $DEVSHELL_PROJECT_ID services enable storage-component.googleapis.com - echo "Creating GCS bucket gs://$INSTANCE_ID" - gsutil mb -l $REGION gs://$INSTANCE_ID -else - 
echo "--no-gcs specified. Using pre-existing GCS bucket $INSTANCE_ID" -fi - -echo "Updating $TURBINIA_CONFIG config with GCS bucket configuration" -sed -i -e "s/^GCS_OUTPUT_PATH = .*$/GCS_OUTPUT_PATH = 'gs:\/\/$INSTANCE_ID\/output'/g" $TURBINIA_CONFIG -sed -i -e "s/^BUCKET_NAME = .*$/BUCKET_NAME = '$INSTANCE_ID'/g" $TURBINIA_CONFIG - -# Create main PubSub Topic/Subscription -if [[ "$*" != *--no-pubsub* ]] ; then - echo "Enabling the GCP PubSub API" - gcloud -q --project $DEVSHELL_PROJECT_ID services enable pubsub.googleapis.com - echo "Creating PubSub topic $INSTANCE_ID" - gcloud -q --project $DEVSHELL_PROJECT_ID pubsub topics create $INSTANCE_ID - echo "Creating PubSub subscription $INSTANCE_ID" - gcloud -q --project $DEVSHELL_PROJECT_ID pubsub subscriptions create $INSTANCE_ID --topic=$INSTANCE_ID --ack-deadline=600 - - # Create internal PubSub PSQ Topic/Subscription - echo "Creating PubSub PSQ Topic $INSTANCE_ID-psq" - gcloud -q --project $DEVSHELL_PROJECT_ID pubsub topics create "$INSTANCE_ID-psq" - echo "Creating PubSub PSQ subscription $INSTANCE_ID-psq" - gcloud -q --project $DEVSHELL_PROJECT_ID pubsub subscriptions create "$INSTANCE_ID-psq" --topic="$INSTANCE_ID-psq" --ack-deadline=600 -else - echo "--no-pubsub specified. Using pre-existing PubSub topic/subscription $INSTANCE_ID and PSQ topic/subscription $INSTANCE_ID-psq" -fi - -# Update Turbinia config with PubSub parameters -echo "Updating $TURBINIA_CONFIG with PubSub config" -sed -i -e "s/^TASK_MANAGER = .*$/TASK_MANAGER = 'PSQ'/g" $TURBINIA_CONFIG -sed -i -e "s/^PUBSUB_TOPIC = .*$/PUBSUB_TOPIC = '$INSTANCE_ID'/g" $TURBINIA_CONFIG -sed -i -e "s/^PSQ_TOPIC = .*$/PSQ_TOPIC = '$INSTANCE_ID-psq'/g" $TURBINIA_CONFIG - -# Enable Stackdriver Logging and Stackdriver Traceback -echo "Enabling Cloud Error Reporting and Logging APIs" -gcloud -q --project $DEVSHELL_PROJECT_ID services enable clouderrorreporting.googleapis.com -gcloud -q --project $DEVSHELL_PROJECT_ID services enable logging.googleapis.com -echo "Updating $TURBINIA_CONFIG to enable Stackdriver Traceback and Logging" -sed -i -e "s/^STACKDRIVER_LOGGING = .*$/STACKDRIVER_LOGGING = True/g" $TURBINIA_CONFIG -sed -i -e "s/^STACKDRIVER_TRACEBACK = .*$/STACKDRIVER_TRACEBACK = True/g" $TURBINIA_CONFIG - -# Enable Prometheus -echo "Updating $TURBINIA_CONFIG to enable Prometheus application metrics" -sed -i -e "s/^PROMETHEUS_ENABLED = .*$/PROMETHEUS_ENABLED = True/g" $TURBINIA_CONFIG - -# Disable some jobs -echo "Updating $TURBINIA_CONFIG with disabled jobs" -sed -i -e "s/^DISABLED_JOBS = .*$/DISABLED_JOBS = $DISABLED_JOBS/g" $TURBINIA_CONFIG - -# Set appropriate docker image in deployment file if user specified -if [[ ! -z "$TURBINIA_SERVER_IMAGE" && ! -z "$TURBINIA_WORKER_IMAGE" ]] ; then - echo "Updating deployment files with docker image $TURBINIA_SERVER_IMAGE and $TURBINIA_WORKER_IMAGE" - sed -i -e "s/us-docker.pkg.dev\/osdfir-registry\/turbinia\/release\/turbinia-server:latest$/$TURBINIA_SERVER_IMAGE/g" turbinia-server.yaml - sed -i -e "s/us-docker.pkg.dev\/osdfir-registry\/turbinia\/release\/turbinia-worker:latest$/$TURBINIA_WORKER_IMAGE/g" turbinia-worker.yaml -fi - -# Deploy to cluster -echo "Deploying Turbinia to $CLUSTER_NAME cluster" -./setup-pubsub.sh $TURBINIA_CONFIG - -# Deploy Turbinia Controller -if [[ "$*" == *--deploy-controller* ]] ; then - echo "--deploy-controller specified. Deploying Turbinia controller." 
-  kubectl create -f turbinia-controller.yaml
-fi
-
-# Deploy dfDewey
-if [[ "$*" == *--deploy-dfdewey* ]] ; then
-  echo "Deploying dfDewey datastores to $CLUSTER_NAME cluster"
-  if [[ -z "$(gcloud -q --project $DEVSHELL_PROJECT_ID filestore instances list --format='value(name)' --filter=name:$FILESTORE_DFDEWEY_NAME)" ]] ; then
-    echo "Creating Filestore instance $FILESTORE_DFDEWEY_NAME with capacity $FILESTORE_DFDEWEY_CAPACITY"
-    gcloud -q --project $DEVSHELL_PROJECT_ID filestore instances create $FILESTORE_DFDEWEY_NAME --file-share=name=$FILESTORE_DFDEWEY_NAME,capacity=$FILESTORE_DFDEWEY_CAPACITY --zone=$ZONE --network=name=$VPC_NETWORK
-  else
-    echo "Using pre-existing Filestore instance $FILESTORE_DFDEWEY_NAME with capacity $FILESTORE_DFDEWEY_CAPACITY"
-  fi
-  FILESTORE_DFDEWEY_IP=$(gcloud -q --project $DEVSHELL_PROJECT_ID filestore instances describe $FILESTORE_DFDEWEY_NAME --zone=$ZONE --format='value(networks.ipAddresses)' --flatten="networks[].ipAddresses[]")
-  sed -i -e "s//$FILESTORE_DFDEWEY_NAME/g" dfdewey-volume-filestore.yaml
-  sed -i -e "s//$FILESTORE_DFDEWEY_IP/g" dfdewey-volume-filestore.yaml
-  sed -i -e "s//$FILESTORE_DFDEWEY_CAPACITY/g" dfdewey-volume-filestore.yaml dfdewey-volume-claim-filestore.yaml
-  sed -i -e "s//$FILESTORE_PG_PATH/g" postgres-server.yaml
-  sed -i -e "s//$FILESTORE_OS_PATH/g" opensearch-server.yaml
-
-  ./setup-dfdewey.sh $TURBINIA_CONFIG
-fi
-
-# Create backup of turbinia config file if it exists
-TURBINIA_OUT="$HOME/.turbiniarc"
-if [[ -a $TURBINIA_OUT ]] ; then
-  backup_file="${TURBINIA_OUT}.$( date +%s )"
-  mv $TURBINIA_OUT $backup_file
-  echo "Backing up old Turbinia config $TURBINIA_CONFIG to $backup_file"
-fi
-
-# Make a copy of Turbinia config in user home directory
-echo "Creating a copy of Turbinia config in $TURBINIA_OUT"
-cp $TURBINIA_CONFIG $TURBINIA_OUT
-
-echo "Turbinia GKE was successfully deployed!"
-echo "Authenticate via: gcloud container clusters get-credentials $CLUSTER_NAME --zone $ZONE"
\ No newline at end of file
diff --git a/k8s/tools/destroy-celery-gke.sh b/k8s/tools/destroy-celery-gke.sh
deleted file mode 100755
index 082e5cd11..000000000
--- a/k8s/tools/destroy-celery-gke.sh
+++ /dev/null
@@ -1,110 +0,0 @@
-#!/bin/bash
-# Turbinia GKE cleanup script for Celery configuration.
-# This script can be used to clean up the Turbinia Celery stack within GKE. Note that
-# this script will not disable any APIs to avoid an outage for any other applications
-# deployed within the project.
-# Requirements:
-# - have 'gcloud' installed.
-# - authenticate against your GCP project with "gcloud auth login"
-# - the account used to run this script should have the instance.admin and container.admin IAM roles in order to delete resources.
-# - optionally have the GCP project set with "gcloud config set project [your-project-name]"
-#
-# Use --help to show the supported commands.
-
-set -o posix
-set -e
-
-# Source cluster config to pull specs to create cluster from. Please review
-# the config file and ensure the parameters are set to the cluster you are
-# intending to clean up
-DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
-source $DIR/.clusterconfig
-cd $DIR/.. 
- -if [[ "$*" == *--help || "$*" == *-h ]] ; then - echo "Turbinia cleanup script for Turbinia within Kubernetes" - echo "Options:" - echo "--no-service-account Do not delete the Turbinia service account" - echo "--no-filestore Do not cleanup Turbinia Filestore share" - echo "--no-dfdewey Do not cleanup dfDewey Filestore share" - echo "--no-cluster Do not delete the cluster" - exit 1 -fi - -# Before proceeding, prompt user to confirm deletion -echo "This script is going to do a lot of destructive/irrecoverable actions such as deleting all output, logs, and GCP resources. " -echo -n "Please enter in 'delete all' if you'd like to proceed: " -read response -if [[ $response != "delete all" ]] ; then - echo "'delete all' not specified. Exiting." - exit 1 -fi - -# Check configured gcloud project -if [[ -z "$DEVSHELL_PROJECT_ID" ]] ; then - DEVSHELL_PROJECT_ID=$(gcloud config get-value project) - ERRMSG="ERROR: Could not get configured project. Please either restart " - ERRMSG+="Google Cloudshell, or set configured project with " - ERRMSG+="'gcloud config set project PROJECT' when running outside of Cloudshell." - if [[ -z "$DEVSHELL_PROJECT_ID" ]] ; then - echo $ERRMSG - exit 1 - fi - echo "Environment variable \$DEVSHELL_PROJECT_ID was not set at start time " - echo "so attempting to get project config from gcloud config." - echo -n "Do you want to use $DEVSHELL_PROJECT_ID as the target project? (y / n) > " - read response - if [[ $response != "y" && $response != "Y" ]] ; then - echo $ERRMSG - exit 1 - fi -fi - -# Use either service account or local `gcloud auth` credentials. -if [[ "$*" == *--no-gcloud-auth* ]] ; then - export GOOGLE_APPLICATION_CREDENTIALS=~/$INSTANCE_ID.json -# TODO: Do real check to make sure credentials have adequate roles -elif [[ $( gcloud -q --project $DEVSHELL_PROJECT_ID auth list --filter="status:ACTIVE" --format="value(account)" | wc -l ) -eq 0 ]] ; then - echo "No gcloud credentials found. Use 'gcloud auth login' and 'gcloud auth application-default login' to log in" - exit 1 -fi - -# Delete the cluster -if [[ "$*" != *--no-cluster* ]] ; then - echo "Deleting cluster $CLUSTER_NAME" - gcloud -q --project $DEVSHELL_PROJECT_ID container clusters delete $CLUSTER_NAME --zone $ZONE -fi - -# Delete the Filestore instance -if [[ "$*" != *--no-filestore* ]] ; then - echo "Deleting Filestore instance $FILESTORE_NAME" - gcloud -q --project $DEVSHELL_PROJECT_ID filestore instances delete $FILESTORE_NAME --zone $ZONE -fi -# Delete the dfDewey Filestore instance -if [[ "$*" != *--no-dfdewey* ]] ; then - if [[ -z "$(gcloud -q --project $DEVSHELL_PROJECT_ID filestore instances list --format='value(name)' --filter=name:$FILESTORE_DFDEWEY_NAME)" ]] ; then - echo "Filestore instance $FILESTORE_DFDEWEY_NAME does not exist" - else - echo "Deleting Filestore instance $FILESTORE_DFDEWEY_NAME" - gcloud -q --project $DEVSHELL_PROJECT_ID filestore instances delete $FILESTORE_DFDEWEY_NAME --zone $ZONE - fi -fi - -# Remove the service account if it was being used. 
-if [[ "$*" != *--no-service-account* ]] ; then - SA_NAME="turbinia" - SA_MEMBER="serviceAccount:$SA_NAME@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com" - - # Delete IAM roles from the service account - echo "Delete permissions on service account" - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/compute.instanceAdmin' - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/logging.logWriter' - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/errorreporting.writer' - gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/iam.serviceAccountUser' - - # Delete service account - echo "Delete service account" - gcloud -q --project $DEVSHELL_PROJECT_ID iam service-accounts delete "${SA_NAME}@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com" -fi - -echo "The Turbinia deployment $INSTANCE_ID was succesfully removed from $DEVSHELL_PROJECT_ID" \ No newline at end of file diff --git a/k8s/tools/destroy-pubsub-gke.sh b/k8s/tools/destroy-pubsub-gke.sh deleted file mode 100755 index 361e2534a..000000000 --- a/k8s/tools/destroy-pubsub-gke.sh +++ /dev/null @@ -1,161 +0,0 @@ -#!/bin/bash -# Turbinia GKE cleanup script -# This script can be used to cleanup the Turbinia stack within GKE PubSub. Note that -# this script will not disable any APIs to avoid outage with any other applications -# deployed within the project. -# Requirements: -# - have 'gcloud'installed. -# - authenticate against your GCP project with "gcloud auth login" -# - optionally have the GCP project set with "gcloud config set project [you-project-name]" -# -# Use --help to show you commands supported. - -set -o posix -set -e - -# Source cluster config to pull specs to create cluster from. Please review -# the config file and ensure the parameters are set to the cluster you are -# intending to cleanup -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" -source $DIR/.clusterconfig -cd $DIR/.. - -if [[ "$*" == *--help ]] ; then - echo "Turbinia cleanup script for Turbinia within Kubernetes" - echo "Options:" - echo "--no-service-account Do not delete the Turbinia service account" - echo "--no-cloudfunctions Do not cleanup Turbinia Cloud Functions" - echo "--no-datastore Do not cleanup Turbinia Datastore" - echo "--no-filestore Do not cleanup Turbinia Filestore share" - echo "--no-dfdewey Do not cleanup dfDewey Filestore share" - echo "--no-gcs Do not delete the GCS bucket" - echo "--no-pubsub Do not delete the PubSub and PSQ topic/subscription" - echo "--no-cluster Do not delete the cluster" - exit 1 -fi - -# Before proceeding, prompt user to confirm deletion -echo "This script is going to do a lot of destructive/irrecoverable actions such as deleting all output, logs, and GCP resources. " -echo -n "Please enter in 'delete all' if you'd like to proceed: " -read response -if [[ $response != "delete all" ]] ; then - echo "'delete all' not specified. Exiting." - exit 1 -fi - -# Check configured gcloud project -if [[ -z "$DEVSHELL_PROJECT_ID" ]] ; then - DEVSHELL_PROJECT_ID=$(gcloud config get-value project) - ERRMSG="ERROR: Could not get configured project. Please either restart " - ERRMSG+="Google Cloudshell, or set configured project with " - ERRMSG+="'gcloud config set project PROJECT' when running outside of Cloudshell." 
- if [[ -z "$DEVSHELL_PROJECT_ID" ]] ; then - echo $ERRMSG - exit 1 - fi - echo "Environment variable \$DEVSHELL_PROJECT_ID was not set at start time " - echo "so attempting to get project config from gcloud config." - echo -n "Do you want to use $DEVSHELL_PROJECT_ID as the target project? (y / n) > " - read response - if [[ $response != "y" && $response != "Y" ]] ; then - echo $ERRMSG - exit 1 - fi -fi - -# Use either service account or local `gcloud auth` credentials. -if [[ "$*" == *--no-gcloud-auth* ]] ; then - export GOOGLE_APPLICATION_CREDENTIALS=~/$INSTANCE_ID.json -# TODO: Do real check to make sure credentials have adequate roles -elif [[ $( gcloud -q --project $DEVSHELL_PROJECT_ID auth list --filter="status:ACTIVE" --format="value(account)" | wc -l ) -eq 0 ]] ; then - echo "No gcloud credentials found. Use 'gcloud auth login' and 'gcloud auth application-default login' to log in" - exit 1 -fi - -# Delete the cluster -if [[ "$*" != *--no-cluster* ]] ; then - echo "Deleting cluster $CLUSTER_NAME" - gcloud -q --project $DEVSHELL_PROJECT_ID container clusters delete $CLUSTER_NAME --zone $ZONE -fi - -# Delete the GCS storage bucket -if [[ "$*" != *--no-gcs* ]] ; then - echo "Deleting GCS storage bucket gs://$INSTANCE_ID" - gsutil -q rm -r gs://$INSTANCE_ID -fi - -# Delete PubSub topics -if [[ "$*" != *--no-pubsub* ]] ; then - echo "Deleting PubSub topic $INSTANCE_ID" - gcloud -q --project $DEVSHELL_PROJECT_ID pubsub topics delete $INSTANCE_ID - gcloud -q --project $DEVSHELL_PROJECT_ID pubsub topics delete "$INSTANCE_ID-psq" - - # Delete PubSub subscriptions - gcloud -q --project $DEVSHELL_PROJECT_ID pubsub subscriptions delete $INSTANCE_ID - gcloud -q --project $DEVSHELL_PROJECT_ID pubsub subscriptions delete "$INSTANCE_ID-psq" -fi - -# Delete the Filestore instance -if [[ "$*" != *--no-filestore* ]] ; then - echo "Deleting Filestore instance $FILESTORE_NAME" - gcloud -q --project $DEVSHELL_PROJECT_ID filestore instances delete $FILESTORE_NAME --zone $ZONE -fi -# Delete the dfDewey Filestore instance -if [[ "$*" != *--no-dfdewey* ]] ; then - if [[ -z "$(gcloud -q --project $DEVSHELL_PROJECT_ID filestore instances list --format='value(name)' --filter=name:$FILESTORE_DFDEWEY_NAME)" ]] ; then - echo "Filestore instance $FILESTORE_DFDEWEY_NAME does not exist" - else - echo "Deleting Filestore instance $FILESTORE_DFDEWEY_NAME" - gcloud -q --project $DEVSHELL_PROJECT_ID filestore instances delete $FILESTORE_DFDEWEY_NAME --zone $ZONE - fi -fi - -# Remove cloud functions -if [[ "$*" != *--no-cloudfunctions* ]] ; then - echo "Delete Google Cloud functions" - if gcloud functions --project $DEVSHELL_PROJECT_ID list | grep gettasks; then - gcloud -q --project $DEVSHELL_PROJECT_ID functions delete gettasks --region $REGION - fi - if gcloud functions --project $DEVSHELL_PROJECT_ID list | grep closetask; then - gcloud -q --project $DEVSHELL_PROJECT_ID functions delete closetask --region $REGION - fi - if gcloud functions --project $DEVSHELL_PROJECT_ID list | grep closetasks; then - gcloud -q --project $DEVSHELL_PROJECT_ID functions delete closetasks --region $REGION - fi -fi - -# Cleanup Datastore indexes -if [[ "$*" != *--no-datastore* ]] ; then - echo "Cleaning up Datastore indexes" - gcloud -q --project $DEVSHELL_PROJECT_ID datastore indexes cleanup ../tools/gcf_init/index.yaml -fi - -# Remove the service account if it was being used. 
-if [[ "$*" == *--no-service-account* ]] ; then - SA_NAME="turbinia" - SA_MEMBER="serviceAccount:$SA_NAME@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com" - - # Delete IAM roles from the service account - echo "Delete permissions on service account" - gcloud projects remove-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/cloudfunctions.admin' - gcloud projects remove-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/editor' - gcloud projects remove-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/cloudsql.admin' - gcloud projects remove-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/datastore.indexAdmin' - gcloud projects remove-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/logging.logWriter' - gcloud projects remove-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/errorreporting.writer' - gcloud projects remove-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/pubsub.admin' - gcloud projects remove-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/servicemanagement.admin' - gcloud projects remove-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/storage.admin' - gcloud projects remove-iam-policy-binding $DEVSHELL_PROJECT_ID --member=$SA_MEMBER --role='roles/compute.admin' - - # Delete service account - echo "Delete service account" - gcloud -q --project $DEVSHELL_PROJECT_ID iam service-accounts delete "${SA_NAME}@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com" - - # Remove the service account key - echo "Remove service account key" - rm ~/$TURBINIA_INSTANCE.json - -fi - -echo "The Turbinia deployment $INSTANCE_ID was succesfully removed from $DEVSHELL_PROJECT_ID" \ No newline at end of file diff --git a/k8s/tools/update-gke-infra.sh b/k8s/tools/update-gke-infra.sh deleted file mode 100755 index 4b29ef4cd..000000000 --- a/k8s/tools/update-gke-infra.sh +++ /dev/null @@ -1,305 +0,0 @@ -#!/bin/bash -# Turbinia GKE management script -# This script can be used to manage a Turbinia stack deployed to GKE. -# Requirements: -# - have 'gcloud' and 'kubectl' installed. -# - authenticate against your GCP project with "gcloud auth login" -# - authenticate against your GKE cluster with "gcloud container clusters get-credentials [cluster-name]> --zone [zone] --project [project-name]" -# - optionally have the GCP project set with "gcloud config set project [you-project-name]" -# -# Use --help to show you commands supported. 
- -set -o posix -set -e - -SERVER_URI="us-docker.pkg.dev/osdfir-registry/turbinia/release/turbinia-server" -WORKER_URI="us-docker.pkg.dev/osdfir-registry/turbinia/release/turbinia-worker" -GCLOUD=`command -v gcloud` -KUBECTL=`command -v kubectl` -LOG_MODE=all -LOG_LINES=10 - - -function usage { - echo "Usage: $0" - echo "-c Choose one of the commands below" - echo - echo "Optional arguments:" - echo "-n The cluster name" - echo "-s The desired number of nodes in the cluster" - echo "-t Docker image tag, eg latest or 20210606" - echo "-T When executing logs command, show last N log lines (tail) for each node" - echo "-H When executing logs command, show first N log lines (head) for each node" - echo "-f Path to Turbinia configuration file" - echo "-k Environment variable name" - echo "-v Environment variable value" - echo - echo "Commands supported:" - echo "change-image Change the docker image loaded by a Turbinia deployment with DOCKER_TAG, use -t" - echo "logs Display logs of a Turbinia server or worker. Use -T or -H to show tail/head of logs for all pods" - echo "show-config Write the Turbinia configuration of an instance to STDOUT" - echo "status Show the running status of server and workers" - echo "cordon Cordon a cluster (Cordoning nodes is a Kubernetes mechanism to mark a node as “unschedulable”.)" - echo "uncordon Uncordon a cluster (Cordoning nodes is a Kubernetes mechanism to mark a node as “unschedulable”.)" - echo "update-config Update the Turbinia configuration of a Turbinia deployment from CONFIG_FILE, use -f" - echo "update-env Update an environment variable on a container, use -k and -v" - echo "resize-cluster Resize the number of nodes in the cluster." - echo "update-latest Update the Turbinia worker and server deployments to latest docker image." - echo -} - -function check_gcloud { - if [ -z $GCLOUD ] - then - echo "gcloud not found, please install first" - exit 1 - fi -} - -function check_kubectl { - if [ -z $KUBECTL ] - then - echo "kubectl not found, please install first" - exit 1 - fi -} - -function show_infra { - $KUBECTL get pod -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName -} - -function show_nodes { - $KUBECTL get nodes -} - -function get_nodes { - NODES=$($KUBECTL get nodes --output=jsonpath={.items..metadata.name}) -} - -function get_pods { - PODS=$($KUBECTL get pod --output=jsonpath={.items..metadata.name}) -} - -function cordon { - echo "Note this does not stop a cluster. Please resize the cluster to zero to prevent being billed." - # Show status - show_nodes - - # Cordon all nodes - get_nodes - - for NODE in $NODES - do - $KUBECTL cordon $NODE - done - - # Show status - show_nodes -} - -function uncordon { - # Show status - show_nodes - - # Uncordon all nodes - get_nodes - - for NODE in $NODES - do - $KUBECTL uncordon $NODE - done - - # Show status - show_nodes -} - -function show_container_logs { - show_infra - read -p 'Which container name? 
' CONTAINER_NAME - $KUBECTL logs $CONTAINER_NAME -} - -function show_container_logs_all { - get_pods - for POD in $PODS - do - echo "Logs for pod $POD:" - echo "------------------" - $KUBECTL logs $POD | $LOG_MODE -n $LOG_LINES - done -} - -function show_config { - echo "Pulling Turbinia configuration from ConfigMap: turbinia-config" - $KUBECTL get configmap turbinia-config -o json | jq '.data.TURBINIA_CONF' | xargs | base64 -d -} - -function update_config { - CONFIG_BASE64=`cat $CONFIG_FILE | base64 -w 0` - # Update ConfigMap with new Turbinia config - $KUBECTL create configmap turbinia-config --from-literal=TURBINIA_CONF=$CONFIG_BASE64 -o yaml --dry-run=client | $KUBECTL replace -f - - rollout_restart -} - -function show_deployment { - $KUBECTL get deployments -} - -function update_env { - show_deployment - - echo "Going to set environment variable $ENVKEY to $ENVVALUE" - read -p 'Which deployment? ' DEPLOYMENT - - # Update the deployment - $KUBECTL set env deployment/$DEPLOYMENT $ENVKEY=$ENVVALUE - -} - -function rollout_restart { - DEPLOYMENTS=$(kubectl get deployments --output=jsonpath={.items..metadata.name}) - - # rollout each deployment - for DEPLOYMENT in $DEPLOYMENTS - do - $KUBECTL rollout restart deployment/$DEPLOYMENT - done - - # Show status - for DEPLOYMENT in $DEPLOYMENTS - do - $KUBECTL rollout status deployment/$DEPLOYMENT - done -} - -function resize_cluster { - echo "Resizing cluster $CLUSTER_NAME to $CLUSTER_SIZE nodes." - read -p 'WARNING: This will delete nodes as well as any associated data on the node. Do you wish to continue? (yes/no) ' ANS - - if [ "$ANS" == "yes" ] ; then - $GCLOUD container clusters resize $CLUSTER_NAME --num-nodes $CLUSTER_SIZE - else - echo "Please enter yes if you'd like to resize the cluster. Exiting..." - exit 0 - fi -} - -function update_docker_image_tag { - echo "Updating the following deployments with docker tag $DOCKER_TAG" - show_deployment - - # Update the turbinia-server deployment - $KUBECTL set image deployment/turbinia-server server=$SERVER_URI:$DOCKER_TAG - - # Update the turbinia-worker deployment - $KUBECTL set image deployment/turbinia-worker worker=$WORKER_URI:$DOCKER_TAG - - # Restart Turbinia Server/Worker Deployments so changes can apply - rollout_restart -} - -while getopts ":c:H:n:s:t:T:f:v:k:" option; do - case ${option} in - c ) - CMD=$OPTARG;; - H ) - LOG_MODE="head" - LOG_LINES=$OPTARG;; - n ) - CLUSTER_NAME=$OPTARG;; - s ) - CLUSTER_SIZE=$OPTARG;; - t ) - DOCKER_TAG=$OPTARG;; - T ) - LOG_MODE="tail" - LOG_LINES=$OPTARG;; - f ) - CONFIG_FILE=$OPTARG;; - k ) - ENVKEY=$OPTARG;; - v ) - ENVVALUE=$OPTARG;; - \? ) - echo "Error: Invalid usage" - usage - exit 1 - exit;; - esac -done -shift $((OPTIND -1)) - -# check whether user had supplied -h or --help . 
If yes display usage -if [[ ( $# == "--help") || $# == "-h" ]] -then - usage - exit 0 -fi - -if [ -z ${CMD} ]; then - echo "Error: Please provide a command (-c)" - usage - exit 1 -fi - -# check if the gcloud and kubectl binary is present -check_gcloud -check_kubectl - -echo "Running against GCP project:" -$GCLOUD config list project - - -case $CMD in - status) - show_infra - ;; - logs) - if [ $LOG_MODE == "tail" ] || [ $LOG_MODE == "head" ] ; then - show_container_logs_all - else - show_container_logs - fi - ;; - cordon) - cordon - ;; - uncordon) - uncordon - ;; - show-config) - show_config - ;; - update-config) - if [ -z ${CONFIG_FILE} ]; then - echo "Error: No configuration file provided" - usage - exit 1 - fi - update_config - ;; - update-env) - if [ -z ${ENVKEY} ] || [ -z ${ENVVALUE} ] ; then - echo "Error: No key or value set to update environment variable (use -k and -v)" - usage - exit 1 - fi - update_env - ;; - change-image) - if [ -z ${DOCKER_TAG} ]; then - echo "Error: No Docker image tag provided" - usage - exit 1 - fi - update_docker_image_tag - ;; - resize-cluster) - if [ -z ${CLUSTER_NAME} ] || [ -z ${CLUSTER_SIZE} ] ; then - echo "Error: No cluster name or cluster size provided" - usage - exit 1 - fi - resize_cluster - ;; -esac diff --git a/monitoring/grafana/provisioning/dashboards/turbinia.yaml b/monitoring/grafana/provisioning/dashboards/turbinia.yaml deleted file mode 100644 index 048b438ef..000000000 --- a/monitoring/grafana/provisioning/dashboards/turbinia.yaml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: 1 -providers: - - name: 'Turbinia' - orgId: 1 - folder: '' - folderUid: '' - type: file - disableDeletion: false - updateIntervalSeconds: 10 - allowUiUpdates: true - options: - path: /etc/grafana/dashboards - foldersFromFilesStructure: true \ No newline at end of file diff --git a/monitoring/grafana/provisioning/datasources/prometheus.yaml b/monitoring/grafana/provisioning/datasources/prometheus.yaml deleted file mode 100644 index 453bd4d9b..000000000 --- a/monitoring/grafana/provisioning/datasources/prometheus.yaml +++ /dev/null @@ -1,6 +0,0 @@ -apiVersion: 1 -datasources: - - name: Prometheus - type: prometheus - access: Server - url: http://:9090 \ No newline at end of file diff --git a/monitoring/k8s/gen-yaml.sh b/monitoring/k8s/gen-yaml.sh deleted file mode 100755 index 2d84ce567..000000000 --- a/monitoring/k8s/gen-yaml.sh +++ /dev/null @@ -1,22 +0,0 @@ -#!/bin/sh -# Turbinia Monitoring generation script -# Please use this script to properly configure -# the .yaml files required for Turbinia k8s deployment. 
- -# Temporarily save results to this file -TMPSED='tmpsed.json' - -# Turbinia app metrics dashboard -sed -e 's/^/ /' ../grafana/dashboards/turbinia-application-metrics.json > $TMPSED -sed -e "/@@JSONDATA@@/{r $TMPSED" -e ' d}' -i grafana/turbinia-application-metrics.yaml - -# Turbinia health check metrics dashboard -sed -e 's/^/ /' ../grafana/dashboards/turbinia-health-check.json > $TMPSED -sed -e "/@@JSONDATA@@/{r $TMPSED" -e ' d}' -i grafana/turbinia-healthcheck-metrics.yaml - -# Prometheus Alerting -sed -e 's/^/ /' ../prometheus/prometheus.rules.yml > $TMPSED -sed -e "/@@JSONDATA@@/{r $TMPSED" -e ' d}' -i prometheus/turbinia-custom-rules.yaml - -# Remove temp file when done -rm $TMPSED \ No newline at end of file diff --git a/monitoring/k8s/grafana/turbinia-application-metrics.yaml b/monitoring/k8s/grafana/turbinia-application-metrics.yaml deleted file mode 100644 index 101607ea5..000000000 --- a/monitoring/k8s/grafana/turbinia-application-metrics.yaml +++ /dev/null @@ -1,17 +0,0 @@ -apiVersion: v1 -items: -- apiVersion: v1 - data: - turbinia-application-metrics.json: |- - @@JSONDATA@@ - - kind: ConfigMap - metadata: - labels: - app.kubernetes.io/component: grafana - app.kubernetes.io/name: grafana - app.kubernetes.io/part-of: kube-prometheus - app.kubernetes.io/version: 7.5.4 - name: turbinia-application-metrics - namespace: monitoring -kind: ConfigMapList \ No newline at end of file diff --git a/monitoring/k8s/grafana/turbinia-healthcheck-metrics.yaml b/monitoring/k8s/grafana/turbinia-healthcheck-metrics.yaml deleted file mode 100644 index 271bb2ff8..000000000 --- a/monitoring/k8s/grafana/turbinia-healthcheck-metrics.yaml +++ /dev/null @@ -1,17 +0,0 @@ -apiVersion: v1 -items: -- apiVersion: v1 - data: - turbinia-healthcheck-metrics.json: |- - @@JSONDATA@@ - - kind: ConfigMap - metadata: - labels: - app.kubernetes.io/component: grafana - app.kubernetes.io/name: grafana - app.kubernetes.io/part-of: kube-prometheus - app.kubernetes.io/version: 7.5.4 - name: turbinia-healthcheck-metrics - namespace: monitoring -kind: ConfigMapList \ No newline at end of file diff --git a/monitoring/k8s/prometheus/prometheus-additional.yaml b/monitoring/k8s/prometheus/prometheus-additional.yaml deleted file mode 100644 index e62f51451..000000000 --- a/monitoring/k8s/prometheus/prometheus-additional.yaml +++ /dev/null @@ -1,7 +0,0 @@ -- job_name: "turbinia-auto-discover" - scrape_interval: 15s - kubernetes_sd_configs: - - role: pod - namespaces: - names: - - default \ No newline at end of file diff --git a/monitoring/k8s/prometheus/turbinia-custom-rules.yaml b/monitoring/k8s/prometheus/turbinia-custom-rules.yaml deleted file mode 100644 index 1f6624166..000000000 --- a/monitoring/k8s/prometheus/turbinia-custom-rules.yaml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: monitoring.coreos.com/v1 -kind: PrometheusRule -metadata: - labels: - app.kubernetes.io/component: exporter - app.kubernetes.io/name: kube-prometheus - app.kubernetes.io/part-of: kube-prometheus - prometheus: k8s - role: alert-rules - name: turbinia-custom-rules - namespace: monitoring -spec: - @@JSONDATA@@ \ No newline at end of file diff --git a/monitoring/prometheus/prometheus.yaml b/monitoring/prometheus/prometheus.yaml deleted file mode 100644 index 81979def6..000000000 --- a/monitoring/prometheus/prometheus.yaml +++ /dev/null @@ -1,26 +0,0 @@ -global: - scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute. - # scrape_timeout is set to the global default (10s). 
- external_labels: - environment: turbinia-gcp_node - -rule_files: - - '/etc/prometheus/prometheus.rules.yml' - -scrape_configs: - - job_name: 'turbinia-gcp' - gce_sd_configs: - # The GCP Project - - project: '' - zone: '' - filter: labels.turbinia-prometheus=true - refresh_interval: 120s - port: 9100 - - job_name: 'turbinia-app' - gce_sd_configs: - # The GCP Project - - project: '' - zone: '' - filter: labels.turbinia-prometheus=true - refresh_interval: 120s - port: 9200 \ No newline at end of file