Operator to provision resources such as Postgres, Redis and storage for you, either in-cluster or through a cloud provider such as Amazon AWS.
This operator depends on the Cloud Credential Operator for creating certain resources such as Amazon AWS Credentials. If using the AWS provider, ensure the Cloud Credential Operator is running.
Note: This operator is in the very early stages of development. There will be bugs and regular breaking changes.
| Cloud Resource | OpenShift | AWS |
| --- | --- | --- |
| Blob Storage | ❌ | ✔️ |
| Redis | ✔️ | ✔️ |
| PostgreSQL | ✔️ | ✔️ |
| SMTP | ❌ | ✔️ |
Prerequisites:

- go
- make
- git-secrets - for preventing cloud-provider credentials being included in commits
Ensure you are running at least Go 1.13:

```
$ go version
go version go1.13 darwin/amd64
```

If not, ensure Go Modules are enabled.
Clone this repository into your working directory, outside of `$GOPATH`. For example:

```
$ cd ~/dev
$ git clone git@github.com:integr8ly/cloud-resource-operator.git
```
Seed the Kubernetes/OpenShift cluster with required resources:

```
$ make cluster/prepare
```

Run the operator:

```
$ make run
```

Clean up the Kubernetes/OpenShift cluster:

```
$ make cluster/clean
```
Currently, AWS resources are deployed into a Virtual Private Cloud (VPC) separate from the VPC the cluster is deployed into. In order for the two to communicate, a peering connection must be established between the two VPCs. To do this:
- Create a new peering connection between the two VPCs:
  - Go to `VPC > Peering Connections`.
  - Click on `Create Peering Connection`.
  - NOTE: This is a two-way communication channel, so only one connection needs to be created.
  - Select the newly created connection, then click `Actions > Accept Request` to accept the peering request.
- Edit your cluster's VPC route table:
  - Go to `VPC > Your VPCs`.
  - Find the VPC your cluster is using and click on its route table under the heading `Main Route Table`.
    - Your cluster VPC name is usually in the format `<cluster-name>-<uid>-vpc`.
  - Select the cluster route table and click `Actions > Edit routes`.
  - Add a new route with the following details:
    - Destination: `<resource VPC CIDR block>`
    - Target: `<newly created peering connection>`
  - Click on `Save routes`.
- Edit the resource VPC route table:
  - Go to `VPC > Your VPCs`.
  - Find the VPC the AWS resources are provisioned in and click on its route table under the heading `Main Route Table`.
    - The name of this VPC is usually empty or `default`.
  - Select the resource route table and click `Actions > Edit routes`.
  - Add a new route with the following details:
    - Destination: `<your cluster's VPC CIDR block>`
    - Target: `<newly created peering connection>`
  - Click on `Save routes`.
- Edit the Security Groups associated with the resource VPC to ensure database and cache traffic can pass between the two VPCs.
The two VPCs should now be able to communicate with each other.
The Cloud Resource Operator supports taking arbitrary snapshots in the AWS provider for both `Postgres` and `Redis`. To take a snapshot you must create a `RedisSnapshot` or `PostgresSnapshot` resource, which should reference the `Redis` or `Postgres` resource you wish to create a snapshot of. The snapshot resource must also exist in the same namespace.
```yaml
apiVersion: integreatly.org/v1alpha1
kind: RedisSnapshot
metadata:
  name: my-redis-snapshot
spec:
  # The redis resource name for the snapshot you want to take
  resourceName: my-redis-resource
```
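A `PostgresSnapshot` follows the same pattern. A minimal sketch, assuming the spec mirrors the Redis example above (the resource names here are illustrative):

```yaml
apiVersion: integreatly.org/v1alpha1
kind: PostgresSnapshot
metadata:
  # Illustrative name for the snapshot resource
  name: my-postgres-snapshot
spec:
  # The Postgres resource you want to take a snapshot of,
  # in the same namespace as this snapshot resource
  resourceName: my-postgres-resource
```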
Note: You may experience some downtime in the resource during the creation of the snapshot.
The Cloud Resource Operator continuously reconciles, using the strat-config as the source of truth for the state of the provisioned resources. Should these resources drift from the expected state, the operator will update them to match the expected state.

There can be circumstances where a provisioned resource needs to be altered. If this is the case, add `skipCreate: true` to the resource's CR `spec`. This will cause the operator to skip creating or updating the resource.
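As an illustration, a minimal sketch of a CR `spec` with reconciliation skipped, assuming `skipCreate` sits alongside the other spec fields shown in the Postgres example later in this document:

```yaml
spec:
  secretRef:
    name: example-postgres-sec
  tier: production
  type: managed
  # Tell the operator to skip creating or updating this resource
  skipCreate: true
```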
The operator expects two configmaps to exist in the namespace it is watching. These configmaps provide the configuration needed to outline the deployment methods and strategies used when provisioning cloud resources.
The `cloud-resource-config` configmap defines which provider should be used to provision a specific resource type. Different deployment types can contain different `resource type > provider` mappings.
An example can be seen here.
For example, a `workshop` deployment type might choose to deploy a Postgres resource type in-cluster (`openshift`), while a `managed` deployment type might choose AWS to deploy an RDS instance instead.
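To illustrate the shape this configmap might take, here is a minimal sketch; the deployment type keys and the resource-type-to-provider mappings are assumptions for illustration, and the linked example is the authoritative format:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-resource-config
data:
  # Each key is a deployment type; its value maps resource types to providers
  managed: |
    {"blobstorage": "aws", "redis": "aws", "postgres": "aws"}
  workshop: |
    {"blobstorage": "openshift", "redis": "openshift", "postgres": "openshift"}
```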
A configmap object is expected to exist for each provider (currently `aws` or `openshift`) that will be used by the operator. This configmap contains information about how to deploy a particular resource type, such as blob storage, with that provider. In the Cloud Resource Operator, this provider-specific configuration is called a strategy. An example of an AWS strategy configmap can be seen here.
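As a rough sketch of what a strategy configmap can look like (the configmap name, tier key and strategy fields below are assumptions for illustration; refer to the linked example for the authoritative format):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Assumed name for illustration
  name: cloud-resources-aws-strategies
data:
  # Each key is a resource type; its value maps deployment tiers to
  # provider-specific create/delete configuration
  postgres: |
    {
      "production": {
        "region": "",
        "createStrategy": {},
        "deleteStrategy": {}
      }
    }
```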
With the provider and strategy configmaps in place, cloud resources can be provisioned by creating a custom resource object for the desired resource type. An example of a Postgres custom resource can be seen here.
Each custom resource contains:

- A `secretRef`, containing the name of the secret that will be created by the operator with connection details to the resource
- A `tier`, in this case `production`, which means a production-worthy Postgres instance will be deployed
- A `type`, in this case `managed`, which will resolve to a cloud provider specified in the `cloud-resource-config` configmap
```yaml
spec:
  # i want my postgres storage information output in a secret named `example-postgres-sec`
  secretRef:
    name: example-postgres-sec
  # i want a postgres storage of a production-level tier
  tier: production
  # i want a postgres storage for the type managed
  type: managed
```
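Put together, a complete Postgres custom resource might look like the following sketch; the metadata name is illustrative and the `apiVersion` is assumed to match the snapshot example above:

```yaml
apiVersion: integreatly.org/v1alpha1
kind: Postgres
metadata:
  # Illustrative name for the custom resource
  name: example-postgres
spec:
  # Connection details will be written to a secret named `example-postgres-sec`
  secretRef:
    name: example-postgres-sec
  # Deployment tier for the resource
  tier: production
  # Deployment type, resolved to a provider via the cloud-resource-config configmap
  type: managed
```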
Postgres, Redis and BlobStorage resources are tagged with the following key-value pairs:

```
integreatly.org/clusterId: #clusterid
integreatly.org/product-name: #rhmi component product name
integreatly.org/resource-type: #managed/workshop
integreatly.org/resource-name: #postgres/redis/blobstorage
```
AWS resources can be queried via the AWS CLI using the cluster id, as in the following example:

```
# clusterid aucunnin-ch5dc
aws resourcegroupstaggingapi get-resources --tag-filters Key=integreatly.org/clusterId,Values=aucunnin-ch5dc | jq
```
To run e2e tests from a built image:

```
$ make test/e2e/image IMAGE=<<built image>>
```

To run e2e tests locally:

```
$ make test/e2e/local
```

To run unit tests:

```
$ make test/unit
```
- Write tests
- Implement changes
- Run the code fixer: `make code/fix`
- Run tests: `make test/unit`
- Make a PR
Update the operator version in the following files:

- Update `version/version.go` (`Version = "<version>"`)
- Update `VERSION` and `PREV_VERSION` (the previous version) in the Makefile
- Generate a new cluster service version: `make gen/csv`

Commit the changes and open a pull request. When the PR is accepted, create a new release tag.
- Provider - A service on which a resource type is provisioned, e.g. `aws`, `openshift`
- Resource type - Something that can be requested from the operator via a custom resource, e.g. `blobstorage`, `redis`
- Resource - The result of a resource type created via a provider, e.g. `S3 Bucket`, `Azure Blob`
- Deployment type - Groups mappings of resource types to providers (see here), e.g. `managed`, `workshop`. This provides a layer of abstraction which allows the end user to not be concerned with which provider is used to deploy the desired resource.
- Deployment tier - Provides a layer of abstraction which allows the end user to request a resource of a certain level (for example, a `production`-worthy Postgres instance) without being concerned with provider-specific deployment details (such as storage capacity).
There are a few design philosophies for the Cloud Resource Operator:

- Each resource type (e.g. `BlobStorage`, `Postgres`) should have its own controller
- The end user should be abstracted from explicitly specifying how the resource is provisioned by default
  - Which cloud provider the resource should be provisioned on should be handled in pre-created config objects
- The end user should not be abstracted from which provider was used to provision the resource once it's available
  - If a user requests `BlobStorage`, they should be made aware it was created on Amazon AWS
- Deletion of a custom resource should result in the deletion of the resource in the cloud provider