The module deploys a multi-node Elasticsearch cluster.
The module requires several additional components to provision the Elasticsearch cluster:

- At least two subnets to place the load balancer and autoscaling group in.
- A Route53 zone, where the module creates an HTTPS endpoint for the cluster.

The easiest way to create subnets in AWS is to use the Service Network Terraform module.
A typical configuration includes at least two public and two private subnets.
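The example below references an availability zones data source. A minimal declaration (an assumption made for this example, not part of the module itself) might look like this:

```hcl
# Sketch: look up the availability zones referenced by the service-network example below.
data "aws_availability_zones" "available" {
  state = "available"
}
```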
module "service-network" {
source = "infrahouse/service-network/aws"
version = "~> 3.2"
service_name = "elastic"
vpc_cidr_block = "10.1.0.0/16"
management_cidr_block = "10.1.0.0/16"
subnets = [
{
cidr = "10.1.0.0/24"
availability-zone = data.aws_availability_zones.available.names[0]
map_public_ip_on_launch = true
create_nat = true
forward_to = null
},
{
cidr = "10.1.1.0/24"
availability-zone = data.aws_availability_zones.available.names[1]
map_public_ip_on_launch = true
create_nat = true
forward_to = null
},
{
cidr = "10.1.2.0/24"
availability-zone = data.aws_availability_zones.available.names[0]
map_public_ip_on_launch = false
create_nat = false
forward_to = "10.1.0.0/24"
},
{
cidr = "10.1.3.0/24"
availability-zone = data.aws_availability_zones.available.names[1]
map_public_ip_on_launch = false
create_nat = false
forward_to = "10.1.1.0/24"
}
]
}The module will create an A record for the cluster in a specified zone.
If the cluster name (passed as `var.cluster_name`) is `elastic`, the client URL
will be https://elastic.ci-cd.infrahouse.com.
The zone can be created in the same Terraform module or accessed as a data source.
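If the zone is managed in the same configuration, a minimal sketch might look like this (the zone name is taken from the data source example that follows):

```hcl
# Sketch: manage the DNS zone in the same Terraform configuration
# instead of referencing an existing one.
resource "aws_route53_zone" "cicd" {
  name = "ci-cd.infrahouse.com"
}
```

Alternatively, reference an existing zone as a data source: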
data "aws_route53_zone" "cicd" {
name = "ci-cd.infrahouse.com"
}Any new cluster needs to be bootstrapped first. Let's say we want to create a three node cluster.
Declare the cluster and add `bootstrap_mode = true` to the module inputs.
In bootstrap mode, the autoscaling group starts with one node rather than three.
module "test" {
source = "registry.infrahouse.com/infrahouse/elasticsearch/aws"
version = "3.11.0"
providers = {
aws = aws
aws.dns = aws
}
internet_gateway_id = module.service-network.internet_gateway_id
key_pair_name = aws_key_pair.test.key_name
subnet_ids = module.service-network.subnet_public_ids
zone_id = data.aws_route53_zone.cicd.zone_id
bootstrap_mode = true
}After the cluster is bootstrapped, disable the bootstrap mode.
```diff
diff --git a/test_data/test_module/main.tf b/test_data/test_module/main.tf
index c13df0d..33cf0d3 100644
--- a/test_data/test_module/main.tf
+++ b/test_data/test_module/main.tf
@@ -12,5 +12,5 @@ module "test" {
   subnet_ids          = module.service-network.subnet_private_ids
   zone_id             = data.aws_route53_zone.cicd.zone_id
-  bootstrap_mode      = true
+  bootstrap_mode      = false
 }
```

The module creates HTTPS endpoints to access different parts of the Elasticsearch cluster. All endpoints are available as output variables.
- Cluster endpoint: `https://${var.cluster_name}.${data.aws_route53_zone.cluster.name}`
  - Primary endpoint for general cluster access
  - Points to master nodes
- Master nodes: `https://${var.cluster_name}-master.${data.aws_route53_zone.cluster.name}`
  - Direct access to master nodes
  - Used for cluster management operations
- Data nodes: `https://${var.cluster_name}-data.${data.aws_route53_zone.cluster.name}`
  - Direct access to data nodes
  - Used for search and indexing operations

All endpoints use HTTPS with automatically provisioned SSL certificates.
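As a minimal sketch (assuming the module instance from the bootstrap example above is named `test`), the endpoints can be re-exported by the calling configuration through the module outputs:

```hcl
# Sketch: surface the Elasticsearch endpoints from the calling configuration.
output "elasticsearch_cluster_url" {
  description = "HTTPS endpoint of the Elasticsearch cluster"
  value       = module.test.cluster_url
}

output "elasticsearch_data_url" {
  description = "HTTPS endpoint of the Elasticsearch data nodes"
  value       = module.test.cluster_data_url
}
```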
## Requirements

| Name | Version |
|---|---|
| terraform | ~> 1.5 |
| aws | >= 5.11, < 7.0 |
| random | ~> 3.6 |
| tls | ~> 4.0 |
## Providers

| Name | Version |
|---|---|
| aws | >= 5.11, < 7.0 |
| aws.dns | >= 5.11, < 7.0 |
| random | ~> 3.6 |
| tls | ~> 4.0 |
## Modules

| Name | Source | Version |
|---|---|---|
| ca_cert_secret | registry.infrahouse.com/infrahouse/secret/aws | ~> 1.0 |
| ca_key_secret | registry.infrahouse.com/infrahouse/secret/aws | ~> 1.0 |
| elastic-password | registry.infrahouse.com/infrahouse/secret/aws | 1.1.0 |
| elastic_cluster | registry.infrahouse.com/infrahouse/website-pod/aws | 5.8.2 |
| elastic_cluster_data | registry.infrahouse.com/infrahouse/website-pod/aws | 5.8.2 |
| elastic_data_userdata | registry.infrahouse.com/infrahouse/cloud-init/aws | 2.2.2 |
| elastic_master_userdata | registry.infrahouse.com/infrahouse/cloud-init/aws | 2.2.2 |
| kibana_system-password | registry.infrahouse.com/infrahouse/secret/aws | 1.1.0 |
| update-dns | registry.infrahouse.com/infrahouse/update-dns/aws | 0.11.1 |
| update-dns-data | registry.infrahouse.com/infrahouse/update-dns/aws | 0.11.1 |
## Inputs

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| asg_ami | Image for EC2 instances | string | null | no |
| asg_create_initial_lifecycle_hook | Used for migration from version 1.* | bool | true | no |
| asg_health_check_grace_period | ASG will wait up to this number of seconds for an instance to become healthy | number | 900 | no |
| bootstrap_mode | Set this to true if the cluster is to be bootstrapped | bool | true | no |
| cluster_data_count | Number of data nodes in the cluster | number | 3 | no |
| cluster_master_count | Number of master nodes in the cluster | number | 3 | no |
| cluster_name | How to name the cluster | string | "elastic" | no |
| data_nodes_root_volume_size | Root volume size of a data EC2 instance, in gigabytes | number | 30 | no |
| environment | Name of environment. | string | "development" | no |
| extra_files | Additional files to create on an instance. | list(object({...})) | [] | no |
| extra_instance_profile_permissions | A JSON with a permissions policy document. The policy will be attached to the ASG instance profile. | string | null | no |
| extra_repos | Additional APT repositories to configure on an instance. | map(...) | {} | no |
| idle_timeout_data | The amount of time a client or target connection can be idle before the load balancer (that fronts data nodes) closes it. | number | 4000 | no |
| idle_timeout_master | The amount of time a client or target connection can be idle before the load balancer (that fronts master nodes) closes it. | number | 4000 | no |
| instance_type | Instance type to run the elasticsearch node | string | "t3.medium" | no |
| instance_type_data | Instance type to run the elasticsearch data node. If null, use var.instance_type. | string | null | no |
| instance_type_master | Instance type to run the elasticsearch master node. If null, use var.instance_type. | string | null | no |
| internet_gateway_id | Not used, but an AWS Internet Gateway must be present. Ensure that by passing its ID. | string | n/a | yes |
| key_pair_name | SSH keypair name to be deployed in EC2 instances | string | n/a | yes |
| master_nodes_root_volume_size | Root volume size of a master EC2 instance, in gigabytes | number | null | no |
| max_instance_lifetime_days | The maximum amount of time, in _days_, that an instance can be in service. Must be either 0 or between 7 and 365 days. | number | 0 | no |
| monitoring_cidr_block | CIDR range that is allowed to monitor elastic instances. | string | null | no |
| packages | List of packages to install when the instance bootstraps. | list(string) | [] | no |
| puppet_debug_logging | Enable debug logging if true. | bool | false | no |
| puppet_environmentpath | A path for directory environments. | string | "{root_directory}/environments" | no |
| puppet_hiera_config_path | Path to hiera configuration file. | string | "{root_directory}/environments/{environment}/hiera.yaml" | no |
| puppet_manifest | Path to puppet manifest. By default ih-puppet will apply {root_directory}/environments/{environment}/manifests/site.pp. | string | null | no |
| puppet_module_path | Path to common puppet modules. | string | "{root_directory}/environments/{environment}/modules:{root_directory}/modules" | no |
| secret_elastic_readers | List of role ARNs that will have permissions to read the elastic superuser secret. | list(string) | null | no |
| smtp_credentials_secret | AWS secret name with SMTP credentials. The secret must contain a JSON with user and password keys. | string | null | no |
| snapshot_bucket_prefix | A string prefix to a bucket name for snapshots. Random by default. | string | null | no |
| snapshot_force_destroy | Destroy S3 bucket with Elasticsearch snapshots even if non-empty | bool | false | no |
| sns_topic_alarm_arn | ARN of SNS topic for CloudWatch alarms on base EC2 instance. | string | null | no |
| ssh_cidr_block | CIDR range that is allowed to SSH into the elastic instances. | string | "0.0.0.0/0" | no |
| subnet_ids | List of subnet IDs where the elasticsearch instances will be created | list(string) | n/a | yes |
| ubuntu_codename | Ubuntu version to use for the elasticsearch node | string | "jammy" | no |
| zone_id | Domain name zone ID where the website will be available | string | n/a | yes |
## Outputs

| Name | Description |
|---|---|
| cluster_data_load_balancer_arn | ARN of the load balancer for the cluster data nodes |
| cluster_data_ssl_listener_arn | ARN of the SSL listener on the cluster data load balancer |
| cluster_data_target_group_arn | ARN of the target group for the cluster data nodes |
| cluster_data_url | HTTPS endpoint to access the cluster data nodes |
| cluster_master_load_balancer_arn | ARN of the load balancer for the cluster masters |
| cluster_master_ssl_listener_arn | ARN of the SSL listener on the cluster masters load balancer |
| cluster_master_target_group_arn | ARN of the target group for the cluster master nodes |
| cluster_master_url | HTTPS endpoint to access the cluster masters |
| cluster_url | HTTPS endpoint to access the cluster |
| data_instance_role_arn | Data node EC2 instance profile will have this role ARN |
| elastic_password | Password for Elasticsearch superuser elastic. |
| elastic_secret_id | AWS secret that stores password for user elastic. |
| idle_timeout_data | The amount of time a client or target connection can be idle before the load balancer (that fronts data nodes) closes it. |
| idle_timeout_master | The amount of time a client or target connection can be idle before the load balancer (that fronts master nodes) closes it. |
| kibana_system_password | A password of kibana_system user |
| kibana_system_secret_id | AWS secret that stores password for user kibana_system |
| master_instance_role_arn | Master node EC2 instance profile will have this role ARN |
| snapshots_bucket | AWS S3 Bucket where Elasticsearch snapshots will be stored. |