Clone the repository:
git clone https://github.com/MaterializeInc/terraform-provider-materialize.git
cd terraform-provider-materialize
Compile the provider:
make install
The documentation is generated from the provider's schema. To generate the documentation, run:
make docs
To run the unit tests, run:
make test
To run the acceptance tests, which simulate running Terraform commands, you will need to set the necessary environment variables and start the Docker Compose stack:
# Start all containers
docker compose up -d --build
Add the following to your hosts file so that the provider can connect to the mock services:
127.0.0.1 materialized frontegg cloud
You can then run the acceptance tests:
make testacc
To run the full integration project, set the necessary environment variables and start Docker Compose as you did for the acceptance tests. Then, to interact with the provider, you can run:
# SaaS tests
docker exec provider terraform init
docker exec provider terraform apply -auto-approve -compact-warnings
docker exec provider terraform plan -detailed-exitcode
docker exec provider terraform destroy -auto-approve -compact-warnings
# Self-hosted tests
docker exec --workdir /usr/src/app/integration/self_hosted provider terraform init
docker exec --workdir /usr/src/app/integration/self_hosted provider terraform apply -auto-approve -compact-warnings
docker exec --workdir /usr/src/app/integration/self_hosted provider terraform plan -detailed-exitcode
docker exec --workdir /usr/src/app/integration/self_hosted provider terraform destroy -auto-approve -compact-warnings
Note: You might have to delete the integration/.terraform, integration/.terraform.lock.hcl, and integration/terraform.tfstate* files before running the tests. If you are running the self-hosted tests, delete the integration/self_hosted/.terraform, integration/self_hosted/.terraform.lock.hcl, and integration/self_hosted/terraform.tfstate* files instead.
Terraform has detailed logs that you can enable by setting the TF_LOG environment variable to any value. Enabling this setting causes detailed logs to appear on stderr.
If you add a feature in Materialize, eventually it will need to be added to the Terraform provider. Here is a quick guide on how to update the provider.
Say we wanted to add size to clusters.
In the materialize package, find the corresponding resource. Within that file, add the new field to the builder:
type ClusterBuilder struct {
	ddl               Builder
	clusterName       string
	replicationFactor int
	size              string // Add new field
}
You can then update the Create method and, if necessary, add a method for handling any updates.
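To illustrate the builder change, here is a simplified, self-contained sketch. The names, the Size setter, and the exact DDL shape are illustrative assumptions, not the provider's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// ClusterBuilder is a pared-down stand-in for the provider's builder type.
type ClusterBuilder struct {
	clusterName string
	size        string // Add new field
}

// Size records the new option on the builder so Create can emit it.
func (b *ClusterBuilder) Size(s string) *ClusterBuilder {
	b.size = s
	return b
}

// Create assembles the DDL, appending the SIZE clause only when it was set.
func (b *ClusterBuilder) Create() string {
	var q strings.Builder
	fmt.Fprintf(&q, `CREATE CLUSTER %q`, b.clusterName)
	if b.size != "" {
		fmt.Fprintf(&q, ` SIZE '%s'`, b.size)
	}
	q.WriteString(";")
	return q.String()
}

func main() {
	b := &ClusterBuilder{clusterName: "analytics"}
	b.Size("xsmall")
	fmt.Println(b.Create())
	// CREATE CLUSTER "analytics" SIZE 'xsmall';
}
```

The real builder executes the statement against the database handle rather than returning a string, but the pattern is the same: store the new field on the struct, then have Create conditionally include it in the generated SQL.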
Next you can update the query that Terraform will run to find that feature:
type ClusterParams struct {
	ClusterId   sql.NullString `db:"id"`
	ClusterName sql.NullString `db:"name"`
	Managed     sql.NullBool   `db:"managed"`
	Size        sql.NullString `db:"size"` // Add new field
}
var clusterQuery = NewBaseQuery(`
	SELECT
		mz_clusters.id,
		mz_clusters.name,
		mz_clusters.managed,
		mz_clusters.size -- Add new field
	FROM mz_clusters`)
After you update the query, you will also need to update the mock query in the testhelpers package so that the tests pass.
In the resources package, find the corresponding resource. Within that file, add the new attribute to the Terraform schema:
var clusterSchema = map[string]*schema.Schema{
	"name": ObjectNameSchema("cluster", true, true),
	"size": {
		Description: "The size of the cluster.",
		Type:        schema.TypeString,
		Optional:    true,
	},
	"region": RegionSchema(),
}
You can then update the read context clusterRead:
if err := d.Set("size", s.Size.String); err != nil {
	return diag.FromErr(err)
}
And the create context clusterCreate:
if v, ok := d.GetOk("size"); ok {
	b.Size(v.(string))
}
If the resource can be updated, we would also have to change the update context clusterUpdate:
if d.HasChange("size") {
	metaDb, _, err := utils.GetDBClientFromMeta(meta, d)
	if err != nil {
		return diag.FromErr(err)
	}
	_, newSize := d.GetChange("size")
	b := materialize.NewClusterBuilder(metaDb, o)
	if err := b.Resize(newSize.(string)); err != nil {
		return diag.FromErr(err)
	}
}
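A Resize method like the one called above could be sketched as follows. This is a simplified, self-contained assumption: the real method runs the statement via the database handle instead of returning it, and the statement shown assumes Materialize's ALTER CLUSTER ... SET (SIZE ...) syntax:

```go
package main

import "fmt"

// ClusterBuilder is a pared-down stand-in for the provider's builder type.
type ClusterBuilder struct {
	clusterName string
}

// Resize emits the ALTER statement that changes a managed cluster's size.
func (b *ClusterBuilder) Resize(newSize string) string {
	return fmt.Sprintf(`ALTER CLUSTER %q SET (SIZE = '%s');`, b.clusterName, newSize)
}

func main() {
	b := &ClusterBuilder{clusterName: "analytics"}
	fmt.Println(b.Resize("small"))
	// ALTER CLUSTER "analytics" SET (SIZE = 'small');
}
```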
In the datasources package, find the corresponding resource. Within that file, add the new field to the Schema for Cluster:
Schema: map[string]*schema.Schema{
	"clusters": {
		Type:        schema.TypeList,
		Computed:    true,
		Description: "The clusters in the account",
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"id": {
					Type:     schema.TypeString,
					Computed: true,
				},
				"name": {
					Type:     schema.TypeString,
					Computed: true,
				},
				"size": { // Add new field
					Type:     schema.TypeString,
					Computed: true,
				},
			},
		},
	},
	"region": {
		Type:     schema.TypeString,
		Computed: true,
	},
},
And finally, update the mapping in clusterRead:
for _, p := range dataSource {
	clusterMap := map[string]interface{}{}
	clusterMap["id"] = p.ClusterId.String
	clusterMap["name"] = p.ClusterName.String
	clusterMap["size"] = p.Size.String // Add new field
	clusterFormats = append(clusterFormats, clusterMap)
}
To cut a new release of the provider, create a new tag and push that tag. This will trigger a GitHub Action to generate the artifacts necessary for the Terraform Registry.
git tag -a vX.Y.Z -m vX.Y.Z
git push origin vX.Y.Z