@@ -8,11 +8,13 @@ SPDX-License-Identifier: CC-BY-4.0
To follow this guide, you will need:

- 1. Obtained a Kubernetes Cluster: For manual/local effort, generally a Kind
- cluster is sufficient and can be used. For detailed information about Kind see
+ 1. A Kubernetes Cluster: For manual/local effort, generally a Kind cluster
+ is sufficient and can be used. For detailed information about Kind see
[this repo]. An alternative way to obtain a cluster is [k3d].
- 2. [Go 1.21+] installed and configured.
- 3. [Terraform v1.5.5+] installed locally.
+ 2. [Go] installed and configured. Check the provider repo you will be working
+ with and install the version in the `go.mod` file.
+ 3. [Terraform v1.5.5] installed locally. This is the last version we used
+ before the license change.

4. [goimports] installed.

# Adding a New Resource
@@ -21,13 +23,17 @@ There are long and detailed guides showing [how to bootstrap a
provider][provider-guide] and [how to configure resources][config-guide]. Here
we will go over the steps that will take us to `v1beta1` quality.

- 1. Fork the provider to which you will add resources and create a feature
+ 1. Fork the provider repo to which you will add resources and create a feature
branch.

- 2. Go to the Terraform Registry page of the resource you will add, we will add
+ 2. Go to the Terraform Registry page of the resource you will add. We will add
the resource [`aws_redshift_endpoint_access`] as an example in this guide.
+ We will use this page in the following steps, especially for determining the
+ external name configuration and identifying conflicting fields.

3. Determine the resource's external name configuration:
+ Our external name configuration relies on the Terraform ID format of the
+ resource, which we find in the import section on the Terraform Registry page.
Here we'll look for clues about how the Terraform ID is shaped so that we can
infer the external name configuration. In this case, there is an `endpoint_name`
argument seen under the `Argument Reference` section and when we look at
@@ -37,11 +43,12 @@ is the same as the `endpoint_name` argument. This means that we can use
external name config. See the [External Name Cases] section to see how you can
infer the configuration for many different shapes of Terraform ID.
- 4. Check if the resource is an SDK resource or Framework resource from the
- [source code], for SDK resources, you will see a comment line like
- `// @SDKResource` in the source code:
+ 4. Check if the resource is a Terraform Plugin SDK resource or a Terraform Plugin
+ Framework resource from the [source code].

- - The `aws_redshift_endpoint_access` resource is an SDK resource, go to
+ - For SDK resources, you will see a comment line like `// @SDKResource` in the
+ source code.
+ The `aws_redshift_endpoint_access` resource is an SDK resource, so go to
`config/externalname.go` and add the following line to the
`TerraformPluginSDKExternalNameConfigs` table:

@@ -97,11 +104,10 @@ Untracked files:
see whether any of the fields are represented as separate resources as well.
It usually goes like this:

- ```
- Routes can be defined either directly on the azurerm_iothub
- resource, or using the azurerm_iothub_route resource - but the two cannot be
- used together.
- ```
+ > Routes can be defined either directly on the azurerm_iothub
+ > resource, or using the azurerm_iothub_route resource - but the two cannot be
+ > used together.
+

In such cases, the field should be moved to status since we prefer to
represent it only as a separate CRD. Go ahead and add a configuration block
for that resource similar to the following:
@@ -214,10 +220,10 @@ new PR: `git push --set-upstream origin add-redshift-endpoint-access`
# Testing Instructions

- While configuring resources, the testing effort is the longest part. Because the
+ While configuring resources, the testing effort is the longest part, because the
characteristics of cloud providers and services can change. This test effort can
be executed in two main ways. The first one is testing the resources in a
- manual way and the second one is using the `Uptest` which is an automated test
+ manual way and the second one is using [Uptest], which is an automated test
tool for Official Providers. `Uptest` provides a framework to test resources in
an end-to-end pipeline during the resource configuration process. Together with
the example manifest generation tool, it allows us to avoid manual interventions
@@ -268,17 +274,56 @@ some resources in the tests of the relevant group via an annotation:
upjet.upbound.io/manual-intervention: "The Certificate needs to be provisioned successfully which requires a real domain."
```

- The key is important for skipping, we are checking this
- `upjet.upbound.io/manual-intervention` annotation key and if is in there, we
+ The key is important for skipping. We are checking this
+ `upjet.upbound.io/manual-intervention` annotation key and if it is in there, we
skip the related resource. The value is also important to see why we skip this
resource.
275
281
276
282
```
277
283
NOTE: For resources that are ignored during Automated Tests, manual testing is a
278
- must. Because we need to make sure that all resources published in the ` v1beta1 `
284
+ must, because we need to make sure that all resources published in the ` v1beta1 `
279
285
version is working.
280
286
```
281
287
288
+ ### Running Uptest locally
+
+ For a faster feedback loop, you might want to run `uptest` locally in your
+ development setup. For this, you can use the e2e make target available in
+ the provider repositories. This target requires the following environment
+ variables to be set:
+
+ - `UPTEST_CLOUD_CREDENTIALS`: cloud credentials for the provider being tested.
+ - `UPTEST_EXAMPLE_LIST`: a comma-separated list of examples to test.
+ - `UPTEST_DATASOURCE_PATH`: optional; see [Injecting Dynamic Values (and Datasource)]
+
+ You can check the e2e target in the Makefile for each provider. Let's check the [target]
+ in provider-upjet-aws and run a test for the resource `examples/ec2/v1beta1/vpc.yaml`.
+
+ - You can either save your credentials in a file as stated in the target's [comments],
+ or you can do it by adding your credentials to the command below.
+
+ ```console
+ export UPTEST_CLOUD_CREDENTIALS="DEFAULT='[default]
+ aws_access_key_id = <YOUR-ACCESS_KEY_ID>
+ aws_secret_access_key = <YOUR-SECRET_ACCESS_KEY>'"
+ ```
+
+ ```console
+ export UPTEST_EXAMPLE_LIST="examples/ec2/v1beta1/vpc.yaml"
+ ```
+
+ After setting the above environment variables, run `make e2e`. If the test is
+ successful, you will see a log like the one below. Please add this log to the
+ PR description:
+
+ ```console
+ --- PASS: kuttl (37.41s)
+ --- PASS: kuttl/harness (0.00s)
+ --- PASS: kuttl/harness/case (36.62s)
+ PASS
+ 14:02:30 [ OK ] running automated tests
+ ```
+
## Manual Test

Configured resources can be tested by using manual methods. This method generally
@@ -313,8 +358,9 @@ spot problems and fix them.
testing. There are 3 steps we need to verify in manual tests: `Apply`, `Import`,
`Delete`.

- **Apply:** We need to apply the example manifest to the cluster.
+ ### Apply

+ We need to apply the example manifest to the cluster.

```bash
kubectl apply -f examples/redshift/v1beta1/endpointaccess.yaml
@@ -382,7 +428,7 @@ You should see the output below:
When all of the fields are `True`, the `Apply` test was successfully completed!

- **Import**
+ ### Import

There are a few steps to perform the import test; here we will stop the provider,
delete the status conditions, and check the conditions when we re-run the provider.
@@ -400,7 +446,7 @@ they are the same
The import test was successful when the above conditions were met.

- **Delete**
+ ### Delete

Make sure the resource has been successfully deleted by running the following
command:
@@ -519,13 +565,13 @@ You can see example usages in the big three providers below.
For `aws_glue_user_defined_function`, we see that the `name` argument is used
to name the resource and the import instructions read as follows:

- ```
- Glue User Defined Functions can be imported using the
- `catalog_id:database_name:function_name`. If you have not set a Catalog ID
- specify the AWS Account ID that the database is in, e.g.,
+ > Glue User Defined Functions can be imported using the
+ > `catalog_id:database_name:function_name`. If you have not set a Catalog ID
+ > specify the AWS Account ID that the database is in, e.g.,
+
+ > $ terraform import aws_glue_user_defined_function.func
+ 123456789012:my_database:my_func

- $ terraform import aws_glue_user_defined_function.func 123456789012:my_database:my_func
- ```

Our configuration would look like the following:

@@ -557,11 +603,9 @@ identifier as Terraform ID.
For `azurerm_mariadb_firewall_rule`, we see that the `name` argument is used to
name the resource and the import instructions read as follows:

- ```
- MariaDB Firewall rules can be imported using the resource ID, e.g.
-
- terraform import azurerm_mariadb_firewall_rule.rule1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.DBforMariaDB/servers/server1/firewallRules/rule1
- ```
+ > MariaDB Firewall rules can be imported using the resource ID, e.g.
+ >
+ > `terraform import azurerm_mariadb_firewall_rule.rule1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.DBforMariaDB/servers/server1/firewallRules/rule1`

Our configuration would look like the following:

@@ -586,15 +630,15 @@ identifier as Terraform ID.
For `google_container_cluster`, we see that the `name` argument is used to name
the resource and the import instructions read as follows:

- ```console
- GKE clusters can be imported using the project, location, and name.
- If the project is omitted, the default provider value will be used.
- Examples:
-
- $ terraform import google_container_cluster.mycluster projects/my-gcp-project/locations/us-east1-a/clusters/my-cluster
- $ terraform import google_container_cluster.mycluster my-gcp-project/us-east1-a/my-cluster
- $ terraform import google_container_cluster.mycluster us-east1-a/my-cluster
- ```
+ > GKE clusters can be imported using the project, location, and name.
+ > If the project is omitted, the default provider value will be used.
+ > Examples:
+ >
+ > ```console
+ > $ terraform import google_container_cluster.mycluster projects/my-gcp-project/locations/us-east1-a/clusters/my-cluster
+ > $ terraform import google_container_cluster.mycluster my-gcp-project/us-east1-a/my-cluster
+ > $ terraform import google_container_cluster.mycluster us-east1-a/my-cluster
+ > ```

In cases where there are multiple ways to construct the ID, we should take the
one with the fewest parameters so that we rely only on required fields because
@@ -673,8 +717,8 @@ detailed guide could also help you.
[this repo]: https://github.com/kubernetes-sigs/kind
[k3d]: https://k3d.io/
- [Go 1.21+]: https://go.dev/doc/install
- [Terraform v1.5.5+]: https://developer.hashicorp.com/terraform/install
+ [Go]: https://go.dev/doc/install
+ [Terraform v1.5.5]: https://developer.hashicorp.com/terraform/install
[goimports]: https://pkg.go.dev/golang.org/x/tools/cmd/goimports
[provider-guide]: https://github.com/upbound/upjet/blob/main/docs/generating-a-provider.md
[config-guide]: https://github.com/crossplane/upjet/blob/main/docs/configuring-a-resource.md
@@ -703,3 +747,7 @@ detailed guide could also help you.
[`aws_route`]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route
[route-impl]: https://github.com/upbound/provider-aws/blob/8b3887c91c4b44dc14e1123b3a5ae1a70e0e45ed/config/externalname.go#L172
[This section]: #external-name-cases
+ [Injecting Dynamic Values (and Datasource)]: https://github.com/crossplane/uptest?tab=readme-ov-file#injecting-dynamic-values-and-datasource
+ [target]: https://github.com/crossplane-contrib/provider-upjet-aws/blob/e4b8f222a4baf0ea37caf1d348fe109bf8235dc2/Makefile#L257
+ [comments]: https://github.com/crossplane-contrib/provider-upjet-aws/blob/e4b8f222a4baf0ea37caf1d348fe109bf8235dc2/Makefile#L259
+ [Uptest]: https://github.com/crossplane/uptest