subcollection | copyright | lastupdated | lasttested | content-type | services | account-plan | completion-time | use-case
---|---|---|---|---|---|---|---|---
solution-tutorials | | 2024-01-05 | 2024-10-04 | tutorial | vpc, transit-gateway, direct-link, dns-svcs, cloud-databases, databases-for-postgresql | paid | 2h | ApplicationModernization, Cybersecurity, VirtualPrivateCloud
{{site.data.keyword.attribute-definition-list}}
{: #vpc-transit2} {: toc-content-type="tutorial"} {: toc-services="vpc, transit-gateway, direct-link, dns-svcs, cloud-databases, databases-for-postgresql"} {: toc-completion-time="2h"}
This tutorial may incur costs. Use the Cost Estimator to generate a cost estimate based on your projected usage. {: tip}
A Virtual Private Cloud (VPC) provides network isolation and security in the {{site.data.keyword.cloud_notm}}. A VPC can be a building block that encapsulates a corporate division (marketing, development, accounting, ...) or a collection of microservices owned by a DevSecOps team. VPCs can be connected to an on-premises enterprise and to each other. This may create the need to route traffic through centralized firewall-gateway appliances. This tutorial walks through the implementation of a hub and spoke architecture depicted in this high-level view:
{: caption="Figure 1. Architecture diagram of the tutorial" caption-side="bottom"} {: style="text-align: center;"}
This is part two of a two-part tutorial. This part focuses on routing all traffic between VPCs through a transit hub firewall-router. A scalable firewall-router using a Network Load Balancer is discussed and implemented. Private DNS is used both for microservice identification and for {{site.data.keyword.cloud_notm}} service instance identification through a virtual private endpoint gateway.
This tutorial is stand-alone, so it is not required to execute the steps in part one. If you are not familiar with VPC, network IP layout and planning in the {{site.data.keyword.cloud_notm}}, {{site.data.keyword.tg_short}}, {{site.data.keyword.BluDirectLink}}, or asymmetric routing, consider reading through part one.
The hub and spoke model supports a number of different scenarios:
- The hub can be the repository for shared micro services used by spokes and enterprise.
- The hub can be a central point of traffic firewalling and routing between the enterprise and the cloud.
- The hub can monitor all or some of the traffic - spoke <-> spoke, spoke <-> transit, or spoke <-> enterprise.
- The hub can hold the VPN resources that are shared by the spokes.
- The hub can be the repository for shared cloud resources, like databases, accessed through virtual private endpoint gateways controlled with VPC security groups and subnet access control lists, shared by spokes and enterprise.
There is a companion GitHub repository{: external} that divides the connectivity into a number of incremental layers. In the tutorial, these thin layers introduce bite-size challenges and solutions.
The following will be explored:
- VPC egress and ingress routing.
- Virtual Network Functions in combination with a Network Load Balancer to support high availability and scalability.
- Virtual private endpoint gateways.
- DNS resolution.
A layered architecture will introduce resources and demonstrate connectivity. Each layer adds additional connectivity and resources. The layers are implemented in Terraform, and it is possible to change parameters, like the number of zones, by changing a Terraform variable. A layered approach allows the tutorial to introduce small problems and demonstrate a solution in the context of a complete architecture. {: shortdesc}
{: #vpc-transit2-objectives}
- Understand the concepts behind a VPC based hub and spoke model for managing all VPC to VPC traffic.
- Understand VPC ingress and egress routing.
- Identify and optionally resolve asymmetric routing issues.
- Understand the use of a Network Load Balancer for a highly available and scalable firewall-router.
- Utilize the DNS service routing and forwarding rules to build an architecturally sound name resolution system.
{: #vpc-transit2-prereqs}
This tutorial requires:
- `terraform` to use Infrastructure as Code to provision resources,
- `python` to optionally run the pytest commands,
- Implementing a firewall-router will require that you enable IP spoofing checks.
See the prerequisites{: external} for a few options including a Dockerfile to easily create the prerequisite environment.
In addition:
- Check for user permissions. Be sure that your user account has sufficient permissions to create and manage all the resources in this tutorial. See the list of required permissions.
{: #vpc-transit2-summary-of-part-one}
In part one of this tutorial, we carefully planned the address space of the transit and spoke VPCs. The zone-based architecture is shown below:
{: caption="Zones" caption-side="bottom"} {: style="text-align: center;"}
This diagram shows the traffic flow. Only the enterprise <-> spoke traffic passes through the firewall:
{: caption="Traffic flow" caption-side="bottom"} {: style="text-align: center;"}
This was achieved with {{site.data.keyword.dl_short}}, {{site.data.keyword.tg_short}} and VPC routing. All zones are configured similarly and the diagram below shows the details of zone 1:
{: caption="VPC Layout" caption-side="bottom"} {: style="text-align: center;"}
The phantom address prefixes in the transit are used to advertise routes. The CIDR 10.1.0.0/16 covers transit and the spokes and is passed through {{site.data.keyword.dl_short}} to the enterprise as an advertised route. Similarly the CIDR 192.168.0.0/24 covers the enterprise and is passed through the {{site.data.keyword.tg_short}} to the spokes as an advertised route.
Egress routes in the spokes route traffic to the firewall-router. Ingress routes in the transit route enterprise <-> spoke traffic through the firewall-router.
{: #vpc-transit2-provision-vpc-network-resources} {: step}
Often an enterprise uses a transit VPC to monitor the traffic with the firewall-router. In part one, only enterprise <-> spoke traffic was flowing through the transit firewall-router. This section is about routing all VPC to VPC traffic through the firewall-router.
This diagram shows the traffic flow implemented in this step:
{: caption="Traffic flow" caption-side="bottom"} {: style="text-align: center;"}
All traffic between VPCs will flow through the firewall-router:
- enterprise <-> spoke.
- enterprise <-> transit.
- transit <-> spoke.
- spoke <-> spoke in different VPC.
Traffic within a VPC will not flow through the firewall.
If continuing from part one, make special note of the configuration in terraform.tfvars: `all_firewall = true`.
{: tip}
{: #vpc-transit2-apply-layers}
- The companion GitHub Repository{: external} has the source files to implement the architecture. In a desktop shell, clone the repository:
git clone https://github.com/IBM-Cloud/vpc-transit
cd vpc-transit
{: codeblock}
- The config_tf directory contains configuration variables that you are required to configure:
cp config_tf/template.terraform.tfvars config_tf/terraform.tfvars
{: codeblock}
- Edit config_tf/terraform.tfvars.
  - Make the required changes.
  - Change the value: `all_firewall = true`.
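  For reference, a sketch of the variables in config_tf/terraform.tfvars that this tutorial refers to. Only variables mentioned in this tutorial are shown; see template.terraform.tfvars for the full set, and note that the values below are changed again in later steps:

  ```terraform
  # Illustrative fragment of config_tf/terraform.tfvars (not the complete file).
  all_firewall                 = true   # route all VPC <-> VPC traffic through the firewall-router
  firewall_nlb                 = false  # set to true later to front the firewalls with a route mode NLB
  number_of_firewalls_per_zone = 1      # increased to 2 when the NLB is introduced
  ```
  {: codeblock}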
- If you don't already have one, obtain a Platform API key and export the API key for use by Terraform:
export IBMCLOUD_API_KEY=YourAPIKEy
{: codeblock}
- Since it is important that each layer is installed in the correct order, and some steps in this tutorial install multiple layers, a shell command ./apply.sh is provided. The following will display help:
./apply.sh
{: codeblock}
- You could apply all of the layers configured by executing `./apply.sh : :`. The colons are shorthand for the first layer (config_tf) and the last layer (vpe_dns_forwarding_rules_tf). The -p option prints the layers:
./apply.sh -p : :
{: codeblock}
- Apply all of the layers in part one and described above:
./apply.sh : spokes_egress_tf
{: codeblock}
If you were following along in part one, some additional ingress routes had been added to the transit ingress route table to avoid routing through the firewall-router. In this step those routes have been removed, and the transit ingress route table has just these entries, so that all incoming traffic for a zone is routed to the firewall-router in the same zone (your Next hop addresses may be different):
Zone | Destination | Next hop |
---|---|---|
Dallas 1 | 10.1.0.0/16 | 10.1.15.196 |
Dallas 2 | 10.2.0.0/16 | 10.2.15.196 |
Dallas 3 | 10.3.0.0/16 | 10.3.15.196 |
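For reference, a minimal Terraform sketch of an ingress routing table like this one, assuming a transit VPC resource named ibm_is_vpc.transit. The names and the next hop address are illustrative; the companion repository uses its own naming and derives the next hop from the firewall resources:

```terraform
# Hedged sketch: an ingress routing table that accepts Transit Gateway and Direct Link traffic,
# plus one of its zonal routes to the firewall-router.
resource "ibm_is_vpc_routing_table" "tgw_ingress" {
  vpc                           = ibm_is_vpc.transit.id
  name                          = "tgw-ingress"
  route_transit_gateway_ingress = true # traffic arriving from Transit Gateway
  route_direct_link_ingress     = true # traffic arriving from Direct Link
}

resource "ibm_is_vpc_routing_table_route" "ingress_z1" {
  vpc           = ibm_is_vpc.transit.id
  routing_table = ibm_is_vpc_routing_table.tgw_ingress.routing_table
  zone          = "us-south-1"
  name          = "to-firewall-z1"
  destination   = "10.1.0.0/16"
  action        = "deliver"
  next_hop      = "10.1.15.196" # zone 1 firewall-router
}
```
{: codeblock}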
To observe this:
- Open the VPCs in the {{site.data.keyword.cloud_notm}}.
- Select the transit VPC and notice the Address prefixes displayed.
- Click Manage routing tables.
- Click on the tgw-ingress transit gateway ingress route table.
{: #vpc-transit2-route-spoke-and-transit-to-firewall-router}
Routing all cloud traffic originating at the spokes through the transit VPC firewall-router in the same zone as the originating instance is accomplished by these routes in the spoke's default egress routing table (shown for Dallas/us-south):
Zone | Destination | Next hop |
---|---|---|
Dallas 1 | 10.0.0.0/8 | 10.1.15.196 |
Dallas 2 | 10.0.0.0/8 | 10.2.15.196 |
Dallas 3 | 10.0.0.0/8 | 10.3.15.196 |
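A minimal Terraform sketch of one of these spoke egress routes, assuming a spoke VPC resource named ibm_is_vpc.spoke0; the VPC reference and next hop address are illustrative, not the repository's exact code:

```terraform
# Hedged sketch: send all cloud-bound traffic from spoke 0 zone 1 to the same-zone firewall-router.
resource "ibm_is_vpc_routing_table_route" "spoke0_egress_z1" {
  vpc           = ibm_is_vpc.spoke0.id
  routing_table = ibm_is_vpc.spoke0.default_routing_table
  zone          = "us-south-1"
  name          = "egress-to-firewall-z1"
  destination   = "10.0.0.0/8"  # all cloud destinations
  action        = "deliver"
  next_hop      = "10.1.15.196" # firewall-router in the same zone
}
```
{: codeblock}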
Similarly, in the transit VPC, route all enterprise and cloud traffic through the firewall-router in the same zone as the originating instance. For example, a transit test instance 10.1.15.4 (Dallas 1) attempting to connect to 10.2.0.4 (spoke 0, zone 2) will be sent through the firewall-router in zone 1: 10.1.15.196.
Routes in transit's default egress routing table (shown for Dallas/us-south):
Zone | Destination | Next hop |
---|---|---|
Dallas 1 | 10.0.0.0/8 | 10.1.15.196 |
Dallas 2 | 10.0.0.0/8 | 10.2.15.196 |
Dallas 3 | 10.0.0.0/8 | 10.3.15.196 |
Dallas 1 | 192.168.0.0/16 | 10.1.15.196 |
Dallas 2 | 192.168.0.0/16 | 10.2.15.196 |
Dallas 3 | 192.168.0.0/16 | 10.3.15.196 |
{: #vpc-transit2-do-not-route-intra-zone-traffic-to-firewall-router}
In this example, intra-VPC traffic will not pass through the firewall-router. For example, resources in spoke 0 can connect to other resources in spoke 0 directly. To accomplish this, additional more specific routes can be added that delegate internal traffic to the default VPC routing. For example, in spoke 0, which has the CIDR ranges 10.1.0.0/24, 10.2.0.0/24, and 10.3.0.0/24, the internal routes can be delegated.
Routes in spoke 0's default egress routing table (shown for Dallas/us-south):
Zone | Destination | Next hop |
---|---|---|
Dallas 1 | 10.1.0.0/24 | delegate |
Dallas 1 | 10.2.0.0/24 | delegate |
Dallas 1 | 10.3.0.0/24 | delegate |
Dallas 2 | 10.1.0.0/24 | delegate |
Dallas 2 | 10.2.0.0/24 | delegate |
Dallas 2 | 10.3.0.0/24 | delegate |
Dallas 3 | 10.1.0.0/24 | delegate |
Dallas 3 | 10.2.0.0/24 | delegate |
Dallas 3 | 10.3.0.0/24 | delegate |
Similar routes are added to the transit and other spokes.
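A minimal Terraform sketch of one of these delegate routes; the resource names are illustrative, and the next_hop value is a placeholder that is not used for delegate routes:

```terraform
# Hedged sketch: one of the nine delegate routes in spoke 0's default egress routing table.
resource "ibm_is_vpc_routing_table_route" "spoke0_delegate_z1" {
  vpc           = ibm_is_vpc.spoke0.id
  routing_table = ibm_is_vpc.spoke0.default_routing_table
  zone          = "us-south-1"
  name          = "delegate-intra-vpc-z1"
  destination   = "10.1.0.0/24"
  action        = "delegate" # fall back to the default VPC routing for intra-VPC traffic
  next_hop      = "0.0.0.0"  # required by the provider, not used for delegate routes
}
```
{: codeblock}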
{: #vpc-transit2-firewall-subnets}
What about the firewall-router itself? This was not mentioned earlier, but in anticipation of this change an egress_delegate routing table was created in the transit VPC that delegates routing to the default behavior for all destinations. It is associated only with the firewall-router subnets, so the firewall-router is not affected by the changes to the default egress routing table used by the other subnets. Check the routing tables for the transit VPC for more details: visit the VPCs in the {{site.data.keyword.cloud_notm}} console, select the transit VPC, click Manage routing tables, click the egress-delegate routing table, click the Subnets tab, and note the -fw subnets used for the firewall-routers.
{: #vpc-transit2-apply-and-test-more-firewall}
- Apply the layer:
./apply.sh all_firewall_tf
{: codeblock}
- Run the test suite. Your expected results are: cross-zone transit <-> spoke and spoke <-> spoke tests will FAIL:
pytest -m "curl and lz1 and (rz1 or rz2)"
{: codeblock}
{: #vpc-transit2-fix-cross-zone-routing}
As mentioned earlier, for a system to be resilient across zonal failures, it is best to eliminate cross-zone traffic. If cross-zone support is required, additional egress routes can be added. The problem for spoke 0 to spoke 1 traffic is shown in this diagram:
{: caption="Fixing cross zone routing" caption-side="bottom"} {: style="text-align: center;"}
The green path is an example of the originator spoke 0 zone 2 10.2.0.4 routing to spoke 1 zone 1 10.1.1.4. The matching egress route is:
Zone | Destination | Next hop |
---|---|---|
Dallas 2 | 10.0.0.0/8 | 10.2.15.196 |
Moving left to right, the firewall-router in the middle zone of the diagram, zone 2, is selected. On the return path, zone 1 is selected, resulting in asymmetric routing.
To fix this, a few more specific routes need to be added that force the higher numbered zones to route to the firewall in the lower numbered zone when a lower numbered zone is the destination. When referencing an equal or higher numbered zone, traffic continues to route to the firewall in the same zone.
{: caption="Cross zone routing enabled" caption-side="bottom"} {: style="text-align: center;"}
Routes in each spoke's default egress routing table (shown for Dallas/us-south):
Zone | Destination | Next hop |
---|---|---|
Dallas 2 | 10.1.0.0/16 | 10.1.15.196 |
Dallas 3 | 10.1.0.0/16 | 10.1.15.196 |
Dallas 3 | 10.2.0.0/16 | 10.2.15.196 |
These routes also correct a similar transit <-> spoke cross-zone asymmetric routing problem. Consider transit worker 10.1.15.4 -> spoke worker 10.2.0.4. Traffic from the transit worker in zone 1 will choose the firewall-router in zone 1 (same zone). On the return trip, instead of the firewall-router in zone 2 (same zone), the firewall-router in zone 1 will now be used.
- Apply the all_firewall_asym layer:
./apply.sh all_firewall_asym_tf
{: codeblock}
- Run the test suite. Your expected results are: all tests PASS. Run them in parallel:
pytest -n 10 -m curl
{: codeblock}
All traffic between VPCs is now routed through the firewall-routers.
{: #vpc-transit2-high-performance-ha-firewall-router} {: step}
To prevent a firewall-router from becoming a performance bottleneck or a single point of failure, it is possible to add a VPC Network Load Balancer to distribute traffic to the zonal firewall-routers, creating a highly available (HA) firewall-router. Check your firewall-router documentation to verify that it supports this architecture.
{: caption="High Availability Firewall" caption-side="bottom"}
This diagram shows a single zone with a Network Load Balancer (NLB) configured in route mode fronting two firewall-routers. To see this constructed, change the configuration and apply again.
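The companion repository creates the route mode NLB with Terraform when the firewall_nlb option is enabled in the next step. A minimal sketch of such an NLB for zone 1 follows; the subnet, security group, and firewall instance references are illustrative assumptions:

```terraform
# Hedged sketch: a private network load balancer in route mode with the firewall-routers as members.
resource "ibm_is_lb" "fw_z1" {
  name            = "fw-z1"
  type            = "private"
  profile         = "network-fixed" # network load balancer
  route_mode      = true            # forward traffic for any destination, not just the NLB IP
  subnets         = [ibm_is_subnet.transit_fw_z1.id]
  security_groups = [ibm_is_security_group.fw.id]
}

resource "ibm_is_lb_pool" "fw_z1" {
  lb             = ibm_is_lb.fw_z1.id
  name           = "fw-z1-pool"
  protocol       = "tcp"
  algorithm      = "round_robin"
  health_type    = "tcp"
  health_delay   = 5
  health_retries = 2
  health_timeout = 2
}

resource "ibm_is_lb_pool_member" "fw_z1" {
  count     = 2
  lb        = ibm_is_lb.fw_z1.id
  pool      = element(split("/", ibm_is_lb_pool.fw_z1.id), 1)
  port      = 80 # port used by the health check; route mode forwards all traffic
  target_id = ibm_is_instance.fw_z1[count.index].id
}
```
{: codeblock}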
- Change these two variables in config_tf/terraform.tfvars:
firewall_nlb = true
number_of_firewalls_per_zone = 2
This change results in the IP address of the firewall-router changing from the firewall-router instance used earlier to the IP address of the NLB. The IP address change needs to be applied to a number of VPC route table routes in the transit and spoke VPCs, so it is best to re-apply all of the layers previously applied:
- Apply all the layers through the all_firewall_asym_tf layer:
./apply.sh : all_firewall_asym_tf
{: codeblock}
Observe the changes that were made:
- Open the Load balancers for VPC.
- Select the load balancer in zone 1 (Dallas 1/us-south-1); it has the suffix fw-z1-s3.
- Note the Private IPs.
Compare the Private IPs with those in the transit VPC ingress route table:
- Open the Virtual Private Clouds.
- Select the transit VPC.
- Click on Manage routing tables.
- Click on the tgw-ingress routing table. Notice that the Next hop IP address matches one of the NLB Private IPs.
Verify resiliency:
- Run the spoke 0 zone 1 tests:
pytest -k r-spoke0-z1 -m curl
{: codeblock}
- Open the Virtual server instances for VPC.
- Stop traffic to firewall instance 0 by specifying a security group that will not allow inbound port 80. Locate the instance with the suffix fw-z1-s3-0 and open the details view:
  - Scroll down and click the pencil edit icon next to the Network Interface.
  - Uncheck the x-fw-inall-outall.
  - Check the x-fw-in22-outall.
  - Click Save.
- Run the pytest again. It will indicate failures. It will take a few minutes for the NLB to stop routing traffic to the unresponsive instance, at which point all tests will pass. Continue waiting and running pytest until all tests pass.
The NLB firewall is no longer required. Remove the NLB firewall:
- Change these two variables in config_tf/terraform.tfvars:
firewall_nlb = false
number_of_firewalls_per_zone = 1
- Apply all the layers through the all_firewall_asym_tf layer:
./apply.sh : all_firewall_asym_tf
{: codeblock}
{: #vpc-transit2-note-about-nlb-configured-in-routing-mode}
NLB route mode rewrites route table entries, always keeping the active NLB appliance IP address in the route table during a failover. However, this is only done for routes in the transit VPC that contains the NLB. The spokes have egress routes that were initialized with one of the NLB appliance IPs; the spoke next hop will not be updated on an NLB appliance failover!
It is therefore required to maintain an ingress route in the transit VPC, which will be rewritten by the NLB to reflect the active appliance. The spoke egress route delivers packets to the correct zone of the transit VPC, where routing finds the matching ingress route that contains the active appliance.
Below is the transit VPC ingress route table discussed earlier. The next hop will be kept up to date with the active NLB appliance. Note that Dallas 3 has a change written by the NLB route mode service to reflect the active appliance.
Zone | Destination | Next hop |
---|---|---|
Dallas 1 | 10.0.0.0/8 | 10.1.15.196 |
Dallas 2 | 10.0.0.0/8 | 10.2.15.196 |
Dallas 3 | 10.0.0.0/8 | 10.3.15.197 |
{: #vpc-transit2-dns} {: step}
The {{site.data.keyword.dns_full_notm}} service is used to convert names to IP addresses. In this example, a separate DNS service is created for the transit and for each of the spokes. This approach provides isolation between teams and allows the architecture to spread across different accounts. If a single DNS service in a single account meets your isolation requirements, it will be simpler to configure. All zones are configured similarly; below is a diagram for a two-zone architecture:
{: caption="DNS Layout" caption-side="bottom"} {: style="text-align: center;"}
{: #vpc-transit2-dns-resources}
- Apply the dns_tf layer to create the DNS services and add a DNS zone for each VPC and an A record for each of the test instances:
./apply.sh dns_tf
{: codeblock}
- Open the Resource list in the {{site.data.keyword.cloud_notm}} console.
- Expand the Networking section and notice the DNS Services.
- Locate and click to open the instance with the suffix spoke0.
- Click on the DNS zone with the suffix spoke0.com. Notice the A records associated with the test instances in the spoke VPC.
- Click on the Custom resolver tab on the left and click on the resolver with the suffix spoke0.com.
- Click on the Forwarding rules tab and notice the forwarding rules.
Separate DNS instances learn each other's DNS names with forwarding rules. In the diagram, arrows indicate a forwarding rule, and the associated tables list the DNS zones that can be resolved through that rule. Starting on the left, notice that the enterprise DNS forwarding rules look to the transit for the DNS zones x-transit.com, x-spoke0.com, and x-spoke1.com. The transit DNS instance can resolve x-transit.com and has its own forwarding rules to the enterprise and spokes to resolve the rest. Similarly, the spokes rely on the transit DNS instance to resolve the enterprise, transit, and the other spokes. A Terraform sketch of such a forwarding rule follows this list. {: tip}
- Optionally explore the other DNS instances and find similarly named DNS zones and A records for the other test instances.
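A minimal Terraform sketch of one such forwarding rule: a spoke 0 rule that forwards queries for the transit zone to the transit custom resolver. The DNS instance, resolver reference, and resolver location IPs are illustrative assumptions, not the repository's exact code:

```terraform
# Hedged sketch: forward spoke 0 queries for x-transit.com to the transit custom resolver.
resource "ibm_dns_custom_resolver_forwarding_rule" "spoke0_to_transit" {
  instance_id = ibm_resource_instance.dns_spoke0.guid
  resolver_id = ibm_dns_custom_resolver.spoke0.custom_resolver_id
  type        = "zone"
  match       = "x-transit.com"
  forward_to  = ["10.1.15.68", "10.2.15.68"] # transit custom resolver location IPs (illustrative)
}
```
{: codeblock}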
{: #vpc-transit2-dns-testing}
There is a set of curl DNS tests that are available in the pytest script. These tests will curl using the DNS name of the remote. There are quite a few so run the tests in parallel:
pytest -n 10 -m dns
{: codeblock}
{: #vpc-transit2-VPE} {: step}
VPC allows private access to IBM Cloud Services through {{site.data.keyword.vpe_full}}. The VPEs allow fine-grained network access control via standard {{site.data.keyword.vpc_short}} controls:
{: caption="Adding virtual private endpoint gateways" caption-side="bottom"} {: style="text-align: center;"}
- Create a {{site.data.keyword.databases-for-postgresql_full_notm}} instance and VPEs for the transit and each of the spoke VPCs by applying the vpe_transit_tf and vpe_spokes_tf layers:
./apply.sh vpe_transit_tf vpe_spokes_tf
{: codeblock}
- There is a set of vpe and vpedns tests available in the pytest script. The vpedns test verifies that the DNS name of a {{site.data.keyword.databases-for-postgresql}} instance is within the private CIDR block of the enclosing VPC. The vpe test executes a psql command to access the {{site.data.keyword.databases-for-postgresql}} instance remotely. Test vpe and vpedns from spoke 0 zone 1:
  - Expected to fail: cross-VPC access to the PostgreSQL DNS names.
pytest -m 'vpe or vpedns' -k spoke0-z1
{: codeblock}
Notice the failing vpedns tests like this one:
FAILED py/test_transit.py::test_vpe_dns_resolution[postgresql spoke0-z1-worker -> transit 720ef5d6-f22d-42ac-a4c9-54b0a71ad5e1.c5kmhkid0ujpmrucb800.private.databases.appdomain.cloud] - AssertionError: 166.9.90.7 not in ['10.1.15.128/26', '10.2.15.128/26'] from 720ef5d6-f22d-42ac-a4c9-54b0a71ad5e1.c5kmhkid0ujpmrucb800.private.databases.appdomain.cloud
These tests are failing due to DNS resolution. A PostgreSQL name, <id>.private.databases.appdomain.cloud, should resolve to a VPE, but the DNS names cannot be resolved by the private DNS resolvers. Adding additional DNS forwarding rules will resolve this issue.
To make the DNS names for the VPEs available outside of the owning DNS service, the DNS forwarding rules must be updated:
- For enterprise, appdomain.cloud will forward to the transit.
- For transit, the fully qualified DNS name of the {{site.data.keyword.databases-for-postgresql}} instance will be forwarded to the spoke that owns the {{site.data.keyword.databases-for-postgresql}} instance.
- For spoke_from -> spoke_to access to PostgreSQL, spoke_from needs the DNS name of the {{site.data.keyword.databases-for-postgresql}} instance. The fully qualified PostgreSQL name in the spoke_from DNS instance will be forwarded to the transit.
- The transit forwards fully qualified PostgreSQL names to the corresponding spoke (a Terraform sketch of one such rule follows this list).
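A minimal sketch of one such rule in a spoke_from DNS instance, assuming the fully qualified hostname can be matched as a zone-type rule; the resource names, hostname placeholder, and resolver IPs are illustrative:

```terraform
# Hedged sketch: forward the fully qualified PostgreSQL name from a spoke to the transit resolver.
resource "ibm_dns_custom_resolver_forwarding_rule" "spoke1_postgresql" {
  instance_id = ibm_resource_instance.dns_spoke1.guid
  resolver_id = ibm_dns_custom_resolver.spoke1.custom_resolver_id
  type        = "zone"
  match       = "<id>.private.databases.appdomain.cloud"  # placeholder for the instance hostname
  forward_to  = ["10.1.15.68", "10.2.15.68"]               # transit custom resolver location IPs (illustrative)
}
```
{: codeblock}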
{: caption="Enabling DNS for virtual private endpoints" caption-side="bottom"} {: style="text-align: center;"}
The diagram uses transit-.databases.appdomain.cloud to identify the database in the transit instead of the fully qualified name like 5c60b3e4-1920-48a3-8e7b-98d5edc6c38a.c7e0lq3d0hm8lbg600bg.private.databases.appdomain.cloud.
- Apply the vpe_dns_forwarding_rules_tf layer:
./apply.sh vpe_dns_forwarding_rules_tf
{: codeblock}
- Verify that all VPEs can be accessed from all test instances. Your expected results are: all PASSED:
pytest -m 'vpe or vpedns'
{: codeblock}
It can take a few tries for the DNS names to be resolved accurately, so try the test at least three times. All tests should pass except the enterprise to spoke VPE tests.
All tests in this tutorial should now pass. There are quite a few, so run them in parallel:
pytest -n 10 -m curl
{: codeblock}
{: #vpc-transit2-production-notes}
The VPC reference architecture for IBM Cloud for Financial Services has much more detail on securing workloads in the {{site.data.keyword.cloud_notm}}.
Some obvious changes to make:
- CIDR blocks were chosen for clarity and ease of explanation. The availability zones in the multizone region could be 10.0.0.0/10, 10.64.0.0/10, and 10.128.0.0/10 to conserve address space. Similarly, the address space for worker nodes could be expanded at the expense of the firewall, DNS, and VPE space.
- Security Groups for each of the network interfaces for worker VSIs, Virtual Private Endpoint Gateways, DNS Locations and firewalls should all be carefully considered.
- Network Access Control Lists for each subnet should be carefully considered.
- Floating IPs were attached to all test instances to support connectivity tests via SSH. This is not required or desirable in production.
- Implement context-based restrictions rules to further control access to all resources.
In this tutorial you created a hub VPC and a set of spoke VPCs. You routed all cross-VPC traffic through a transit VPC firewall-router. DNS services were created for each VPC, and DNS forwarding rules were created between the services for workloads and virtual private endpoint gateways.
{: #vpc-transit2-remove-resources}
Execute `terraform destroy` in all directories in reverse order using the `./apply.sh` command:
./apply.sh -d : :
{: codeblock}
{: #vpc-transit2-expand-tutorial}
Your architecture may not be the same as the one presented, but will likely be constructed from the fundamental components discussed here. Ideas to expand this tutorial:
- Integrate incoming public Internet access using {{site.data.keyword.cis_full}}.
- Add {{site.data.keyword.fl_full}} capture in the transit.
- Put each of the spokes in a separate account in an enterprise.
- Force some of the spoke to spoke traffic through the firewall and some not through the firewall.
- Replace the worker VSIs with {{site.data.keyword.openshiftlong_notm}} and VPC load balancer.
- Force all outbound traffic through the firewall in the transit VPC and through public gateways.
{: #vpc-transit2-related}