Module 6 - Enable egress gateway support

Note: The steps explained here are tailored to this practical exercise of creating and using a Calico Cloud egress gateway with an AWS EKS cluster. If you are interested in the theory behind them, please refer to the Calico Cloud documentation.

  1. Create the IPReservation for the AWS-reserved IPs. This prevents Calico IPAM from allocating the IPs that AWS reserves in each subnet to workloads.

    kubectl create -f - <<EOF
    apiVersion: projectcalico.org/v3
    kind: IPReservation
    metadata:
      name: aws-ip-reservations
    spec:
      reservedCIDRs:
      - 192.168.0.64/30
      - 192.168.0.95
      - 192.168.0.96/30
      - 192.168.0.127
    EOF
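
    Optionally, confirm that the reservation was stored. Assuming the Calico API server is serving projectcalico.org resources (as it does with Calico Cloud), the reserved CIDRs can be read back directly:

    kubectl get ipreservation aws-ip-reservations -o=jsonpath='{.spec.reservedCIDRs}'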
  2. Enable egress gateway support per pod and per namespace.

    kubectl patch felixconfiguration default --type='merge' -p \
        '{"spec":{"egressIPSupport":"EnabledPerNamespaceOrPerPod"}}'
  3. Enable the policy sync API, which the egress gateway container image requires.

    kubectl patch felixconfiguration default --type='merge' -p \
        '{"spec":{"policySyncPathPrefix":"/var/run/nodeagent"}}'
  4. Enable AWS-backed IP pools in Secondary-IP-per-workload mode by setting the awsSecondaryIPSupport field to Enabled (the name Enabled predates the addition of the ENI-per-workload mode):

    kubectl patch felixconfiguration default --type='merge' -p \
        '{"spec":{"awsSecondaryIPSupport":"Enabled"}}'
    # Verify that the nodes report aws-secondary-ipv4 support:
    kubectl describe node $(kubectl get nodes -o=jsonpath='{.items[0].metadata.name}') | grep aws-secondary
  5. Configure IP pools backed by VPC subnets. Create the IPPools to be used by the secondary ENI on each node, using the existing subnets.

    kubectl create -f - <<EOF
    apiVersion: projectcalico.org/v3
    kind: IPPool
    metadata:
      name: hosts-1a
    spec:
      cidr: 192.168.0.64/28
      allowedUses: ["HostSecondaryInterface"]
      awsSubnetID: $SUBNETPUBEGW1AID
      blockSize: 32
      disableBGPExport: true
    ---
    apiVersion: projectcalico.org/v3
    kind: IPPool
    metadata:
      name: hosts-1b
    spec:
      cidr: 192.168.0.96/28
      allowedUses: ["HostSecondaryInterface"]
      awsSubnetID: $SUBNETPUBEGW1BID
      blockSize: 32
      disableBGPExport: true
    EOF

    Check that the IPPools were created:

    kubectl get ippools -o=custom-columns='NAME:.metadata.name,CIDR:.spec.cidr'
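
    Alongside your cluster's default pod IP pool (whose name and CIDR vary by cluster), the output should list the two pools just created, roughly like this:

    NAME        CIDR
    ...         ...              # your cluster's default pod IP pool
    hosts-1a    192.168.0.64/28
    hosts-1b    192.168.0.96/28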
  6. Copy the pull secret from the calico-system namespace to the default namespace so that the egress gateway image can be pulled.

    kubectl get secret tigera-pull-secret --namespace=calico-system -o yaml | \
       grep -v '^[[:space:]]*namespace:[[:space:]]*calico-system' | \
       kubectl apply --namespace=default -f -
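
    To verify the copy, check that the secret now exists in the default namespace:

    kubectl get secret tigera-pull-secret --namespace=default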

➡️ Module 7 - Deploy an Egress Gateway for a per pod selection

⬅️ Module 5 - Create the test environment
↩️ Back to Main