VCL stands for VPP Comms Library. It is a library that allows applications to easily leverage the VPP hoststack (i.e. the TCP, UDP, TLS, QUIC, ... stack implementations living in VPP itself), in a similar way as the libc does for kernel hoststack syscalls. It works by exposing a unix socket over which control messages are exchanged and configurations negotiated.
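In practice this means an unmodified application binary can be run over the VPP hoststack, as the iperf3 example later on this page details. A minimal sketch, where my_app is a placeholder binary and the paths are the ones used in that example:

# point the VCL at its configuration and preload it in place of the libc socket calls
VCL_CONFIG=/etc/vpp/vcl.conf LD_PRELOAD=/usr/local/lib/vpp/libvcl_ldpreload.so my_app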
To enable the VCL in your Calico/VPP cluster, make sure the following parameter is set in the calico-vpp-config ConfigMap:
# dedicated configmap for VPP settings
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-vpp-config
  namespace: calico-vpp-dataplane
data:
  CALICOVPP_FEATURE_GATES: |-
    {
      "vclEnabled": true
    }
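ConfigMap changes are only taken into account when the dataplane pods restart. A sketch of how to roll the change out, assuming the default calico-vpp-node DaemonSet name used by the Calico/VPP manifests:

# restart the dataplane pods so the new feature gate is picked up
kubectl -n calico-vpp-dataplane rollout restart daemonset/calico-vpp-node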
To enable VCL on a pod, use the following annotation:
apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    "cni.projectcalico.org/vppVcl": "enable"
This will trigger VPP to create the VCL abstract socket in the pod. Sessions themselves are only created when the application issues listen or connect calls over the VCL.
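To double-check that the annotation made it onto a running pod, a quick sketch with kubectl (dots in the annotation key must be escaped in jsonpath):

# print the vppVcl annotation of the sample pod
kubectl get pod samplepod -o jsonpath='{.metadata.annotations.cni\.projectcalico\.org/vppVcl}'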
The VPP hoststack exposes an abstract socket: a unix socket over which applications communicate with VPP (leveraging the VCL) in order to open or accept L4+ connections (e.g. TCP) directly in VPP.
Calico/VPP uses abstract sockets because, being anchored to a network namespace, they are better suited to the CNI world than regular unix sockets, which are mounted on the filesystem. Their addresses are encoded as a name with a leading null byte, which is usually replaced by the @ sign when printed.
Calico/VPP exposes the @vpp/session abstract socket within the container, to which the VCL should be pointed.
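Concretely, the VCL is pointed at this socket through its configuration file. A minimal vcl.conf sketch, assuming the default socket name; the fifo sizes are illustrative, the app-socket-api line is the one that matters:

vcl {
  rx-fifo-size 4000000
  tx-fifo-size 4000000
  app-socket-api abstract:vpp/session
}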
The VPP API has a particular syntax for abstract sockets: the keyword abstract: followed by the socket name, then netns_name= followed by the path of the network namespace. For example: abstract:vpp/session,netns_name=/var/run/netns/cni-75e26661-4119-90a4-b9d2-0b00489f76f3. This is the syntax you will encounter in vppctl.
A pod supports having both VCL and memif interfaces at the same time by adding both annotations, as sketched below.
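A sketch of a pod spec combining the two annotations; the vppExtraMemifPorts value is just an example, see the memif documentation for the exact semantics:

apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    "cni.projectcalico.org/vppVcl": "enable"
    # example memif port specification, taken from the memif documentation
    "cni.projectcalico.org/vppExtraMemifPorts": "tcp:4444-20000,udp:4444-20000"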
The VCL can be tested using the iperf3 network performance tool. Here is a deployment example with client/server pods for tests:
kubectl create ns iperf3-vcl
kubectl apply -f test/yaml/iperf3-vcl/test.yaml
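Once applied, check that the client and server pods are up before running the tests:

# list the test pods and the nodes they landed on
kubectl -n iperf3-vcl get pods -o wide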
In an iperf3-vcl pod, the entrypoint is the test/yaml/iperf3-vcl/iperf3-vcl.sh script (see the sketch after this list), which:
- Generates the /etc/vpp/vcl.conf configuration file, which is picked up by the VCL since we set VCL_CONFIG=/etc/vpp/vcl.conf
- Uses LD_PRELOAD=/usr/local/lib/vpp/libvcl_ldpreload.so, which is roughly equivalent to dynamically linking the iperf3 binary against the VCL instead of the libc (so piping the connect() and accept() syscalls to VPP instead of Linux)
- Runs iperf3 -4 -s, which creates the listener session [0:1][T] 11.0.0.139:5201->0.0.0.0:0 LISTEN
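A minimal sketch of what such an entrypoint does, assuming the vcl.conf content shown earlier; the real script lives at test/yaml/iperf3-vcl/iperf3-vcl.sh:

#!/bin/bash
# generate the VCL configuration pointing at the @vpp/session abstract socket
cat > /etc/vpp/vcl.conf <<EOF
vcl {
  app-socket-api abstract:vpp/session
}
EOF

# tell the VCL where its configuration lives
export VCL_CONFIG=/etc/vpp/vcl.conf
# intercept libc socket calls and hand them over to VPP
export LD_PRELOAD=/usr/local/lib/vpp/libvcl_ldpreload.so

# server mode; a client pod would run "iperf3 -4 -c <server-ip>" instead
iperf3 -4 -s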
To check the abstract socket creation on the Linux side, you can run one of the following commands in the pod running the VCL:
- Using lsof:
lsof -U | grep session
vpp_main 1460865 root 54u unix 0xffff90a6df003300 0t0 18574585 @vpp/session type=SEQPACKET
- Using ss:
ss | grep session
u_seq ESTAB 0 0 @vpp/session 18574585 * 18842705
- Using netstat:
netstat | grep session
unix 3 [ ] SEQPACKET CONNECTED 18574585 @vpp/session
Warning: the output might be huge on live systems with a lot of connections! It is preferable to grep for a specific pattern.
Sessions created in VPP are either listeners or established connections over various protocols. To check sessions in the VPP CLI:
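One way to get a VPP CLI on the node hosting the pod is the calivppctl helper shipped with Calico/VPP, assuming it is installed:

# open a vppctl session on the VPP instance of a given node
calivppctl vppctl <nodename>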
_______ _ _ _____ ___
__/ __/ _ \ (_)__ | | / / _ \/ _ \
_/ _// // / / / _ \ | |/ / ___/ ___/
/_/ /____(_)_/\___/ |___/_/ /_/
vpp# sh session verbose
Connection State Rx-f Tx-f
[0:0][CT:T] 0.0.0.0:5201->0.0.0.0:0 LISTEN 0 0
[0:1][T] 11.0.0.139:5201->0.0.0.0:0 LISTEN 0 0
[0:2][CT:T] 0.0.0.0:5201->0.0.0.0:0 LISTEN 0 0
[0:3][T] 11.0.0.140:5201->0.0.0.0:0 LISTEN 0 0
Thread 0: active sessions 4
Some limitations apply:
- VCL currently only works for the primary interface when used in combination with multinet.
- A dataplane restart is currently not detected by the client application; this can be alleviated by making healthchecks travel over the VCL.
- Source address spoofing is not supported, even when uRPF is disabled at the pod level.