diff --git a/content/en/docs/_index.md b/content/en/docs/_index.md
index 142f195..562e230 100644
--- a/content/en/docs/_index.md
+++ b/content/en/docs/_index.md
@@ -7,6 +7,5 @@ menu:
weight: 20
---
-The documentation for each sub-project of the Confidential Containers project is available in the respective tabs, checkout the links below.
-
-[These slides](https://docs.google.com/presentation/d/1cMiehbYq5vRcSwZ0kp_VMCr687ochBtblFFvtx1pl7E/edit#slide=id.g280af1fc7d0_0_122) provide a high-level overview of all the subprojects in the Confidential Containers.
+Confidential Containers is an open source project that brings confidential computing to Cloud Native environments, leveraging hardware technology to protect complex workloads.
+Confidential Containers is a CNCF sandbox project.
diff --git a/content/en/docs/architecture/_index.md b/content/en/docs/architecture/_index.md
new file mode 100644
index 0000000..6d2660e
--- /dev/null
+++ b/content/en/docs/architecture/_index.md
@@ -0,0 +1,7 @@
+---
+title: Architecture
+description: Architectural Details of the Confidential Containers Project
+weight: 30
+---
+
+
diff --git a/content/en/docs/overview/trust-model/_index.md b/content/en/docs/architecture/trust-model/_index.md
similarity index 100%
rename from content/en/docs/overview/trust-model/_index.md
rename to content/en/docs/architecture/trust-model/_index.md
diff --git a/content/en/docs/overview/trust-model/threats_overview.md b/content/en/docs/architecture/trust-model/threats_overview.md
similarity index 100%
rename from content/en/docs/overview/trust-model/threats_overview.md
rename to content/en/docs/architecture/trust-model/threats_overview.md
diff --git a/content/en/docs/overview/trust-model/trust_model.md b/content/en/docs/architecture/trust-model/trust_model.md
similarity index 100%
rename from content/en/docs/overview/trust-model/trust_model.md
rename to content/en/docs/architecture/trust-model/trust_model.md
diff --git a/content/en/docs/overview/trust-model/trust_model_personas.md b/content/en/docs/architecture/trust-model/trust_model_personas.md
similarity index 100%
rename from content/en/docs/overview/trust-model/trust_model_personas.md
rename to content/en/docs/architecture/trust-model/trust_model_personas.md
diff --git a/content/en/docs/trustee/_index.md b/content/en/docs/attestation/_index.md
similarity index 96%
rename from content/en/docs/trustee/_index.md
rename to content/en/docs/attestation/_index.md
index 5f300e4..ce6ae34 100644
--- a/content/en/docs/trustee/_index.md
+++ b/content/en/docs/attestation/_index.md
@@ -1,12 +1,12 @@
---
-title: Trustee
+title: Attestation
description: Trusted Components for Attestation and Secret Management
weight: 50
categories:
-- docs
+- attestation
tags:
-- docs
- trustee
+- attestation
---
-Trustee contains tools and components for attesting confidential guests and providing secrets to them. Collectively, these components are known as Trustee. Trustee typically operates on behalf of the ["workload provider"](../overview/trust-model/trust_model_personas/#workload-provider) / ["data owner"](../overview/trust-model/trust_model_personas/#data-owner) and interacts remotely with [guest components](../guest-components/).
+Trustee contains tools and components for attesting confidential guests and providing secrets to them. Collectively, these components are known as Trustee. Trustee typically operates on behalf of the ["workload provider"](../architecture/trust-model/trust_model_personas/#workload-provider) / ["data owner"](../architecture/trust-model/trust_model_personas/#data-owner) and interacts remotely with [guest components](../guest-components/).
diff --git a/content/en/docs/trustee/attestation-service/_index.md b/content/en/docs/attestation/attestation-service/_index.md
similarity index 100%
rename from content/en/docs/trustee/attestation-service/_index.md
rename to content/en/docs/attestation/attestation-service/_index.md
diff --git a/content/en/docs/trustee/client-tool/_index.md b/content/en/docs/attestation/client-tool/_index.md
similarity index 100%
rename from content/en/docs/trustee/client-tool/_index.md
rename to content/en/docs/attestation/client-tool/_index.md
diff --git a/content/en/docs/trustee/key-broker-service/_index.md b/content/en/docs/attestation/key-broker-service/_index.md
similarity index 100%
rename from content/en/docs/trustee/key-broker-service/_index.md
rename to content/en/docs/attestation/key-broker-service/_index.md
diff --git a/content/en/docs/trustee/key-broker-service/kbs-backed-by-akv.md b/content/en/docs/attestation/key-broker-service/kbs-backed-by-akv.md
similarity index 100%
rename from content/en/docs/trustee/key-broker-service/kbs-backed-by-akv.md
rename to content/en/docs/attestation/key-broker-service/kbs-backed-by-akv.md
diff --git a/content/en/docs/trustee/rvps/_index.md b/content/en/docs/attestation/rvps/_index.md
similarity index 100%
rename from content/en/docs/trustee/rvps/_index.md
rename to content/en/docs/attestation/rvps/_index.md
diff --git a/content/en/docs/cloud-api-adaptor/_index.md b/content/en/docs/cloud-api-adaptor/_index.md
deleted file mode 100644
index 5e9545c..0000000
--- a/content/en/docs/cloud-api-adaptor/_index.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-title: Cloud API Adaptor
-description: Documentation for Cloud API Adaptor a.k.a Peer Pods
-weight: 54
-categories:
-- docs
-tags:
-- docs
-- caa
----
-
-{{% alert title="Warning" color="warning" %}}
-TODO: This was copied with a few adaptations from here:
-It needs to be tested to verify that the instructions still work, and it needs a rework.
-{{% /alert %}}
-
-## Introduction
-
-This repository contains the implementation of the Kata [remote hypervisor](https://github.com/kata-containers/kata-containers/tree/CCv0).
-The Kata remote hypervisor enables the creation of Kata VMs in any environment, without requiring bare-metal servers or nested
-virtualization support.
-
-## Goals
-
-* Accept requests from the Kata shim to create/delete Kata VM instances without requiring nested virtualization support.
-* Manage VM instances in the cloud to run pods, using cloud (virtualization) provider APIs
-* Forward communication between the Kata shim on a worker node VM and the Kata agent on a pod VM
-* Provide a mechanism to establish a network tunnel between worker and pod VMs to extend the Kubernetes pod network
-
-## Architecture
-
-The background and description of the components involved in 'peer pods' can be found in the [architecture documentation](./docs/architecture.md).
-
-## Components
-
-* Cloud API adaptor ([cmd/cloud-api-adaptor](./cmd/cloud-api-adaptor)) - `cloud-api-adaptor` implements the remote hypervisor support.
-* Agent protocol forwarder ([cmd/agent-protocol-forwarder](./cmd/agent-protocol-forwarder))
-
-## Installation
-
-Please refer to the instructions mentioned in the following [doc](install/README.md).
-
-## Supported Providers
-
-* aws
-* azure
-* ibmcloud
-* libvirt
-* vsphere
-
-### Adding a new provider
-
-Please refer to the instructions mentioned in the following [doc](./docs/addnewprovider.md).
-
-## Contribution
-
-This project uses [the Apache 2.0 license](./LICENSE). Contribution to this project requires the [DCO 1.1](./DCO1.1.txt) process to be followed.
-
-## Collaborations
-
-* Slack: Channel [#confidential-containers-peerpod](https://cloud-native.slack.com/archives/C04A2EJ70BX) on [CNCF](https://communityinviter.com/apps/cloud-native/cncf) slack.
-* Weekly Community [meeting](https://zoom.us/j/94601737867?pwd=MEF5NkN5ZkRDcUtCV09SQllMWWtzUT09) at 14:00 - 15:00 UTC every Wednesday.
diff --git a/content/en/docs/cloud-api-adaptor/aws/_index.md b/content/en/docs/cloud-api-adaptor/aws/_index.md
deleted file mode 100644
index 5309696..0000000
--- a/content/en/docs/cloud-api-adaptor/aws/_index.md
+++ /dev/null
@@ -1,138 +0,0 @@
----
-title: AWS
-description: Documentation for peerpods on AWS
-categories:
-- docs
-tags:
-- docs
-- caa
-- aws
----
-
-{{% alert title="Warning" color="warning" %}}
-TODO: This was copied with a few adaptations from here:
-It needs to be tested to verify that the instructions still work, and it needs a rework.
-{{% /alert %}}
-
-## Prerequisites
-
-- Set `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` for AWS CLI access
-
-- Install Packer by following the instructions in this [link](https://learn.hashicorp.com/tutorials/packer/get-started-install-cli)
-
-- Install Packer's Amazon plugin: `packer plugins install github.com/hashicorp/amazon`
-
-## Build Pod VM Image
-
-### Option-1: Modifying existing marketplace image
-
-- Set environment variables
-
-```
-export AWS_REGION="us-east-1" # mandatory
-export PODVM_DISTRO=rhel # mandatory
-export INSTANCE_TYPE=t3.small # optional, default is t3.small
-export IMAGE_NAME=peer-pod-ami # optional
-export VPC_ID=vpc-01234567890abcdef # optional, otherwise, it creates and uses the default vpc in the specific region
-export SUBNET_ID=subnet-01234567890abcdef # must be set if VPC_ID is set
-```
-
-If you want to change the volume size of the generated AMI, then set the `VOLUME_SIZE` environment variable.
-For example if you want to set the volume size to 40 GiB, then do the following:
-
-```
-export VOLUME_SIZE=40
-```
-
-- Create a custom AWS VM image based on Ubuntu 22.04 that includes the kata-agent and other required dependencies
-
-> **NOTE**: For setting up authenticated registry support, read this [documentation](../docs/registries-authentication.md).
-
-```
-cd image
-make image
-```
-
-You can also build the custom AMI by running the packer build inside a container:
-
-```
-docker build -t aws \
---secret id=AWS_ACCESS_KEY_ID \
---secret id=AWS_SECRET_ACCESS_KEY \
---build-arg AWS_REGION=${AWS_REGION} \
--f Dockerfile .
-```
-
-If you want to use an existing `VPC_ID` with public `SUBNET_ID` then use the following command:
-
-```
-docker build -t aws \
---secret id=AWS_ACCESS_KEY_ID \
---secret id=AWS_SECRET_ACCESS_KEY \
---build-arg AWS_REGION=${AWS_REGION} \
---build-arg VPC_ID=${VPC_ID} \
---build-arg SUBNET_ID=${SUBNET_ID} \
--f Dockerfile .
-```
-
-If you want to build a CentOS-based custom AMI, you'll need to first
-accept the terms by visiting this [link](https://aws.amazon.com/marketplace/pp?sku=bz4vuply68xrif53movwbkpnl)
-
-Once done, run the following command:
-
-```
-docker build -t aws \
---secret id=AWS_ACCESS_KEY_ID \
---secret id=AWS_SECRET_ACCESS_KEY \
---build-arg AWS_REGION=${AWS_REGION} \
---build-arg BINARIES_IMG=quay.io/confidential-containers/podvm-binaries-centos-amd64 \
---build-arg PODVM_DISTRO=centos \
--f Dockerfile .
-```
-
-- Note down your newly created AMI_ID
-
-Once the image creation is complete, you can also use the following CLI command to
-retrieve the AMI_ID. The command assumes that you are using the default AMI name: `peer-pod-ami`
-
-```
-aws ec2 describe-images --query "Images[*].[ImageId]" --filters "Name=name,Values=peer-pod-ami" --region ${AWS_REGION} --output text
-```
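-
-If you want the AMI ID in an environment variable for the later kustomize step, a query along these lines may help (a sketch; it assumes the default `peer-pod-ami` name and picks the newest matching image):
-
-```bash
-# Capture the most recently created AMI matching the default name.
-export PODVM_AMI_ID=$(aws ec2 describe-images \
-  --filters "Name=name,Values=peer-pod-ami" \
-  --query "sort_by(Images, &CreationDate)[-1].ImageId" \
-  --region "${AWS_REGION}" --output text)
-echo "PODVM_AMI_ID: ${PODVM_AMI_ID}"
-```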
-
-### Option-2: Using precreated QCOW2 image
-
-- Download QCOW2 image
-
-```
-mkdir -p qcow2-img && cd qcow2-img
-
-curl -LO https://raw.githubusercontent.com/confidential-containers/cloud-api-adaptor/staging/podvm/hack/download-image.sh
-
-bash download-image.sh quay.io/confidential-containers/podvm-generic-ubuntu-amd64:latest . -o podvm.qcow2
-
-```
-
-- Convert QCOW2 image to RAW format
-
-You'll need the `qemu-img` tool for the conversion.
-
-```
-qemu-img convert -O raw podvm.qcow2 podvm.raw
-```
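-
-On Debian/Ubuntu hosts `qemu-img` is provided by the `qemu-utils` package, so if the tool is missing it can likely be installed with:
-
-```bash
-sudo apt-get install -y qemu-utils
-```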
-
-- Upload RAW image to S3 and create AMI
-
-You can use the following helper script to upload the podvm.raw image to S3 and create an AMI.
-Note that the AWS CLI must be configured before using the helper script.
-
-```
-curl -LO https://raw.githubusercontent.com/confidential-containers/cloud-api-adaptor/staging/aws/raw-to-ami.sh
-
-bash raw-to-ami.sh podvm.raw
-```
-
-On success, the command will generate the `AMI_ID`, which needs to be used to set the value of `PODVM_AMI_ID` in the `peer-pods-cm` configmap.
-
-## Running cloud-api-adaptor
-
-- Update [kustomization.yaml](../install/overlays/aws/kustomization.yaml) with your AMI_ID
-
-- Deploy Cloud API Adaptor by following the [install](../install/README.md) guide
diff --git a/content/en/docs/cloud-api-adaptor/azure/_index.md b/content/en/docs/cloud-api-adaptor/azure/_index.md
deleted file mode 100644
index 0fa3204..0000000
--- a/content/en/docs/cloud-api-adaptor/azure/_index.md
+++ /dev/null
@@ -1,537 +0,0 @@
----
-title: Azure
-description: Cloud API Adaptor (CAA) on Azure
-categories:
-- docs
-tags:
-- docs
-- caa
-- azure
----
-
-This documentation will walk you through setting up CAA (a.k.a. Peer Pods) on Azure Kubernetes Service (AKS). It explains how to deploy:
-
-- A single worker node Kubernetes cluster using Azure Kubernetes Service (AKS)
-- CAA on that Kubernetes cluster
-- An Nginx pod backed by CAA pod VM
-
-## Pre-requisites
-
-- Install Azure CLI by following instructions [here](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli).
-- Install kubectl by following the instructions [here](https://kubernetes.io/docs/tasks/tools/#kubectl).
-- Ensure that the tools `curl`, `git`, `jq` and `sipcalc` are installed.
-
-## Azure Preparation
-
-### Azure login
-
-Several of the steps below require you to be logged in to your Azure account:
-
-```bash
-az login
-```
-
-Retrieve your subscription ID:
-
-```bash
-export AZURE_SUBSCRIPTION_ID=$(az account show --query id --output tsv)
-```
-
-Set the region:
-
-{{< tabpane text=true right=true persist=header >}}
-
-{{% tab header="AMD SEV-SNP" %}}
-
-```bash
-export AZURE_REGION="eastus"
-```
-
-> **Note:** We selected the `eastus` region as it not only offers AMD SEV-SNP machines but also has prebuilt pod VM images readily available.
-
-{{% /tab %}}
-
-{{% tab header="Intel TDX" %}}
-
-```bash
-export AZURE_REGION="eastus2"
-```
-
-> **Note:** We selected the `eastus2` region as it not only offers Intel TDX machines but also has prebuilt pod VM images readily available.
-
-{{% /tab %}}
-
-{{% tab header="Non-Confidential" %}}
-
-```bash
-export AZURE_REGION="eastus"
-```
-
-> **Note:** We chose the `eastus` region because it has prebuilt pod VM images readily available.
-
-{{% /tab %}}
-{{< /tabpane >}}
-
-### Resource group
-
-> **Note**: Skip this step if you already have a resource group you want to use. Please export the resource group name in the `AZURE_RESOURCE_GROUP` environment variable.
-
-Create an Azure resource group by running the following command:
-
-```bash
-export AZURE_RESOURCE_GROUP="caa-rg-$(date '+%Y%m%b%d%H%M%S')"
-
-az group create \
- --name "${AZURE_RESOURCE_GROUP}" \
- --location "${AZURE_REGION}"
-```
-
-### Deploy Kubernetes using AKS
-
-Make changes to the following environment variables as you see fit:
-
-```bash
-export CLUSTER_NAME="caa-$(date '+%Y%m%b%d%H%M%S')"
-export AKS_WORKER_USER_NAME="azuser"
-export AKS_RG="${AZURE_RESOURCE_GROUP}-aks"
-export SSH_KEY=~/.ssh/id_rsa.pub
-```
-
-> **Note**: Optionally, deploy the worker nodes into an existing Azure Virtual Network (VNet) and subnet by adding the following flag: `--vnet-subnet-id $MY_SUBNET_ID`.
-
-Deploy AKS with single worker node to the same resource group you created earlier:
-
-```bash
-az aks create \
- --resource-group "${AZURE_RESOURCE_GROUP}" \
- --node-resource-group "${AKS_RG}" \
- --name "${CLUSTER_NAME}" \
- --enable-oidc-issuer \
- --enable-workload-identity \
- --location "${AZURE_REGION}" \
- --node-count 1 \
- --node-vm-size Standard_F4s_v2 \
- --nodepool-labels node.kubernetes.io/worker= \
- --ssh-key-value "${SSH_KEY}" \
- --admin-username "${AKS_WORKER_USER_NAME}" \
- --os-sku Ubuntu
-```
-
-Download kubeconfig locally to access the cluster using `kubectl`:
-
-```bash
-az aks get-credentials \
- --resource-group "${AZURE_RESOURCE_GROUP}" \
- --name "${CLUSTER_NAME}"
-```
-
-### User assigned identity and federated credentials
-
-CAA needs privileges to talk to the Azure API. This privilege is granted to CAA by associating a workload identity with the CAA service account. This workload identity (a.k.a. user-assigned identity) is given permissions to create VMs, fetch images and join networks in the next step.
-
-> **Note**: If you use an existing AKS cluster it might need to be configured to support workload identity and OpenID Connect (OIDC), please refer to the instructions in [this guide](https://learn.microsoft.com/en-us/azure/aks/workload-identity-deploy-cluster#update-an-existing-aks-cluster).
-
-Start by creating an identity for CAA:
-
-```bash
-export AZURE_WORKLOAD_IDENTITY_NAME="caa-${CLUSTER_NAME}"
-
-az identity create \
- --name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
- --resource-group "${AZURE_RESOURCE_GROUP}" \
- --location "${AZURE_REGION}"
-```
-
-```bash
-export USER_ASSIGNED_CLIENT_ID="$(az identity show \
- --resource-group "${AZURE_RESOURCE_GROUP}" \
- --name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
- --query 'clientId' \
- -otsv)"
-```
-
-### Networking
-
-The VMs that will host Pods will commonly require access to internet services, e.g. to pull images from a public OCI registry. A discrete subnet can be created next to the AKS cluster subnet in the same VNet. We then attach a NAT gateway with a public IP to that subnet:
-
-
-```bash
-export AZURE_VNET_NAME="$(az network vnet list -g ${AKS_RG} --query '[].name' -o tsv)"
-export AKS_CIDR="$(az network vnet show -n $AZURE_VNET_NAME -g $AKS_RG --query "subnets[?name == 'aks-subnet'].addressPrefix" -o tsv)"
-# 10.224.0.0/16
-export MASK="${AKS_CIDR#*/}"
-# 16
-PEERPOD_CIDR="$(sipcalc $AKS_CIDR -n 2 | grep ^Network | grep -v current | cut -d' ' -f2)/${MASK}"
-# 10.225.0.0/16
-az network public-ip create -g "$AKS_RG" -n peerpod
-az network nat gateway create -g "$AKS_RG" -l "$AZURE_REGION" --public-ip-addresses peerpod -n peerpod
-az network vnet subnet create -g "$AKS_RG" --vnet-name "$AZURE_VNET_NAME" --nat-gateway peerpod --address-prefixes "$PEERPOD_CIDR" -n peerpod
-export AZURE_SUBNET_ID="$(az network vnet subnet show -g "$AKS_RG" --vnet-name "$AZURE_VNET_NAME" -n peerpod --query id -o tsv)"
-```
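-
-As an optional sanity check, you can confirm that the new subnet carries the computed CIDR and has the NAT gateway attached; the following is a sketch using the names chosen above:
-
-```bash
-# Expect the peerpod subnet with the computed CIDR and the NAT gateway attached.
-az network vnet subnet show \
-  -g "$AKS_RG" --vnet-name "$AZURE_VNET_NAME" -n peerpod \
-  --query "{cidr: addressPrefix, natGateway: natGateway.id}" -o table
-```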
-
-### AKS resource group permissions
-
-For CAA to be able to manage VMs, assign the identity the VM Contributor and Network Contributor roles: privileges to spawn VMs in `$AZURE_RESOURCE_GROUP` and to attach to a VNet in `$AKS_RG`.
-
-```bash
-az role assignment create \
- --role "Virtual Machine Contributor" \
- --assignee "$USER_ASSIGNED_CLIENT_ID" \
- --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AZURE_RESOURCE_GROUP}"
-```
-
-```bash
-az role assignment create \
- --role "Reader" \
- --assignee "$USER_ASSIGNED_CLIENT_ID" \
- --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AZURE_RESOURCE_GROUP}"
-```
-
-```bash
-az role assignment create \
- --role "Network Contributor" \
- --assignee "$USER_ASSIGNED_CLIENT_ID" \
- --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AKS_RG}"
-```
-
-Create the federated credential for the CAA ServiceAccount using the OIDC endpoint from the AKS cluster:
-
-```bash
-export AKS_OIDC_ISSUER="$(az aks show \
- --name "${CLUSTER_NAME}" \
- --resource-group "${AZURE_RESOURCE_GROUP}" \
- --query "oidcIssuerProfile.issuerUrl" \
- -otsv)"
-```
-
-```bash
-az identity federated-credential create \
- --name "caa-${CLUSTER_NAME}" \
- --identity-name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
- --resource-group "${AZURE_RESOURCE_GROUP}" \
- --issuer "${AKS_OIDC_ISSUER}" \
- --subject system:serviceaccount:confidential-containers-system:cloud-api-adaptor \
- --audience api://AzureADTokenExchange
-```
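-
-Optionally, verify that the federated credential exists with the expected service account subject (a sketch, assuming the names used above):
-
-```bash
-# The subject should match the CAA ServiceAccount in confidential-containers-system.
-az identity federated-credential list \
-  --identity-name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
-  --resource-group "${AZURE_RESOURCE_GROUP}" \
-  --query "[].{name: name, subject: subject}" -o table
-```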
-
-## Deploy CAA
-
-> **Note**: If you are using the Calico Container Network Interface (CNI) on the Kubernetes cluster, then [configure](https://projectcalico.docs.tigera.io/networking/vxlan-ipip#configure-vxlan-encapsulation-for-all-inter-workload-traffic) Virtual Extensible LAN (VXLAN) encapsulation for all inter-workload traffic.
-
-### Download the CAA deployment artifacts
-
-{{< tabpane text=true right=true persist=header >}}
-{{% tab header="**Versions**:" disabled=true /%}}
-
-{{% tab header="Last Release" %}}
-
-```bash
-export CAA_VERSION="0.10.0"
-curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/tags/v${CAA_VERSION}.tar.gz"
-tar -xvzf "v${CAA_VERSION}.tar.gz"
-cd "cloud-api-adaptor-${CAA_VERSION}/src/cloud-api-adaptor"
-```
-
-{{% /tab %}}
-
-{{% tab header="Latest Build" %}}
-
-```bash
-export CAA_BRANCH="main"
-curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/heads/${CAA_BRANCH}.tar.gz"
-tar -xvzf "${CAA_BRANCH}.tar.gz"
-cd "cloud-api-adaptor-${CAA_BRANCH}/src/cloud-api-adaptor"
-```
-
-{{% /tab %}}
-
-{{% tab header="DIY" %}}
-This assumes that you already have the code ready to use. On your terminal, change directory to the Cloud API Adaptor code base.
-{{% /tab %}}
-
-{{< /tabpane >}}
-
-### CAA pod VM image
-
-{{< tabpane text=true right=true persist=header >}}
-{{% tab header="**Versions**:" disabled=true /%}}
-
-{{% tab header="Last Release" %}}
-
-Export this environment variable to use for the peer pod VM:
-
-```bash
-export AZURE_IMAGE_ID="/CommunityGalleries/cococommunity-42d8482d-92cd-415b-b332-7648bd978eff/Images/peerpod-podvm-ubuntu2204-cvm-snp/Versions/${CAA_VERSION}"
-```
-
-{{% /tab %}}
-
-{{% tab header="Latest Build" %}}
-
-An automated job builds the pod VM image each night at 00:00 UTC. You can use that image by exporting the following environment variable:
-
-```bash
-SUCCESS_TIME=$(curl -s \
- -H "Accept: application/vnd.github+json" \
- "https://api.github.com/repos/confidential-containers/cloud-api-adaptor/actions/workflows/azure-podvm-image-nightly-build.yml/runs?status=success" \
- | jq -r '.workflow_runs[0].updated_at')
-
-export AZURE_IMAGE_ID="/CommunityGalleries/cocopodvm-d0e4f35f-5530-4b9c-8596-112487cdea85/Images/podvm_image0/Versions/$(date -u -jf "%Y-%m-%dT%H:%M:%SZ" "$SUCCESS_TIME" "+%Y.%m.%d" 2>/dev/null || date -d "$SUCCESS_TIME" +%Y.%m.%d)"
-```
-
-The image version above is in the `YYYY.MM.DD` format, so the latest image should carry today's or yesterday's date.
-
-{{% /tab %}}
-
-{{% tab header="DIY" %}}
-
-If you have made changes to the CAA code that affect the pod VM image and you want to deploy those changes, then follow [these instructions](https://github.com/confidential-containers/cloud-api-adaptor/blob/main/src/cloud-api-adaptor/azure/build-image.md) to build the pod VM image. Once the image build is finished, export the image ID in the environment variable `AZURE_IMAGE_ID`.
-
-{{% /tab %}}
-
-{{< /tabpane >}}
-
-### CAA container image
-
-{{< tabpane text=true right=true persist=header >}}
-{{% tab header="**Versions**:" disabled=true /%}}
-
-{{% tab header="Last Release" %}}
-
-Export the following environment variable to use the latest release image of CAA:
-
-```bash
-export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"
-export CAA_TAG="v0.10.0-amd64"
-```
-
-{{% /tab %}}
-
-{{% tab header="Latest Build" %}}
-
-Export the following environment variable to use the image built by the CAA CI on each merge to main:
-
-```bash
-export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"
-```
-
-Find an appropriate pre-built image tag suitable to your needs [here](https://quay.io/repository/confidential-containers/cloud-api-adaptor?tab=tags&tag=latest).
-
-```bash
-export CAA_TAG=""
-```
-
-> **Caution**: You can also use the `latest` tag but it is **not** recommended, because of its lack of version control and potential for unpredictable updates, impacting stability and reproducibility in deployments.
-
-{{% /tab %}}
-
-{{% tab header="DIY" %}}
-
-If you have made changes to the CAA code and you want to deploy those changes, then follow [these instructions](https://github.com/confidential-containers/cloud-api-adaptor/blob/main/src/cloud-api-adaptor/install/README.md#building-custom-cloud-api-adaptor-image) to build the container image. Once the image is built, export the environment variables `CAA_IMAGE` and `CAA_TAG`.
-
-{{% /tab %}}
-
-{{< /tabpane >}}
-
-### Annotate Service Account
-
-Annotate the CAA Service Account with the workload identity's `CLIENT_ID` and make the CAA DaemonSet use workload identity for authentication:
-
-```yaml
-cat <<EOF > install/overlays/azure/workload-identity.yaml
-apiVersion: apps/v1
-kind: DaemonSet
-metadata:
- name: cloud-api-adaptor-daemonset
- namespace: confidential-containers-system
-spec:
- template:
- metadata:
- labels:
- azure.workload.identity/use: "true"
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
- name: cloud-api-adaptor
- namespace: confidential-containers-system
- annotations:
- azure.workload.identity/client-id: "$USER_ASSIGNED_CLIENT_ID"
-EOF
-```
-
-### Select peer-pods machine type
-
-{{< tabpane text=true right=true persist=header >}}
-{{% tab header="AMD SEV-SNP" %}}
-
-```bash
-export AZURE_INSTANCE_SIZE="Standard_DC2as_v5"
-export DISABLECVM="false"
-```
-
-Find more AMD SEV-SNP machine types on [this](https://learn.microsoft.com/en-us/azure/virtual-machines/dasv5-dadsv5-series) Azure documentation.
-
-{{% /tab %}}
-
-{{% tab header="Intel TDX" %}}
-
-```bash
-export AZURE_INSTANCE_SIZE="Standard_DC2es_v5"
-export DISABLECVM="false"
-```
-
-Find more Intel TDX machine types on [this](https://learn.microsoft.com/en-us/azure/virtual-machines/dcesv5-dcedsv5-series) Azure documentation.
-
-{{% /tab %}}
-
-{{% tab header="Non-Confidential" %}}
-
-```bash
-export AZURE_INSTANCE_SIZE="Standard_D2as_v5"
-export DISABLECVM="true"
-```
-
-{{% /tab %}}
-{{< /tabpane >}}
-
-### Populate the `kustomization.yaml` file
-
-Run the following command to update the [`kustomization.yaml`](https://github.com/confidential-containers/cloud-api-adaptor/blob/main/install/overlays/azure/kustomization.yaml) file:
-
-```yaml
-cat <<EOF > install/overlays/azure/kustomization.yaml
-apiVersion: kustomize.config.k8s.io/v1beta1
-kind: Kustomization
-bases:
-- ../../yamls
-images:
-- name: cloud-api-adaptor
- newName: "${CAA_IMAGE}"
- newTag: "${CAA_TAG}"
-generatorOptions:
- disableNameSuffixHash: true
-configMapGenerator:
-- name: peer-pods-cm
- namespace: confidential-containers-system
- literals:
- - CLOUD_PROVIDER="azure"
- - AZURE_SUBSCRIPTION_ID="${AZURE_SUBSCRIPTION_ID}"
- - AZURE_REGION="${AZURE_REGION}"
- - AZURE_INSTANCE_SIZE="${AZURE_INSTANCE_SIZE}"
- - AZURE_RESOURCE_GROUP="${AZURE_RESOURCE_GROUP}"
- - AZURE_SUBNET_ID="${AZURE_SUBNET_ID}"
- - AZURE_IMAGE_ID="${AZURE_IMAGE_ID}"
- - DISABLECVM="${DISABLECVM}"
-secretGenerator:
-- name: peer-pods-secret
- namespace: confidential-containers-system
-- name: ssh-key-secret
- namespace: confidential-containers-system
- files:
- - id_rsa.pub
-patchesStrategicMerge:
-- workload-identity.yaml
-EOF
-```
-
-The SSH public key must be reachable from the `kustomization.yaml` file, so copy it into the overlay directory:
-
-```bash
-cp $SSH_KEY install/overlays/azure/id_rsa.pub
-```
-
-### Deploy CAA on the Kubernetes cluster
-
-Deploy the CoCo operator:
-
-```bash
-export COCO_OPERATOR_VERSION="0.10.0"
-kubectl apply -k "github.com/confidential-containers/operator/config/release?ref=v${COCO_OPERATOR_VERSION}"
-kubectl apply -k "github.com/confidential-containers/operator/config/samples/ccruntime/peer-pods?ref=v${COCO_OPERATOR_VERSION}"
-```
-
-Run the following command to deploy CAA:
-
-```bash
-kubectl apply -k "install/overlays/azure"
-```
-
-Generic CAA deployment instructions are also described [here](https://github.com/confidential-containers/cloud-api-adaptor/blob/main/install/README.md).
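-
-Before moving on, you can watch the CAA pods come up in the `confidential-containers-system` namespace:
-
-```bash
-kubectl get pods -n confidential-containers-system --watch
-```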
-
-## Run sample application
-
-### Ensure runtimeclass is present
-
-Verify that the `runtimeclass` is created after deploying CAA:
-
-```bash
-kubectl get runtimeclass
-```
-
-Once you find a `runtimeclass` named `kata-remote`, you can be sure that the deployment was successful. A successful deployment will look like this:
-
-```console
-$ kubectl get runtimeclass
-NAME HANDLER AGE
-kata-remote kata-remote 7m18s
-```
-
-### Deploy workload
-
-Create an `nginx` deployment; a minimal example manifest that pins the `kata-remote` runtime class looks like this:
-
-```yaml
-cat <<EOF | kubectl apply -f -
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: nginx
-spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      app: nginx
-  template:
-    metadata:
-      labels:
-        app: nginx
-    spec:
-      runtimeClassName: kata-remote
-      containers:
-      - name: nginx
-        image: nginx
-EOF
-```
-
-> **Note**: If you run into problems, check the troubleshooting guide [here](../troubleshooting/).
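-
-To confirm the workload landed on a peer pod VM, you can wait for the deployment to become available (a quick check, assuming the manifest above):
-
-```bash
-kubectl wait --for=condition=available --timeout=120s deployment/nginx
-kubectl get pods -o wide -l app=nginx
-```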
-
-## Cleanup
-
-If you wish to clean up the whole set up, you can delete the resource group by running the following command:
-
-```bash
-az group delete \
- --name "${AZURE_RESOURCE_GROUP}" \
- --yes --no-wait
-```
diff --git a/content/en/docs/cloud-api-adaptor/ibm-cloud/_index.md b/content/en/docs/cloud-api-adaptor/ibm-cloud/_index.md
deleted file mode 100644
index 808569b..0000000
--- a/content/en/docs/cloud-api-adaptor/ibm-cloud/_index.md
+++ /dev/null
@@ -1,214 +0,0 @@
----
-title: IBM Cloud
-description: Documentation for peerpods on IBM Cloud
-categories:
-- docs
-tags:
-- docs
-- caa
-- ibm
----
-
-{{% alert title="Warning" color="warning" %}}
-TODO: This was copied with a few adaptations from here:
-It needs to be tested to verify that the instructions still work, and it needs a rework.
-{{% /alert %}}
-
-This guide describes how to set up a demo environment on IBM Cloud for peer pod VMs using the operator deployment approach.
-
-The high level flow involved is:
-
-- Build and upload a peer pod custom image to IBM Cloud
-- Create a 'self-managed' Kubernetes cluster on IBM Cloud provided infrastructure
-- Deploy Confidential-containers operator
-- Deploy and validate that the nginx demo works
-- Clean-up and deprovision
-
-## Pre-reqs
-
-When building the peer pod VM image, it is simplest to use the container-based approach, which only requires either
-`docker` or `podman`, but the image can also be built locally.
-
-> **Note:** the peer pod VM image build and upload are de-coupled from the cluster creation and operator deployment stages,
-so the image can be built on a different machine.
-
-There are a number of packages that you will need to install in order to create the Kubernetes cluster and enable peer pods on it:
-
-- Terraform, Ansible, the IBM Cloud CLI and `kubectl` are all required for the cluster creation and explained in
-the [cluster pre-reqs guide](./cluster/README.md#prerequisites).
-
-In addition to this you will need to install [`jq`](https://stedolan.github.io/jq/download/)
-> **Tip:** If you are using Ubuntu Linux, you can run the following command:
->
-> ```bash
-> $ sudo apt-get install jq
-> ```
-
-You will also require [go](https://go.dev/doc/install) and `make` to be installed.
-
-## Peer Pod VM Image
-
-A peer pod VM image needs to be created as a VPC custom image in IBM Cloud; the peer pod instances are created
-from it. The peer pod VM image contains components like the agent protocol forwarder and the Kata agent that communicate with
-the Kubernetes worker node and carry out the received instructions inside the peer pod.
-
-### Building a Peer Pod VM Image via Docker [Optional]
-
-You may skip this step and use one of the release images; skip ahead to [Import Release VM Image](#import-release-vm-image). However, for the latest features you may wish to build your own.
-
-You can do this by following the process [documented](../podvm/README.md). If building within a container, ensure that `--build-arg CLOUD_PROVIDER=ibmcloud` is set, and `--build-arg ARCH=s390x` for an `s390x` architecture image.
-
-> **Note:** At the time of writing, issue [#649](https://github.com/confidential-containers/cloud-api-adaptor/issues/649) means that when creating an `s390x` image you also need to add two extra
-build args: `--build-arg UBUNTU_IMAGE_URL=""` and `--build-arg UBUNTU_IMAGE_CHECKSUM=""`
-
-> **Note:** If building the peer pod qcow2 image within a VM, it may require substantial resources, e.g. 8 vCPUs and
-32GB RAM, due to the nested virtualization performance limitations. When running without enough resources, the failure
-seen is similar to:
->
-> ```
-> Build 'qemu.ubuntu' errored after 5 minutes 57 seconds: Timeout waiting for SSH.
-> ```
-
-#### Upload the built peer pod VM image to IBM Cloud
-
-You can follow the process [documented](./IMPORT_PODVM_TO_VPC.md) in `cloud-api-adaptor/ibmcloud/image` to extract and upload
-the peer pod image you've just built to IBM Cloud as a custom image, taking care to replace the
-`quay.io/confidential-containers/podvm-ibmcloud-ubuntu-s390x` reference with the local container image that you built
-above, e.g. `localhost/podvm_ibmcloud_s390x:latest`.
-
-This script will end with a line of the form: `Image <image-name> with id <image-id> is available`. The `image-id` field will be
-needed in the kustomize step later.
-
-## Import Release VM Image
-
-Alternatively, to use a pre-built peer pod VM image you can follow the process [documented](./IMPORT_PODVM_TO_VPC.md) with the release images found at `quay.io/confidential-containers/podvm-generic-ubuntu-<arch>`. Running this command will require docker or podman, as per [tools](./IMPORT_PODVM_TO_VPC.md#tools).
-
-```bash
- ./import.sh quay.io/confidential-containers/podvm-generic-ubuntu-s390x eu-gb --bucket example-bucket --instance example-cos-instance
-```
-
-This script will end with a line of the form: `Image <image-name> with id <image-id> is available`. The `image-id` field will be
-needed in later steps.
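-
-The ID can be exported right away so it is at hand for the provisioning step later (the value below is only a placeholder):
-
-```bash
-# Placeholder: replace with the image ID reported by import.sh.
-export PODVM_IMAGE_ID="r010-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-```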
-
-## Create a 'self-managed' Kubernetes cluster on IBM Cloud provided infrastructure
-
-If you don't have a Kubernetes cluster for testing, you can follow the open-source [instructions](./cluster)
-to set up a basic cluster where the Kubernetes nodes run on IBM Cloud provided infrastructure.
-
-## Deploy PeerPod Webhook
-
-### Deploy cert-manager
-
-- Deploy cert-manager with:
-
- ```
- kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.9.1/cert-manager.yaml
- ```
-
-- Wait for the pods to all be in running state with:
-
- ```
- kubectl get pods -n cert-manager --watch
- ```
-
-### Deploy the peer-pods webhook
-
-- From within the root directory of the `cloud-api-adaptor` repository, deploy the [webhook](../webhook) with:
-
- ```
- kubectl apply -f ./webhook/hack/webhook-deploy.yaml
- ```
-
-- Wait for the pods to all be in running state with:
-
- ```
- kubectl get pods -n peer-pods-webhook-system --watch
- ```
-
-- Advertise the extended resource `kata.peerpods.io/vm.` by running the following commands:
-
- ```
- pushd webhook/hack/extended-resources
- ./setup.sh
- popd
- ```
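-
-To confirm that the extended resources were advertised, a quick check along these lines may help (a sketch):
-
-```bash
-# Expect kata.peerpods.io entries in each worker node's allocatable resources.
-kubectl get nodes -o jsonpath='{.items[*].status.allocatable}' | tr ',' '\n' | grep kata.peerpods.io
-```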
-
-## Deploy the Confidential-containers operator
-
-The `caa-provisioner-cli` simplifies deploying the operator and the cloud-api-adaptor resources onto any cluster. See the [test/tools/README.md](../test/tools/README.md) for full instructions. To create an ibmcloud-ready version, follow these steps:
-
-```bash
-# Starting from the cloud-api-adaptor root directory
-pushd test/tools
-make BUILTIN_CLOUD_PROVIDERS="ibmcloud" all
-popd
-```
-
-This will create `caa-provisioner-cli` in the `test/tools` directory. To use the tool with an existing self-managed cluster, you will need to set up a `.properties` file containing the relevant ibmcloud information so that your cluster can create and use peer pods. Use the following commands to generate the `.properties` file. If you are not using a self-managed cluster, please update the `terraform` commands with the appropriate values manually.
-
-```bash
-export IBMCLOUD_API_KEY= # your ibmcloud apikey
-export PODVM_IMAGE_ID= # the image id of the peerpod vm uploaded in the previous step
-export PODVM_INSTANCE_PROFILE= # instance profile name that runs the peerpod (bx2-2x8 or bz2-2x8 for example)
-export CAA_IMAGE_TAG= # cloud-api-adaptor image tag that supports this arch, see quay.io/confidential-containers/cloud-api-adaptor
-pushd ibmcloud/cluster
-
-cat <<EOF > ../../selfmanaged_cluster.properties
-IBMCLOUD_PROVIDER="ibmcloud"
-APIKEY="$IBMCLOUD_API_KEY"
-PODVM_IMAGE_ID="$PODVM_IMAGE_ID"
-INSTANCE_PROFILE_NAME="$PODVM_INSTANCE_PROFILE"
-CAA_IMAGE_TAG="$CAA_IMAGE_TAG"
-SSH_KEY_ID="$(terraform output --raw ssh_key_id)"
-EOF
-
-popd
-```
-
-This will create a `selfmanaged_cluster.properties` file in the cloud-api-adaptor root directory.
-
-The final step is to run the `caa-provisioner-cli` to install the operator.
-
-```bash
-export CLOUD_PROVIDER=ibmcloud
-# must be run from the directory containing the properties file
-export TEST_PROVISION_FILE="$(pwd)/selfmanaged_cluster.properties"
-# prevent the test from removing the cloud-api-adaptor resources from the cluster
-export TEST_TEARDOWN="no"
-pushd test/tools
-./caa-provisioner-cli -action=install
-popd
-```
-
-## End-2-End Test Framework
-
-To validate that a cluster has been set up properly, there is a suite of tests that validate peer pods across different providers.
-The implementation of these tests can be found in [test/e2e/common_suite_test.go](../test/e2e/common_suite_test.go).
-
-Assuming `CLOUD_PROVIDER` and `TEST_PROVISION_FILE` are still set in your current terminal, you can execute these tests
-from the cloud-api-adaptor root directory by running the following commands:
-
-```bash
-export KUBECONFIG=$(pwd)/ibmcloud/cluster/config
-make test-e2e
-```
-
-## Uninstall and clean up
-
-There are two options for cleaning up the environment once testing has finished, or if you want to re-install from a
-clean state:
-
-- If using a self-managed cluster, you can delete the whole cluster following the
-[Delete the cluster documentation](./cluster#delete-the-cluster) and then start again.
-- If you instead want to keep the cluster, but uninstall the Confidential Containers and peer pods
-feature, you can use the `caa-provisioner-cli` to remove the resources.
-
-```bash
-export CLOUD_PROVIDER=ibmcloud
-# must be run from the directory containing the properties file
-export TEST_PROVISION_FILE="$(pwd)/selfmanaged_cluster.properties"
-pushd test/tools
-./caa-provisioner-cli -action=uninstall
-popd
-```
diff --git a/content/en/docs/cloud-api-adaptor/libvirt/_index.md b/content/en/docs/cloud-api-adaptor/libvirt/_index.md
deleted file mode 100644
index 9b547ab..0000000
--- a/content/en/docs/cloud-api-adaptor/libvirt/_index.md
+++ /dev/null
@@ -1,283 +0,0 @@
----
-title: Libvirt
-description: Documentation for peerpods on Libvirt
----
-
-{{% alert title="Warning" color="warning" %}}
-TODO: This was copied with a few adaptations from here:
-It needs to be tested to verify that the instructions still work, and it needs a rework.
-{{% /alert %}}
-
-## Introduction
-
-This document contains instructions for using, developing and testing the cloud-api-adaptor with [libvirt](https://libvirt.org/).
-
-## Creating an end-to-end environment for testing and development
-
-In this section you will learn how to set up an environment on your local machine to run peer pods with
-the libvirt cloud API adaptor. Bear in mind that many different tools can be used to set up the environment;
-here we simply suggest the tools that seem to be used by most peer pods developers.
-
-### Requirements
-
-You must have a Linux/KVM system with libvirt installed and the following tools:
-
-- docker (or podman-docker)
-- [kubectl](https://kubernetes.io/docs/reference/kubectl/)
-- [kcli](https://kcli.readthedocs.io/en/latest/)
-
-It is assumed that you have a 'default' network and storage pool created in the libvirtd system instance (`qemu:///system`).
-However, if you have a different pool name, the scripts should be able to handle it properly.
-
-### Create the Kubernetes cluster
-
-Use the [`kcli_cluster.sh`](./kcli_cluster.sh) script to create a simple two-VM cluster (one control plane and one worker)
-with the kcli tool:
-
-```bash
-./kcli_cluster.sh create
-```
-
-With `kcli_cluster.sh` you can configure, among other parameters, the libvirt network and storage pools in which the cluster
-VMs will be created. Run `./kcli_cluster.sh -h` to see the help for further information.
-
-If everything goes well you will be able to see the cluster running after setting your Kubernetes config with:
-
-`export KUBECONFIG=$HOME/.kcli/clusters/peer-pods/auth/kubeconfig`
-
-For example:
-
-```console
-$ kcli list kube
-+-----------+---------+-----------+-----------------------------------------+
-| Cluster | Type | Plan | Vms |
-+-----------+---------+-----------+-----------------------------------------+
-| peer-pods | generic | peer-pods | peer-pods-ctlplane-0,peer-pods-worker-0 |
-+-----------+---------+-----------+-----------------------------------------+
-
-$ kubectl get nodes
-NAME STATUS ROLES AGE VERSION
-peer-pods-ctlplane-0 Ready control-plane,master 6m8s v1.25.3
-peer-pods-worker-0 Ready worker 2m47s v1.25.3
-```
-
-### Prepare the Pod VM volume
-
-In order to build the Pod VM without installing the build tools, you can use the Dockerfiles hosted in the [../podvm](../podvm) directory to run the entire process inside a container. Refer to [podvm/README.md](../podvm/README.md) for further details. Alternatively, you can consume pre-built podvm images as explained [here](../docs/consuming-prebuilt-podvm-images.md).
-
-Next you will need to create a volume on libvirt's system storage and upload the image content. That volume is used by
-the cloud-api-adaptor program to instantiate a new Pod VM. Run the following commands:
-
-```bash
-export IMAGE=
-
-virsh -c qemu:///system vol-create-as --pool default --name podvm-base.qcow2 --capacity 20G --allocation 2G --prealloc-metadata --format qcow2
-virsh -c qemu:///system vol-upload --vol podvm-base.qcow2 $IMAGE --pool default --sparse
-```
-
-You should see that the `podvm-base.qcow2` volume was properly created:
-
-```console
-$ virsh -c qemu:///system vol-info --pool default podvm-base.qcow2
-Name: podvm-base.qcow2
-Type: file
-Capacity: 6.00 GiB
-Allocation: 631.52 MiB
-```
-
-### Install and configure Confidential Containers and cloud-api-adaptor in the cluster
-
-The easiest way to install the cloud-api-adaptor along with Confidential Containers in the cluster is through the
-Kubernetes operator available in the `install` directory of this repository.
-
-Start by creating a public/private RSA key pair that will be used by the cloud-api-adaptor program, running on the
-cluster workers, to connect with your local libvirtd instance without password authentication. Assuming you are in the
-`libvirt` directory, run:
-
-```bash
-cd ../install/overlays/libvirt
-ssh-keygen -f ./id_rsa -N ""
-cat id_rsa.pub >> ~/.ssh/authorized_keys
-```
-
-**Note**: ensure that `~/.ssh/authorized_keys` has the right permissions (read/write for the user only), otherwise
-authentication can silently fail. You can run `chmod 600 ~/.ssh/authorized_keys` to set the right permissions.
-
-You will need to figure out the IP address of your local host (e.g. 192.168.122.1). Then try a remote libvirt connection
-to check that the key setup is fine, for example:
-
-```console
-$ virsh -c "qemu+ssh://$USER@192.168.122.1/system?keyfile=$(pwd)/id_rsa" nodeinfo
-CPU model: x86_64
-CPU(s): 12
-CPU frequency: 1084 MHz
-CPU socket(s): 1
-Core(s) per socket: 6
-Thread(s) per core: 2
-NUMA cell(s): 1
-Memory size: 32600636 KiB
-```
-
-Now you should finally install the Kubernetes operator in the cluster with the help of the [`install_operator.sh`](./install_operator.sh) script. Ensure that you have your IP address exported in the environment, as shown below, then run the install script:
-
-```bash
-cd ../../../libvirt/
-export LIBVIRT_IP="192.168.122.1"
-export SSH_KEY_FILE="id_rsa"
-./install_operator.sh
-```
-
-If everything goes well you will be able to see the operator's controller manager and cloud-api-adaptor Pods running:
-
-```console
-$ kubectl get pods -n confidential-containers-system
-NAME READY STATUS RESTARTS AGE
-cc-operator-controller-manager-5df7584679-5dbmr 2/2 Running 0 3m58s
-cloud-api-adaptor-daemonset-vgj2s 1/1 Running 0 3m57s
-
-$ kubectl logs pod/cloud-api-adaptor-daemonset-vgj2s -n confidential-containers-system
-+ exec cloud-api-adaptor libvirt -uri 'qemu+ssh://wmoschet@192.168.122.1/system?no_verify=1' -data-dir /opt/data-dir -pods-dir /run/peerpod/pods -network-name default -pool-name default -socket /run/peerpod/hypervisor.sock
-2022/11/09 18:18:00 [helper/hypervisor] hypervisor config {/run/peerpod/hypervisor.sock registry.k8s.io/pause:3.7 /run/peerpod/pods libvirt}
-2022/11/09 18:18:00 [helper/hypervisor] cloud config {qemu+ssh://wmoschet@192.168.122.1/system?no_verify=1 default default /opt/data-dir}
-2022/11/09 18:18:00 [helper/hypervisor] service config &{qemu+ssh://wmoschet@192.168.122.1/system?no_verify=1 default default /opt/data-dir}
-```
-
-You will also notice that Kubernetes [*runtimeClass*](https://kubernetes.io/docs/concepts/containers/runtime-class/) resources
-were created on the cluster, as for example:
-
-```console
-$ kubectl get runtimeclass
-NAME HANDLER AGE
-kata-remote kata-remote 7m18s
-```
-
-### Create a sample peer-pods pod
-
-At this point everything should be in place to create a sample Pod. Let's first list the running VMs so that we can later
-verify that the Pod VM is really running. Notice below that only the cluster node VMs are up:
-
-```console
-$ virsh -c qemu:///system list
- Id Name State
-------------------------------------
- 3 peer-pods-ctlplane-0 running
- 4 peer-pods-worker-0 running
-```
-
-Create the *sample_busybox.yaml* file with the following content:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- labels:
- run: busybox
- name: busybox
-spec:
- containers:
- - image: quay.io/prometheus/busybox
- name: busybox
- resources: {}
- dnsPolicy: ClusterFirst
- restartPolicy: Never
- runtimeClassName: kata-remote
-```
-
-And create the Pod:
-
-```console
-$ kubectl apply -f sample_busybox.yaml
-pod/busybox created
-
-$ kubectl wait --for=condition=Ready pod/busybox
-pod/busybox condition met
-```
-
-Check that the Pod VM is up and running. See in the following listing that *podvm-busybox-88a70031* was
-created:
-
-```console
-$ virsh -c qemu:///system list
- Id Name State
-----------------------------------------
- 5 peer-pods-ctlplane-0 running
- 6 peer-pods-worker-0 running
- 7 podvm-busybox-88a70031 running
-```
-
-You should also check that the container is running fine. For example, compare the kernel versions to confirm they differ, as shown below:
-
-```console
-$ uname -r
-5.17.12-100.fc34.x86_64
-
-$ kubectl exec pod/busybox -- uname -r
-5.4.0-131-generic
-```
-
-The peer-pods pod can be deleted like any regular pod. In the listing below the pod was removed, and you can see that the
-Pod VM no longer exists in libvirt:
-
-```console
-$ kubectl delete -f sample_busybox.yaml
-pod "busybox" deleted
-
-$ virsh -c qemu:///system list
- Id Name State
-------------------------------------
- 5 peer-pods-ctlplane-0 running
- 6 peer-pods-worker-0 running
-```
-
-### Delete Confidential Containers and cloud-api-adaptor from the cluster
-
-You might want to reinstall Confidential Containers and the cloud-api-adaptor into your cluster. There are two options:
-
-1. Delete the Kubernetes cluster entirely and start over. In this case you should just run `./kcli_cluster.sh delete` to
- wipe out the cluster created with kcli
-1. Uninstall the operator resources then install them again with the `install_operator.sh` script
-
-Let's show how to delete the operator resources. In the listing below you can see the actual pods running in
-the *confidential-containers-system* namespace:
-
-```console
-$ kubectl get pods -n confidential-containers-system
-NAME READY STATUS RESTARTS AGE
-cc-operator-controller-manager-fbb5dcf9d-h42nn 2/2 Running 0 20h
-cc-operator-daemon-install-fkkzz 1/1 Running 0 20h
-cloud-api-adaptor-daemonset-libvirt-lxj7v 1/1 Running 0 20h
-```
-
-In order to remove the *\*-daemon-install-\** and *\*-cloud-api-adaptor-daemonset-\** pods, run the following command from the
-root directory:
-
-```bash
-CLOUD_PROVIDER=libvirt make delete
-```
-
-It can take a few minutes for those pods to be deleted; afterwards you will notice that only the *controller-manager* is
-still up. Below is shown how to delete that pod and its associated resources as well:
-
-```console
-$ kubectl get pods -n confidential-containers-system
-NAME READY STATUS RESTARTS AGE
-cc-operator-controller-manager-fbb5dcf9d-h42nn 2/2 Running 0 20h
-
-$ kubectl delete -f install/yamls/deploy.yaml
-namespace "confidential-containers-system" deleted
-serviceaccount "cc-operator-controller-manager" deleted
-role.rbac.authorization.k8s.io "cc-operator-leader-election-role" deleted
-clusterrole.rbac.authorization.k8s.io "cc-operator-manager-role" deleted
-clusterrole.rbac.authorization.k8s.io "cc-operator-metrics-reader" deleted
-clusterrole.rbac.authorization.k8s.io "cc-operator-proxy-role" deleted
-rolebinding.rbac.authorization.k8s.io "cc-operator-leader-election-rolebinding" deleted
-clusterrolebinding.rbac.authorization.k8s.io "cc-operator-manager-rolebinding" deleted
-clusterrolebinding.rbac.authorization.k8s.io "cc-operator-proxy-rolebinding" deleted
-configmap "cc-operator-manager-config" deleted
-service "cc-operator-controller-manager-metrics-service" deleted
-deployment.apps "cc-operator-controller-manager" deleted
-customresourcedefinition.apiextensions.k8s.io "ccruntimes.confidentialcontainers.org" deleted
-
-$ kubectl get pods -n confidential-containers-system
-No resources found in confidential-containers-system namespace.
-```
diff --git a/content/en/docs/cloud-api-adaptor/troubleshooting.md b/content/en/docs/cloud-api-adaptor/troubleshooting.md
deleted file mode 100644
index 4ba417c..0000000
--- a/content/en/docs/cloud-api-adaptor/troubleshooting.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-title: Cloud API Adaptor Troubleshooting
-description: Generic troubleshooting steps after installation of Cloud API Adaptor
-weight: 1
-categories:
-- docs
-tags:
-- docs
-- caa
----
-
-## Application pod created but it stays in `ContainerCreating` state
-
-Let's start by looking at the pods deployed in the `confidential-containers-system` namespace:
-
-```console
-$ kubectl get pods -n confidential-containers-system -o wide
-NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
-cc-operator-controller-manager-76755f9c96-pjj92 2/2 Running 0 1h 10.244.0.14 aks-nodepool1-22620003-vmss000000
-cc-operator-daemon-install-79c2b 1/1 Running 0 1h 10.244.0.16 aks-nodepool1-22620003-vmss000000
-cc-operator-pre-install-daemon-gsggj 1/1 Running 0 1h 10.244.0.15 aks-nodepool1-22620003-vmss000000
-cloud-api-adaptor-daemonset-2pjbb 1/1 Running 0 1h 10.224.0.4 aks-nodepool1-22620003-vmss000000
-```
-
-It is possible that the `cloud-api-adaptor-daemonset` is not deployed correctly. To see what is wrong with it, run the following command and look at the events for insights:
-
-```console
-$ kubectl -n confidential-containers-system describe ds cloud-api-adaptor-daemonset
-Name: cloud-api-adaptor-daemonset
-Selector: app=cloud-api-adaptor
-Node-Selector: node-role.kubernetes.io/worker=
-...
-Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal SuccessfulCreate 8m13s daemonset-controller Created pod: cloud-api-adaptor-daemonset-2pjbb
-```
-
-But if the `cloud-api-adaptor-daemonset` is up and in the `Running` state, as shown above, then look at the pod's logs for more insights:
-
-```bash
-kubectl -n confidential-containers-system logs daemonset/cloud-api-adaptor-daemonset
-```
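-
-Peer pod failures often surface as cloud API errors in these logs, so filtering can speed things up (a sketch):
-
-```bash
-kubectl -n confidential-containers-system logs daemonset/cloud-api-adaptor-daemonset | grep -iE 'error|failed'
-```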
-
-> **Note**: This is a single node cluster, so there is only one pod named `cloud-api-adaptor-daemonset-*`. But if you are running on a multi-node cluster, then find the node where your workload fails to come up and check only the logs of the corresponding CAA pod.
-
-If the problem hints that something is wrong with the configuration then look at the configmaps or secrets needed to run CAA:
-
-```console
-$ kubectl -n confidential-containers-system get cm
-NAME DATA AGE
-cc-operator-manager-config 1 1h
-kube-root-ca.crt 1 1h
-peer-pods-cm 7 1h
-```
-
-```console
-$ kubectl -n confidential-containers-system get secret
-NAME TYPE DATA AGE
-peer-pods-secret Opaque 0 1h
-ssh-key-secret Opaque 1 1h
-```
diff --git a/content/en/docs/demos/_index.md b/content/en/docs/demos/_index.md
deleted file mode 100644
index 267f016..0000000
--- a/content/en/docs/demos/_index.md
+++ /dev/null
@@ -1,5 +0,0 @@
----
-title: Demos
-description: Demos you can reproduce to try Confidential Containers
-weight: 2
----
diff --git a/content/en/docs/demos/ccv0-operator-demo.md b/content/en/docs/demos/ccv0-operator-demo.md
deleted file mode 100644
index 0f58be3..0000000
--- a/content/en/docs/demos/ccv0-operator-demo.md
+++ /dev/null
@@ -1,115 +0,0 @@
----
-title: CCv0 Operator Demo
-date: 2023-01-05
-description: >
- The demo shows CCv0 Kata runtime installation and configuration using the coco-operator.
-categories:
-- demo
-tags:
-- coco-operator
-- demo
----
-
-## Demo Video
-
-[Watch the demo in youtube](https://www.youtube.com/watch?v=4cM3IhfnJLQ)
-
-## Demo Environment setup
-
-### Kubernetes cluster
-
-Set up a two-node Kubernetes cluster using Ubuntu 20.04. You can use your preferred Kubernetes setup tool. Here is an example using [kcli](https://kcli.readthedocs.io/en/latest/).
-
-Download the Ubuntu 20.04 image, if not already present, by running the following command:
-
-```bash
-kcli download image ubuntu2004
-```
-
-Install the cluster:
-
-```bash
-kcli create kube generic -P image=ubuntu2004 -P workers=1 testk8s
-```
-
-### Replace containerd
-
-Replace containerd on the worker node by building a new containerd from the following branch: [https://github.com/confidential-containers/containerd/tree/CC-main](https://github.com/confidential-containers/containerd/tree/CC-main) ([build instructions](https://github.com/confidential-containers/containerd/blob/CC-main/BUILDING.md))
-
-Modify systemd configuration to use the new binary and restart `containerd` and `kubelet`.
-
-### Verify if the cluster nodes are all up
-
-```bash
-kubectl get nodes
-```
-
-Sample output from the demo environment:
-
-```console
-$ kubectl get nodes
-NAME STATUS ROLES AGE VERSION
-cck8s-demo-master-0 Ready control-plane,master 25d v1.22.3
-cck8s-demo-worker-0 Ready worker 25d v1.22.3
-```
-
-Make sure at least one Kubernetes node in the cluster has the label `node.kubernetes.io/worker=`.
-
-```bash
-kubectl label node $NODENAME node.kubernetes.io/worker=
-```
-
-## Operator Setup
-
-```bash
-RELEASE_VERSION="main"
-kubectl apply -k "github.com/confidential-containers/operator/config/release?ref=${RELEASE_VERSION}"
-```
-
-The operator installs everything under the `confidential-containers-system` namespace.
-
-Verify that the operator is running by running the following command:
-
-```bash
-kubectl get pods -n confidential-containers-system
-```
-
-Sample output from the demo environment:
-
-```console
-$ kubectl get pods -n confidential-containers-system
-NAME READY STATUS RESTARTS AGE
-cc-operator-controller-manager-7f8d6dd988-t9zdm 2/2 Running 0 13s
-```
-
-## Confidential Containers Runtime setup
-
-Creating a `CCruntime` object sets up the container runtime. The default payload image sets up the CCv0 demo image of the kata-containers runtime.
-
-```bash
-RELEASE_VERSION="main"
-kubectl apply -k "github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=${RELEASE_VERSION}"
-```
-
-This will create an installation daemonset targeting the worker nodes. You can verify the status under the `confidential-containers-system` namespace.
-
-```console
-$ kubectl get pods -n confidential-containers-system
-NAME READY STATUS RESTARTS AGE
-cc-operator-controller-manager-7f8d6dd988-t9zdm 2/2 Running 0 82s
-cc-operator-daemon-install-p9ntc 1/1 Running 0 45s
-```
-
-On successful installation, you'll see the following `runtimeClasses` being set up:
-
-```console
-$ kubectl get runtimeclasses.node.k8s.io
-NAME HANDLER AGE
-kata kata 92s
-kata-cc kata-cc 92s
-kata-qemu kata-qemu 92s
-```
-
-The `kata-cc` runtimeclass uses CCv0-specific configurations.
-
-Now you can deploy pods targeting the specific runtimeclasses. The [SSH demo](/docs/demos/ssh-demo) can be used as a compatible workload.
diff --git a/content/en/docs/demos/ssh-demo/includes/Dockerfile b/content/en/docs/demos/ssh-demo/includes/Dockerfile
deleted file mode 100644
index bb2fce5..0000000
--- a/content/en/docs/demos/ssh-demo/includes/Dockerfile
+++ /dev/null
@@ -1,9 +0,0 @@
-FROM alpine:3.14
-RUN apk update && apk upgrade && apk add openssh-server
-RUN ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -P ""
-# A password needs to be set for login to work. An empty password is
-# unproblematic as password-based login to root is not allowed.
-RUN passwd -d root
-# Generate with `ssh-keygen -t ed25519 -f ccv0-ssh -P "" -C ""`
-COPY ccv0-ssh.pub /root/.ssh/authorized_keys
-ENTRYPOINT /usr/sbin/sshd -D
diff --git a/content/en/docs/demos/ssh-demo/includes/aa-offline_fs_kbc-keys.json b/content/en/docs/demos/ssh-demo/includes/aa-offline_fs_kbc-keys.json
deleted file mode 100644
index 7c7e90b..0000000
--- a/content/en/docs/demos/ssh-demo/includes/aa-offline_fs_kbc-keys.json
+++ /dev/null
@@ -1,3 +0,0 @@
-{
- "default/key/ssh-demo": "HUlOu8NWz8si11OZUzUJMnjiq/iZyHBJZMSD3BaqgMc="
-}
diff --git a/content/en/docs/demos/ssh-demo/includes/ccv0-ssh b/content/en/docs/demos/ssh-demo/includes/ccv0-ssh
deleted file mode 100644
index 0657b74..0000000
--- a/content/en/docs/demos/ssh-demo/includes/ccv0-ssh
+++ /dev/null
@@ -1,7 +0,0 @@
------BEGIN OPENSSH PRIVATE KEY-----
-b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
-QyNTUxOQAAACAfiGV2X4o+6AgjVBaY/ZR2UvZp84dVYF5bpNZGMLylQwAAAIhawtHJWsLR
-yQAAAAtzc2gtZWQyNTUxOQAAACAfiGV2X4o+6AgjVBaY/ZR2UvZp84dVYF5bpNZGMLylQw
-AAAEAwWYIBvBxQZgk0irFku3Lj1Xbfb8dHtVM/kkz/Uz/l2h+IZXZfij7oCCNUFpj9lHZS
-9mnzh1VgXluk1kYwvKVDAAAAAAECAwQF
------END OPENSSH PRIVATE KEY-----
diff --git a/content/en/docs/demos/ssh-demo/includes/cri-container-config.yaml b/content/en/docs/demos/ssh-demo/includes/cri-container-config.yaml
deleted file mode 100644
index b692af2..0000000
--- a/content/en/docs/demos/ssh-demo/includes/cri-container-config.yaml
+++ /dev/null
@@ -1,4 +0,0 @@
-metadata:
- name: ccv0-ssh
-image:
- image: docker.io/katadocker/ccv0-ssh
diff --git a/content/en/docs/demos/ssh-demo/includes/cri-sandbox-config.yaml b/content/en/docs/demos/ssh-demo/includes/cri-sandbox-config.yaml
deleted file mode 100644
index 8d6ba53..0000000
--- a/content/en/docs/demos/ssh-demo/includes/cri-sandbox-config.yaml
+++ /dev/null
@@ -1,6 +0,0 @@
-metadata:
- name: ccv0-ssh-pod
-hostname: ccv0
-port_mappings:
- - container_port: 22
- host_port: 2222
diff --git a/content/en/docs/demos/ssh-demo/includes/k8s-cc-ssh.yaml b/content/en/docs/demos/ssh-demo/includes/k8s-cc-ssh.yaml
deleted file mode 100644
index 1f75fe8..0000000
--- a/content/en/docs/demos/ssh-demo/includes/k8s-cc-ssh.yaml
+++ /dev/null
@@ -1,28 +0,0 @@
-kind: Service
-apiVersion: v1
-metadata:
- name: ccv0-ssh
-spec:
- selector:
- app: ccv0-ssh
- ports:
- - port: 22
----
-kind: Deployment
-apiVersion: apps/v1
-metadata:
- name: ccv0-ssh
-spec:
- selector:
- matchLabels:
- app: ccv0-ssh
- template:
- metadata:
- labels:
- app: ccv0-ssh
- spec:
- runtimeClassName: kata
- containers:
- - name: ccv0-ssh
- image: ghcr.io/confidential-containers/test-container:multi-arch-encrypted
- imagePullPolicy: Always
diff --git a/content/en/docs/demos/ssh-demo/index.md b/content/en/docs/demos/ssh-demo/index.md
deleted file mode 100644
index cf85371..0000000
--- a/content/en/docs/demos/ssh-demo/index.md
+++ /dev/null
@@ -1,71 +0,0 @@
----
-date: 2022-11-06
-title: SSH Demo
-linkTitle: SSH Demo
-description: >
- SSH Demo to showcase encrypted memory provided by the TEE
-categories:
-- demo
-tags:
-- demo
----
-
-To demonstrate Confidential Containers capabilities, we run a pod with SSH public key authentication.
-
-Compared to executing and logging into a shell on a pod, an SSH connection is cryptographically secured and requires a private key; it cannot be established by unauthorized parties, such as someone who controls the node. The container image contains the SSH host key, which could be used to impersonate the host we connect to. Because this container image is encrypted, because the key to decrypt it is only provided in measurable ways (e.g. attestation or encrypted initrd), and because the pod/guest memory is protected, even someone who controls the node cannot steal this key.
-
-## Using a pre-provided container image
-
-If you would rather build the image with your own keys, skip to [Building the container image](#building-the-container-image). The [operator](/docs/demos/ccv0-operator-demo) can be used to set up a compatible runtime.
-
-A demo image is provided at [docker.io/katadocker/ccv0-ssh](https://hub.docker.com/r/katadocker/ccv0-ssh).
-It is encrypted with [Attestation Agent](https://github.com/confidential-containers/guest-components/tree/main/attestation-agent)'s [offline file system key broker](https://github.com/confidential-containers/guest-components/tree/main/attestation-agent/kbc/src/offline_fs_kbc) and [`aa-offline_fs_kbc-keys.json`](./includes/aa-offline_fs_kbc-keys.json) as its key file.
-The private key for establishing an SSH connection to this container is given in [`ccv0-ssh`](./includes/ccv0-ssh).
-To use it with SSH, its permissions should be adjusted: `chmod 600 ccv0-ssh`.
-The host key fingerprint is `SHA256:wK7uOpqpYQczcgV00fGCh+X97sJL3f6G1Ku4rvlwtR0`.
-
-All keys shown here are for demonstration purposes.
-To achieve actually confidential containers, use a hardware trusted execution environment and **do not** reuse these keys.
-
-Continue at [Connecting to the guest](#connecting-to-the-guest).
-
-## Building the container image
-
-The built image should be encrypted.
-To receive a decryption key at run time, the Confidential Containers project utilizes the [Attestation Agent](https://github.com/confidential-containers/guest-components/tree/main/attestation-agent).
-
-### Generating SSH keys
-
-```bash
-ssh-keygen -t ed25519 -f ccv0-ssh -P "" -C ""
-```
-
-generates an SSH key `ccv0-ssh` and the corresponding public key `ccv0-ssh.pub`.
-
-### Building the image
-
-The provided [`Dockerfile`](./includes/Dockerfile) expects `ccv0-ssh.pub` to exist.
-Using Docker, you can build with
-
-```bash
-docker build --progress=plain -t ccv0-ssh .
-```
-
-Alternatively, Buildah can be used (`buildah build` or formerly `buildah bud`).
-The SSH host key fingerprint is displayed during the build.
-
-## Connecting to the guest
-
-A [Kubernetes YAML file](./includes/k8s-cc-ssh.yaml) specifying the [`kata`](https://github.com/kata-containers/kata-containers) runtime is included.
-If you use a [self-built image](#building-the-container-image), you should replace the image specification with the image you built.
-The default tag points to an `amd64` image; an `s390x` tag is also available.
-With common CNI setups, with the service running, you can connect via SSH from the same host with:
-
-```bash
-ssh -i ccv0-ssh root@$(kubectl get service ccv0-ssh -o jsonpath="{.spec.clusterIP}")
-```
-
-You will be prompted about whether the host key fingerprint is correct.
-This fingerprint should match the one specified above or the one displayed during the Docker build.
-
-`crictl`-compatible [sandbox](./includes/cri-sandbox-config.yaml) and [container](./includes/cri-container-config.yaml) configurations are also included, which forward the pod SSH port (22) to port 2222 on the host (use the `-p` flag of SSH, as sketched below).
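-
-As an illustrative sketch (assuming `crictl` is set up against the CoCo-enabled runtime, the commands are run from the directory containing the two configuration files, and you connect from the host itself, hence `localhost`):
-
-```bash
-# Create the sandbox and container from the included configurations
-POD_ID=$(sudo crictl runp cri-sandbox-config.yaml)
-CONTAINER_ID=$(sudo crictl create "$POD_ID" cri-container-config.yaml cri-sandbox-config.yaml)
-sudo crictl start "$CONTAINER_ID"
-
-# The pod's SSH port 22 is forwarded to 2222 on the host
-ssh -i ccv0-ssh -p 2222 root@localhost
-```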
diff --git a/content/en/docs/features/_index.md b/content/en/docs/features/_index.md
new file mode 100644
index 0000000..91e41e8
--- /dev/null
+++ b/content/en/docs/features/_index.md
@@ -0,0 +1,11 @@
+---
+title: Features
+description: Primitives provided by Confidential Containers
+weight: 40
+---
+
+In addition to running pods inside of enclaves, Confidential Containers
+provides several other features that can be used to protect workloads and data.
+Securing complex workloads often requires using some of these features.
+
+Most features depend on attestation, which is described in the next section.
diff --git a/content/en/docs/use-cases/encrypted-images.md b/content/en/docs/features/encrypted-images.md
similarity index 98%
rename from content/en/docs/use-cases/encrypted-images.md
rename to content/en/docs/features/encrypted-images.md
index 5aadcdd..53a1258 100644
--- a/content/en/docs/use-cases/encrypted-images.md
+++ b/content/en/docs/features/encrypted-images.md
@@ -1,13 +1,11 @@
---
-title: Encrypted images
+title: Encrypted Images
date: 2023-01-24
description: Procedures to encrypt and consume OCI images in a TEE
categories:
-- use case
+- feature
tags:
-- coco-keyprovider
- images
-- kbs
---
# Context
diff --git a/content/en/docs/features/protected-storage.md b/content/en/docs/features/protected-storage.md
new file mode 100644
index 0000000..848dc12
--- /dev/null
+++ b/content/en/docs/features/protected-storage.md
@@ -0,0 +1,11 @@
+---
+title: Protected Storage
+date: 2023-01-24
+description: Add protected volumes to a pod
+categories:
+- feature
+tags:
+- storage
+---
+
+TODO
diff --git a/content/en/docs/features/sealed-secrets.md b/content/en/docs/features/sealed-secrets.md
new file mode 100644
index 0000000..f2aa917
--- /dev/null
+++ b/content/en/docs/features/sealed-secrets.md
@@ -0,0 +1,11 @@
+---
+title: Sealed Secrets
+date: 2023-01-24
+description: Generate and deploy protected Kubernetes secrets
+categories:
+- feature
+tags:
+- secrets
+---
+
+TODO
diff --git a/content/en/docs/features/signed-images.md b/content/en/docs/features/signed-images.md
new file mode 100644
index 0000000..cf4da3c
--- /dev/null
+++ b/content/en/docs/features/signed-images.md
@@ -0,0 +1,11 @@
+---
+title: Signed Images
+date: 2023-01-24
+description: Procedures to generate and deploy signed OCI images with CoCo
+categories:
+- feature
+tags:
+- images
+---
+
+TODO
diff --git a/content/en/docs/getting-started/_index.md b/content/en/docs/getting-started/_index.md
new file mode 100644
index 0000000..f1402da
--- /dev/null
+++ b/content/en/docs/getting-started/_index.md
@@ -0,0 +1,7 @@
+---
+title: Getting Started
+description: High level overview of Confidential Containers
+weight: 20
+---
+
+TODO
diff --git a/content/en/docs/guest-components/_index.md b/content/en/docs/guest-components/_index.md
deleted file mode 100644
index 0071822..0000000
--- a/content/en/docs/guest-components/_index.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Guest Components
-description: Confidential Container Tools and Components
-weight: 51
-categories:
-- docs
-tags:
-- docs
-- guest-components
----
-
-{{% alert title="Warning" color="warning" %}}
-TODO: This was copied with few adaptations from the upstream repository.
-The instructions need to be tested to verify they still work, and the page needs a rework.
-{{% /alert %}}
-
-[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Fconfidential-containers%2Fimage-rs.svg?type=shield)](https://app.fossa.com/projects/git%2Bgithub.com%2Fconfidential-containers%2Fimage-rs?ref=badge_shield)
-
-This repository includes tools and components for confidential container images.
-
-- [Attestation Agent](attestation-agent): An agent for facilitating attestation protocols. It can be built as a library to run in a process-based enclave, or as a process that runs inside a confidential VM.
-
-- [image-rs](image-rs): Rust implementation of the container image management library.
-
-- [ocicrypt-rs](ocicrypt-rs): Rust implementation of the OCI image encryption library.
-
-- [api-server-rest](api-server-rest): CoCo RESTful API server.
-
-## License
-
-[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Fconfidential-containers%2Fimage-rs.svg?type=large)](https://app.fossa.com/projects/git%2Bgithub.com%2Fconfidential-containers%2Fimage-rs?ref=badge_large)
diff --git a/content/en/docs/guest-components/api-server-rest/_index.md b/content/en/docs/guest-components/api-server-rest/_index.md
deleted file mode 100644
index 5a93a4d..0000000
--- a/content/en/docs/guest-components/api-server-rest/_index.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: API Server Rest
-description: Documentation for CoCo Restful API Server
-categories:
-- docs
-tags:
-- docs
-- guest-components
----
-
-{{% alert title="Warning" color="warning" %}}
-TODO: This was copied with few adaptations from the upstream repository.
-The instructions need to be tested to verify they still work, and the page needs a rework.
-{{% /alert %}}
-
-CoCo guest components use lightweight ttRPC for internal communication to reduce memory footprint and dependencies. However, containers also need many of these internal services, such as `get_resource`, `get_evidence`, and `get_token`, so we expose them through a RESTful API; CoCo containers can then easily access these APIs with an HTTP client. Here are some examples; for detailed information, please refer to the [REST API](./openapi/api.json).
-
-```console
-$ ./api-server-rest --features=all
-Starting API server on 127.0.0.1:8006
-API Server listening on http://127.0.0.1:8006
-```
-
-```console
-$ curl http://127.0.0.1:8006/cdh/resource/default/key/1
-12345678901234567890123456xxxx
-```
-
-```console
-$ curl http://127.0.0.1:8006/aa/evidence\?runtime_data\=xxxx
-{"svn":"1","report_data":"eHh4eA=="}
-```
-
-```console
-$ curl http://127.0.0.1:8006/aa/token\?token_type\=kbs
-{"token":"eyJhbGciOiJFi...","tee_keypair":"-----BEGIN... "}
-```
diff --git a/content/en/docs/guest-components/attestation-agent/_index.md b/content/en/docs/guest-components/attestation-agent/_index.md
deleted file mode 100644
index c2b8428..0000000
--- a/content/en/docs/guest-components/attestation-agent/_index.md
+++ /dev/null
@@ -1,152 +0,0 @@
----
-title: Attestation Agent
-description: Documentation for Attestation Agent
-categories:
-- docs
-tags:
-- docs
-- guest-components
----
-
-{{% alert title="Warning" color="warning" %}}
-TODO: This was copied with few adaptations from the upstream repository.
-The instructions need to be tested to verify they still work, and the page needs a rework.
-{{% /alert %}}
-
-Attestation Agent (AA for short) is a set of services for the attestation procedure
-in Confidential Containers. It provides the service APIs that make requests to the
-relying party (the Key Broker Service) in Confidential Containers, performing
-attestation and establishing the connection between a Key Broker Client (KBC)
-and the corresponding KBS, so as to obtain trusted services or resources from the KBS.
-
-Current consumers of AA include:
-
-- [ocicrypt-rs](../ocicrypt-rs)
-- [image-rs](../image-rs)
-
-## Components
-
-The main body of AA is a Rust library crate, which contains the KBC modules used to communicate
-with various KBSes. In addition, this project also provides a gRPC service application,
-which allows callers to invoke the services provided by AA through gRPC.
-
-## Library crate
-
-Import AA into your project's `Cargo.toml` with the specific KBC(s) you need:
-
-```toml
-attestation-agent = { git = "https://github.com/confidential-containers/guest-components", features = ["sample_kbc"] }
-```
-
-**Note**: When the version is stable, we will release AA on .
-
-## gRPC Application
-
-Here are the steps to build and run the gRPC application of AA:
-
-### Build
-
-Build and install with default KBC modules:
-
-```shell
-git clone https://github.com/confidential-containers/guest-components
-cd guest-components/attestation-agent
-make && make install
-```
-
-or explicitly specify the KBC modules to include. Taking `sample_kbc` as an example:
-
-```shell
-make KBC=sample_kbc
-```
-
-#### Musl
-
-To build and install with musl, just run:
-
-```shell
-make LIBC=musl && make install
-```
-
-#### Openssl support
-
-To build and install with OpenSSL support (which is helpful on specific machines like `s390x`), run:
-
-```shell
-make OPENSSL=1 && make install
-```
-
-### Run
-
-For help information, just run:
-
-```shell
-attestation-agent --help
-```
-
-Start AA and specify the endpoint of AA's gRPC service:
-
-```shell
-attestation-agent --keyprovider_sock 127.0.0.1:50000 --getresource_sock 127.0.0.1:50001
-```
-
-Or start AA with the default keyprovider address (127.0.0.1:50000) and the default getresource address (127.0.0.1:50001):
-
-```shell
-attestation-agent
-```
-
-If you want to see the runtime log, run:
-
-```shell
-RUST_LOG=attestation_agent attestation-agent --keyprovider_sock 127.0.0.1:50000 --getresource_sock 127.0.0.1:50001
-```
-
-### ttRPC
-
-To build and install ttRPC Attestation Agent, just run:
-
-```shell
-make ttrpc=true && make install
-```
-
-ttRPC AA currently only supports Unix sockets, for example:
-
-```shell
-attestation-agent --keyprovider_sock unix:///tmp/keyprovider.sock --getresource_sock unix:///tmp/getresource.sock
-```
-
-## Supported KBC modules
-
-AA provides a flexible KBC module mechanism to support the different KBS protocols used for communication between a KBC and a KBS. If the KBC modules currently supported by AA cannot meet your requirements (e.g., you need to use a new KBS protocol), you can write a new KBC module complying with the KBC development [GUIDE](docs/kbc_module_development_guide.md); a hypothetical sketch of the shape of such a module follows. Contributions of new KBC modules to this project are welcome!
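-
-As a purely illustrative sketch (the trait and method names here are assumptions for illustration; the authoritative interface is defined in the development guide above), a KBC module is essentially an async client that can decrypt image payloads and fetch resources from its KBS:
-
-```rust
-use anyhow::Result;
-use async_trait::async_trait;
-
-// Hypothetical shape of a KBC module; names and signatures are
-// illustrative only and do not mirror the actual AA codebase.
-#[async_trait]
-pub trait KbcInterface {
-    /// Decrypt an image payload using the key referenced by the layer annotation.
-    async fn decrypt_payload(&mut self, annotation: &str) -> Result<Vec<u8>>;
-
-    /// Fetch a resource (e.g. a policy or credential) from the KBS.
-    async fn get_resource(&mut self, description: &str) -> Result<Vec<u8>>;
-}
-```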
-
-List of supported KBC modules:
-
-| KBC module name | README | KBS protocol | Maintainer |
-|-----------------|-------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------|---------------------------|
-| sample_kbc | Null | Null | Attestation Agent Authors |
-| offline_fs_kbc | [Offline file system KBC](kbc/src/offline_fs_kbc/README.md) | Null | IBM |
-| eaa_kbc | [EAA KBC](kbc/src/eaa_kbc/README.md) | EAA protocol | Alibaba Cloud |
-| offline_sev_kbc | [Offline SEV KBC](kbc/src/offline_sev_kbc/README.md) | Null | IBM |
-| online_sev_kbc | [Online SEV KBC](kbc/src/online_sev_kbc/README.md) | simple-kbs | IBM |
-| cc_kbc | [CC KBC](kbc/src/cc_kbc/README.md) | [CoCo KBS protocol](https://github.com/confidential-containers/kbs/blob/main/docs/kbs_attestation_protocol.md) | CoCo Community |
-
-### CC KBC
-
-CC KBC supports several kinds of hardware TEE attesters, currently:
-
-| Attester name | Info |
-| ------------------- | -------------------------- |
-| tdx-attester | Intel TDX |
-| sgx-attester | Intel SGX DCAP |
-| snp-attester | AMD SEV-SNP |
-| az-snp-vtpm-attester| Azure SEV-SNP CVM |
-
-To build CC KBC with all available attesters and install it, use
-
-```shell
-make KBC=cc_kbc && make install
-```
-
-## Tools
-
-- [Sample Keyprovider](./coco_keyprovider): A simple tool for encrypting container images with skopeo; an illustrative invocation is sketched below. Please refer to its [README](./coco_keyprovider/README.md) for the authoritative instructions.
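-
-As an illustrative sketch only (the provider name, registry paths, and configuration file are assumptions; consult the keyprovider README for the authoritative invocation), image encryption with skopeo goes through the ocicrypt keyprovider protocol:
-
-```bash
-# Point ocicrypt at a keyprovider configuration file (path is an assumption)
-export OCICRYPT_KEYPROVIDER_CONFIG=./ocicrypt.conf
-
-# Copy and encrypt an image; "attestation-agent" must match the provider
-# name declared in the configuration file above.
-skopeo copy --insecure-policy \
-  --encryption-key provider:attestation-agent \
-  docker://docker.io/library/alpine:latest \
-  oci:./alpine-encrypted
-```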
diff --git a/content/en/docs/guest-components/confidential-data-hub/_index.md b/content/en/docs/guest-components/confidential-data-hub/_index.md
deleted file mode 100644
index 745606b..0000000
--- a/content/en/docs/guest-components/confidential-data-hub/_index.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-title: Confidential Data Hub
-description: Documentation for Confidential Data Hub
-categories:
-- docs
-tags:
-- docs
-- guest-components
----
-
-{{% alert title="Warning" color="warning" %}}
-TODO: This was copied with few adaptations from the upstream repository.
-The instructions need to be tested to verify they still work, and the page needs a rework.
-{{% /alert %}}
-
-Confidential Data Hub is a service running inside the guest that provides resource-related
-APIs.
-
-### Build
-
-Build with the default features:
-
-```shell
-git clone https://github.com/confidential-containers/guest-components
-cd guest-components/confidential-data-hub
-make
-```
-
-or explicitly specify the confidential resource provider and KMS plugin; please refer to
-[Supported Features](#supported-features):
-
-```shell
-make RESOURCE_PROVIDER=kbs PROVIDER=aliyun
-```
-
-### Supported Features
-
-Confidential resource providers (flag `RESOURCE_PROVIDER`)
-
-| Feature name | Note |
-| ------------------- | ----------------------------------------------------------------- |
-| kbs | For TDX/SNP/Azure-SNP-vTPM based on KBS Attestation Protocol |
-| sev | For SEV based on efi secret pre-attestation |
-
-Note: `offline-fs` is built in and does not need to be enabled manually. If no `RESOURCE_PROVIDER`
-is given, all features will be enabled.
-
-KMS plugins (flag `PROVIDER`)
-
-| Feature name | Note |
-| ------------------- | ----------------------------------------------------------------- |
-| aliyun              | Use the Aliyun KMS suite to unseal secrets, etc.                   |
-
-Note: If no `PROVIDER` is given, all features will be enabled.
diff --git a/content/en/docs/guest-components/image-rs/_index.md b/content/en/docs/guest-components/image-rs/_index.md
deleted file mode 100644
index 701977b..0000000
--- a/content/en/docs/guest-components/image-rs/_index.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: image-rs
-description: Documentation for image-rs
-categories:
-- docs
-tags:
-- docs
-- guest-components
----
-
-{{% alert title="Warning" color="warning" %}}
-TODO: This was copied with few adaptations from the upstream repository.
-The instructions need to be tested to verify they still work, and the page needs a rework.
-{{% /alert %}}
-
-Container Images Rust Crate
-
-## Documentation
-
-[Design document](docs/design.md)
-
-[CCv1 Image Security Design document](docs/ccv1_image_security_design.md)
diff --git a/content/en/docs/guest-components/ocicrypt-rs/_index.md b/content/en/docs/guest-components/ocicrypt-rs/_index.md
deleted file mode 100644
index 80364d6..0000000
--- a/content/en/docs/guest-components/ocicrypt-rs/_index.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: ocicrypt-rs
-description: Documentation for ocicrypt-rs
-categories:
-- docs
-tags:
-- docs
-- guest-components
----
-
-{{% alert title="Warning" color="warning" %}}
-TODO: This was copied with few adaptations from the upstream repository.
-The instructions need to be tested to verify they still work, and the page needs a rework.
-{{% /alert %}}
-
-This repo contains the Rust version of the [containers/ocicrypt](https://github.com/containers/ocicrypt) library.
diff --git a/content/en/docs/kata-containers/_index.md b/content/en/docs/kata-containers/_index.md
deleted file mode 100644
index 16ee2b3..0000000
--- a/content/en/docs/kata-containers/_index.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: Kata Containers
-description: Documentation for Kata Containers pertaining to Confidential Containers
-weight: 50
-categories:
-- docs
-tags:
-- docs
-- kata-cc
----
-
-
-{{% alert title="Warning" color="warning" %}}
-TODO: Add some information here.
-{{% /alert %}}
diff --git a/content/en/docs/kata-containers/kata-confidential-containers.md b/content/en/docs/kata-containers/kata-confidential-containers.md
deleted file mode 100644
index f1e6678..0000000
--- a/content/en/docs/kata-containers/kata-confidential-containers.md
+++ /dev/null
@@ -1,171 +0,0 @@
----
-title: Kata Confidential Containers
-date: 2023-10-12
-description: >
- Documentation about Kata Containers in the context of Confidential Computing
-categories:
-- docs
-tags:
-- docs
-- kata-cc
----
-
-{{% alert title="Warning" color="warning" %}}
-TODO: This page was copied from the kata-containers repo; it needs to be tailored for Confidential Containers
-{{% /alert %}}
-
-
-
-[![CI | Publish Kata Containers payload](https://github.com/kata-containers/kata-containers/actions/workflows/payload-after-push.yaml/badge.svg)](https://github.com/kata-containers/kata-containers/actions/workflows/payload-after-push.yaml) [![Kata Containers Nightly CI](https://github.com/kata-containers/kata-containers/actions/workflows/ci-nightly.yaml/badge.svg)](https://github.com/kata-containers/kata-containers/actions/workflows/ci-nightly.yaml)
-
-Welcome to Kata Containers!
-
-This repository is the home of the Kata Containers code for the 2.0 and newer
-releases.
-
-If you want to learn about Kata Containers, visit the main
-[Kata Containers website](https://katacontainers.io).
-
-## Introduction
-
-Kata Containers is an open source project and community working to build a
-standard implementation of lightweight Virtual Machines (VMs) that feel and
-perform like containers, but provide the workload isolation and security
-advantages of VMs.
-
-## License
-
-The code is licensed under the Apache 2.0 license.
-See [the license file](LICENSE) for further details.
-
-## Platform support
-
-Kata Containers currently runs on 64-bit systems supporting the following
-technologies:
-
-| Architecture | Virtualization technology |
-|-|-|
-| `x86_64`, `amd64` | [Intel](https://www.intel.com) VT-x, AMD SVM |
-| `aarch64` ("`arm64`")| [ARM](https://www.arm.com) Hyp |
-| `ppc64le` | [IBM](https://www.ibm.com) Power |
-| `s390x` | [IBM](https://www.ibm.com) Z & LinuxONE SIE |
-
-### Hardware requirements
-
-The [Kata Containers runtime](src/runtime) provides a command to
-determine if your host system is capable of running and creating a
-Kata Container:
-
-```bash
-kata-runtime check
-```
-
-> **Notes:**
->
-> - This command runs a number of checks, including connecting to the
->   network to determine if a newer release of Kata Containers is
->   available on GitHub. If you do not wish this check to run, add
->   the `--no-network-checks` option.
->
-> - By default, only a brief success / failure message is printed.
-> If more details are needed, the `--verbose` flag can be used to display the
-> list of all the checks performed.
->
-> - If the command is run as the `root` user additional checks are
-> run (including checking if another incompatible hypervisor is running).
-> When running as `root`, network checks are automatically disabled.
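-
-For example, to print the full list of checks without contacting the network (combining the flags from the notes above):
-
-```bash
-kata-runtime check --verbose --no-network-checks
-```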
-
-## Getting started
-
-See the [installation documentation](docs/install).
-
-## Documentation
-
-See the [official documentation](docs) including:
-
-- [Installation guides](docs/install)
-- [Developer guide](docs/Developer-Guide.md)
-- [Design documents](docs/design)
- - [Architecture overview](docs/design/architecture)
- - [Architecture 3.0 overview](docs/design/architecture_3.0/)
-
-## Configuration
-
-Kata Containers uses a single
-[configuration file](src/runtime/README.md#configuration)
-which contains a number of sections for various parts of the Kata
-Containers system including the [runtime](src/runtime), the
-[agent](src/agent) and the [hypervisor](#hypervisors).
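-
-As an illustrative fragment (the section names follow the pattern described above, but the keys and values shown are assumptions, not the shipped defaults), the file groups settings per component:
-
-```toml
-# Hypothetical excerpt of a Kata Containers configuration file;
-# keys and values are illustrative, not authoritative defaults.
-[hypervisor.qemu]
-path = "/usr/bin/qemu-system-x86_64"
-
-[agent.kata]
-enable_debug = false
-
-[runtime]
-enable_debug = false
-```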
-
-## Hypervisors
-
-See the [hypervisors document](docs/hypervisors.md) and the
-[Hypervisor specific configuration details](src/runtime/README.md#hypervisor-specific-configuration).
-
-## Community
-
-To learn more about the project, its community and governance, see the
-[community repository](https://github.com/kata-containers/community). This is
-the first place to go if you wish to contribute to the project.
-
-## Getting help
-
-See the [community](#community) section for ways to contact us.
-
-### Raising issues
-
-Please raise an issue
-[in this repository](https://github.com/kata-containers/kata-containers/issues).
-
-> **Note:**
-> If you are reporting a security issue, please follow the [vulnerability reporting process](https://github.com/kata-containers/community#vulnerability-handling)
-
-## Developers
-
-See the [developer guide](docs/Developer-Guide.md).
-
-### Components
-
-#### Main components
-
-The table below lists the core parts of the project:
-
-| Component | Type | Description |
-|-|-|-|
-| [runtime](src/runtime) | core | Main component run by a container manager and providing a containerd shimv2 runtime implementation. |
-| [runtime-rs](src/runtime-rs) | core | The Rust version runtime. |
-| [agent](src/agent) | core | Management process running inside the virtual machine / POD that sets up the container environment. |
-| [`dragonball`](src/dragonball) | core | An optional built-in VMM that brings an out-of-the-box Kata Containers experience with optimizations for container workloads. |
-| [documentation](docs) | documentation | Documentation common to all components (such as design and install documentation). |
-| [tests](https://github.com/kata-containers/tests) | tests | Excludes unit tests which live with the main code. |
-
-#### Additional components
-
-The table below lists the remaining parts of the project:
-
-| Component | Type | Description |
-|-|-|-|
-| [packaging](tools/packaging) | infrastructure | Scripts and metadata for producing packaged binaries (components, hypervisors, kernel and rootfs). |
-| [kernel](https://www.kernel.org) | kernel | Linux kernel used by the hypervisor to boot the guest image. Patches are stored [here](tools/packaging/kernel). |
-| [osbuilder](tools/osbuilder) | infrastructure | Tool to create "mini O/S" rootfs and initrd images and kernel for the hypervisor. |
-| [kata-debug](tools/packaging/kata-debug/README.md) | infrastructure | Utility tool to gather Kata Containers debug information from Kubernetes clusters. |
-| [`agent-ctl`](src/tools/agent-ctl) | utility | Tool that provides low-level access for testing the agent. |
-| [`kata-ctl`](src/tools/kata-ctl) | utility | Tool that provides advanced commands and debug facilities. |
-| [`log-parser-rs`](src/tools/log-parser-rs) | utility | Tool that aids in analyzing logs from the Kata runtime. |
-| [`trace-forwarder`](src/tools/trace-forwarder) | utility | Agent tracing helper. |
-| [`runk`](src/tools/runk) | utility | Standard OCI container runtime based on the agent. |
-| [`ci`](https://github.com/kata-containers/ci) | CI | Continuous Integration configuration files and scripts. |
-| [`katacontainers.io`](https://github.com/kata-containers/www.katacontainers.io) | website | Source for the [`katacontainers.io`](https://www.katacontainers.io) site. |
-
-### Packaging and releases
-
-Kata Containers is now
-[available natively for most distributions](docs/install/README.md#packaged-installation-methods).
-
-## Metrics tests
-
-See the [metrics documentation](tests/metrics/README.md).
-
-## Glossary of Terms
-
-See the [glossary of terms](https://github.com/kata-containers/kata-containers/wiki/Glossary) related to Kata Containers.
diff --git a/content/en/docs/overview/_index.md b/content/en/docs/overview/_index.md
index 71546ee..35e899f 100644
--- a/content/en/docs/overview/_index.md
+++ b/content/en/docs/overview/_index.md
@@ -4,8 +4,13 @@ description: High level overview of Confidential Containers
weight: 1
---
+Confidential Containers encapsulates pods inside of confidential virtual machines,
+allowing Cloud Native workloads to leverage confidential computing hardware
+with minimal modification.
+Confidential Containers extends the guarantees of confidential computing to complex workloads.
+With Confidential Containers, sensitive workloads can be run on untrusted hosts
+and protected from compromised or malicious users, software, and administrators.
-{{% alert title="Warning" color="warning" %}}
-TODO: Add highlevel overview of Confidential Containers.
-{{% /alert %}}
+Confidential Containers provides an end-to-end framework for deploying workloads,
+attesting them, and provisioning secrets.
diff --git a/content/en/docs/troubleshooting/_index.md b/content/en/docs/troubleshooting/_index.md
new file mode 100644
index 0000000..42a5172
--- /dev/null
+++ b/content/en/docs/troubleshooting/_index.md
@@ -0,0 +1,7 @@
+---
+title: Troubleshooting
+description: Recovering from misconfigurations and bugs
+weight: 60
+---
+
+The [troubleshooting guide](https://github.com/confidential-containers/confidential-containers/blob/main/guides/troubleshooting.md) is currently hosted on GitHub.
diff --git a/content/en/docs/use-cases/_index.md b/content/en/docs/use-cases/_index.md
deleted file mode 100644
index cba661d..0000000
--- a/content/en/docs/use-cases/_index.md
+++ /dev/null
@@ -1,5 +0,0 @@
----
-title: Use cases
-description: Depiction of typical Confidential Container use cases and how they can be addressed using the project's tools.
-weight: 2
----