diff --git a/blog/2022-07-22-sg3w-is-in-town.md b/blog/2022-07-22-sg3w-is-in-town.md new file mode 100644 index 0000000..644b128 --- /dev/null +++ b/blog/2022-07-22-sg3w-is-in-town.md @@ -0,0 +1,26 @@ +--- +title: Contain your excitement, the s3gw is in town +description: We've developed a standalone RADOS Gateway. Welcome to the s3gw Project. +slug: s3gw-rados-gateway-standalone +authors: + - name: The s3gw team +tags: [blog] +hide_table_of_contents: false +--- + +The Aquarist Labs team is back with an open source and cloud-native S3 service. After spending months investigating a storage appliance built on Ceph, the team identified the need to complement the Rancher storage portfolio with their unique skill sets and the lessons learned from their time developing Aquarium. + + + +Introducing the [s3gw project][1], an S3-compatible gateway running a standalone [RADOS Gateway (RGW)][2] implementation backed by a non-RADOS storage backend. This new project provides the infrastructure required to build a container that runs on a Kubernetes cluster and exposes S3-compatible endpoints to applications. + +This is the first publicly available iteration of the s3gw project. We expect (and welcome!) bugs, and we know our performance is not optimal (yet!). This release is meant for testing and feedback gathering and is not recommended for production use. See our release notes for more details. + +## Call to action + +We would love to hear from you about what you'd like to see on our roadmap. What would enable you best to use s3gw in your environment? + +Reach out to us at  or our [Slack channel](https://aquaristlabs.slack.com/archives/C03RFG0BES0). You can also join [our mailing list](https://lists.suse.com/mailman/listinfo/s3gw) or have a look at our [GitHub repository](https://github.com/aquarist-labs/s3gw) -- feature requests are welcome! 
🙂 + +[1]:https://github.com/aquarist-labs/s3gw +[2]:https://docs.ceph.com/en/quincy/radosgw/ diff --git a/blog/2022-07-28-pv-s3-access.md b/blog/2022-07-28-pv-s3-access.md new file mode 100644 index 0000000..08fcdac --- /dev/null +++ b/blog/2022-07-28-pv-s3-access.md @@ -0,0 +1,283 @@ +--- +title: Does your PV need S3 access? We’ve got you covered +description: In a cloud-native environment, it is important to offer storage systems that can interact with clients using a standard protocol. +slug: does-your-pv-need-s3-access +authors: + - name: The s3gw team +tags: [blog, s3gw, Rancher, Longhorn] +hide_table_of_contents: false +--- + +Demand for cloud storage solutions has increased markedly in recent years: companies require their data to be readily available to their cloud-native applications. + + + +In a cloud-native environment, it is important to offer storage systems that can interact with clients using a standard protocol. + +## Simple Storage Service + +![S3 logo](https://www.suse.com/c/wp-content/uploads/2022/07/s3.png) + +[Simple Storage Service](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html), or S3, is a protocol designed by Amazon that launched in the United States market in 2006. S3 is a vast protocol that covers key concepts such as *buckets*, *objects*, *keys*, *versioning*, *ACLs* and *regions*. + +To follow this article, you only need to know that the S3 API is invoked with REST calls and that you store your objects inside containers called buckets. For more information, there are plenty of resources available online. + +## K3s and Rancher + +This article explores the use of [K3s](https://k3s.io/) and [Rancher](https://rancher.com/) as foundations for experiments with an S3 gateway. 
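Before moving on, it helps to make the "REST calls" point from the S3 section above concrete: every authenticated S3 request carries a signature derived from the caller's secret key. A minimal sketch of the AWS Signature Version 4 key derivation, which S3-compatible services also implement, using only the Python standard library (the secret key and date values below are illustrative):

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Chain of HMAC-SHA256 steps defined by AWS Signature Version 4."""
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode(), date)  # e.g. "20220728"
    k_region = _hmac(k_date, region)                      # e.g. "us-east-1"
    k_service = _hmac(k_region, service)                  # "s3"
    return _hmac(k_service, "aws4_request")

# The resulting key signs a canonical description of each REST request;
# the server repeats the same computation to authenticate the caller.
signing_key = sigv4_signing_key("EXAMPLE-SECRET-KEY", "20220728", "us-east-1", "s3")
```

Clients such as s3cmd perform this derivation for you; it is shown here only to demystify what "invoking the S3 API with REST calls" involves under the hood.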
+ +![K3S logo](https://www.suse.com/c/wp-content/uploads/2022/05/k3s-icon-color-1-300x245.png) + +K3s is a lightweight Kubernetes distribution that runs smoothly on low-resource edge devices. Rancher is a graphical cluster manager that simplifies the underlying complexity of a Kubernetes cluster. + +![Rancher logo](https://www.suse.com/c/wp-content/uploads/2022/05/rancher-logo-cow-black.png) + +With Rancher, you can manage a cluster in a user-friendly fashion, regardless of the Kubernetes version being used. + +## Longhorn + +![Longhorn logo](https://www.suse.com/c/wp-content/uploads/2022/07/longhorn.png) + +A Kubernetes cluster and a manager alone are not sufficient when dealing with cloud storage. You could use the primitive resources offered by a standard Kubernetes cluster, such as the basic persistent volume types, but we recommend installing a component that provides your pods with more advanced storage resources. + +It is desirable to have a system that takes care of your data securely and redundantly and exposes volumes through the standard Kubernetes interfaces. [Longhorn](https://longhorn.io/) is the right system for this kind of need. Built from scratch to work natively with Kubernetes, Longhorn allows pods to obtain highly available persistent volumes. The storage managed by Longhorn is replicated, so a hardware failure does not compromise users' data. + +## S3 Gateway + +![S3GW logo](https://www.suse.com/c/wp-content/uploads/2022/07/logo-s3gw-300x256.png) + +Having Longhorn deployed on your cluster allows persistent volumes to be consumed by internal applications deployed on Kubernetes. If you want to give external clients access to the data, you need an S3 gateway. + +External clients can store and read data to and from the cluster using the S3 API. 
For this role, we are going to employ [s3gw](https://aquarist-labs.io/s3gw/). + +s3gw is being developed on the foundations of [Ceph](https://ceph.com/en/)'s S3 gateway, radosgw. Although s3gw is still in an early stage of development, it can already be used to test and play with S3 functionality. + +## Let's start cooking the ingredients + +Now that you have identified all the pieces, you are ready to start building your environment.\ +For this tutorial, we are installing K3s on an openSUSE Linux OS. For the sake of simplicity, because Kubernetes needs certain networking resources to be available, it can be worth disabling the system firewall entirely. + +If you prefer to keep your firewall on, have a look [here](https://rancher.com/docs/k3s/latest/en/installation/installation-requirements/#networking). + +### Stop Firewall + +From a shell prompt, run the following command: + +```bash +$ sudo systemctl stop firewalld.service +``` + +### Install K3s + +From a shell prompt, run the following command: + +```bash +$ curl -sfL | INSTALL_K3S_VERSION=v1.23.9+k3s1 sh - +``` + +After the installation has finished, you can check that the cluster is running with: + +```bash +$ sudo kubectl get nodes +``` + +If everything is OK, you should see something similar to this: + +```bash +NAME STATUS ROLES AGE VERSION +suse Ready control-plane,master 56s v1.23.8+k3s1 +``` + +If you prefer using K3s with your regular user rather than root, you can run: + +```bash +$ sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config && sudo chown $USER ~/.kube/config && chmod 600 ~/.kube/config && export KUBECONFIG=~/.kube/config +``` + +After this, you will be able to operate on K3s with your user. 
+ +### Install Helm + +![Helm logo](https://www.suse.com/c/wp-content/uploads/2022/07/helm-260x300.png) + +We are going to install Rancher using a [Helm](https://helm.sh/) chart, so you must first install Helm on the system: + +```bash +$ sudo zypper install helm +``` + +### Deploy Rancher + +Let's begin installing Rancher by adding its latest repository to Helm: + +```bash +$ helm repo add rancher-latest +``` + +After this, you must define a new Kubernetes namespace where Rancher will be installed: + +```bash +$ kubectl create namespace cattle-system +``` + +As the official [documentation](https://rancher.com/docs/rancher/v2.6/en/installation/install-rancher-on-k8s/#install-the-rancher-helm-chart) dictates, this must be named *cattle-system.* + +Because the Rancher management server is designed to be secure by default and requires SSL/TLS configuration, you must deploy some additional resources: + +```bash +$ kubectl apply -f + +$ helm repo add jetstack + +$ helm install cert-manager jetstack/cert-manager\ + --namespace cert-manager\ + --create-namespace\ + --version v1.7.1 +``` + +Let's check that cert-manager has deployed successfully and the related pods are running: + +```bash +$ kubectl get pods --namespace cert-manager +``` + +```bash +NAME READY STATUS RESTARTS AGE +cert-manager-5c6866597-zw7kh 1/1 Running 0 2m +cert-manager-cainjector-577f6d9fd7-tr77l 1/1 Running 0 2m +cert-manager-webhook-787858fcdb-nlzsq 1/1 Running 0 2m +``` + +Now, you are required to define a hostname in /etc/hosts pointing to the IP address of one of the host's physical interfaces, for example: + +10.0.0.2 rancher.local + +After this, you can finally launch the Rancher installation command: + +```bash +helm install rancher rancher-latest/rancher\ + --namespace cattle-system\ + --set hostname=rancher.local\ + --set bootstrapPassword=admin +``` + +When Rancher's pods have booted up, you can navigate with your browser to the hostname you defined and complete the initial setup: + +![Screenshot 
1](https://www.suse.com/c/wp-content/uploads/2022/07/rancher1-1024x943.png) + +Once you have completed the step and saved the password, you can start exploring your local cluster with the graphical manager: + +![Screenshot 2](https://www.suse.com/c/wp-content/uploads/2022/07/rancher.local_-1024x614.png) + +Depending on what you have deployed on the cluster, you could see more or less resource consumption. + +### Deploy Longhorn + +You can now deploy Longhorn using the Charts chooser under the Apps section on the left of Rancher's dashboard: + +![](https://www.suse.com/c/wp-content/uploads/2022/07/rancher3-1024x727.png) + +The installation is pretty straightforward, and you don't need to change any default value of the chart.\ +In the end, if everything has gone well, you should see the Rancher console showing: + +![Screenshot 3](https://www.suse.com/c/wp-content/uploads/2022/07/ranch-longh-install-1024x596.png) + +After Longhorn has been installed, you can simply click the Longhorn entry in Rancher's left menu to be redirected to the Longhorn dashboard: + +![](https://www.suse.com/c/wp-content/uploads/2022/07/long-dash-1024x651.png) + +A fresh installation of Longhorn shows that no application is using a persistent volume yet. 
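Longhorn registers itself as a Kubernetes storage class, so once it is installed any workload can request a replicated volume through an ordinary PersistentVolumeClaim. A minimal sketch (the claim name and size are illustrative; `longhorn` is the storage class name Longhorn creates by default):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-longhorn-pvc   # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # default class created by Longhorn
  resources:
    requests:
      storage: 2Gi
```

Applying a manifest like this with `kubectl apply -f` makes Longhorn provision a replicated volume behind the claim; the s3gw chart below does essentially the same thing on your behalf.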
+ +### Deploy s3gw + +You are now ready to add the last ingredient to your system: s3gw, the S3 gateway.\ +Rancher does not bundle the [Helm repository of s3gw](https://github.com/aquarist-labs/s3gw-charts) by default, so you must add it from the dashboard: + +![Screenshot 4](https://www.suse.com/c/wp-content/uploads/2022/07/s3gw-repo-add-1024x608.png) + +You can choose a unique name, for example: s3gw.\ +For the Target field, choose Git repository and enter the repository URL: + + + +In the Git Branch field, put the latest available release: + +v0.2.0 + +You can now click the Create button.\ +In the Apps section, you can now find the s3gw chart: + +![Screenshot 5](https://www.suse.com/c/wp-content/uploads/2022/07/s3gw-chart-1024x630.png) + +Let's proceed with the installation; you can choose a namespace and a name for s3gw: + +![Screenshot 6](https://www.suse.com/c/wp-content/uploads/2022/07/s3gw-ns1-1024x657.png) + +You don't need to customize the chart, so you can leave the bottom checkbox unchecked.\ +Once the installation has completed, if everything has gone well, you should see the Rancher console showing: + +![Screenshot 7](https://www.suse.com/c/wp-content/uploads/2022/07/s3gw-installed-1024x774.png) + +On the Longhorn dashboard, you can verify that the application is using a Longhorn persistent volume: + +![Screenshot 8](https://www.suse.com/c/wp-content/uploads/2022/07/lh-s3gw-1024x480.png) + +### Test the S3 gateway + +By default, the s3gw chart configures an ingress resource pointing to the S3 gateway with the FQDN s3gw.local.\ +Thus, you must define s3gw.local in /etc/hosts pointing to the IP address of one of the host's physical interfaces, for example: + +10.0.0.2 s3gw.local + +For testing the S3 gateway, you can rely on [s3cmd](https://github.com/s3tools/s3cmd), a popular command-line S3 client.\ +You can install it using one of the methods listed [here](https://github.com/s3tools/s3cmd/blob/master/INSTALL.md).\ +Once you have installed it, you 
can take the s3cmd configuration file from [here](https://raw.githubusercontent.com/aquarist-labs/s3gw-core/main/env/s3cmd.cfg) and use it as-is against s3gw.\ +All you need to do is create a directory, put s3cmd.cfg inside it and invoke s3cmd from there. + +#### Create a bucket + +```bash +$ s3cmd -c s3cmd.cfg mb s3://foo +``` + +#### Put some objects in the bucket + +Let's create a 1 MB file filled with random data and put it in the bucket: + +```bash +$ dd if=/dev/random bs=1k count=1k of=obj.1mb.bin +$ s3cmd -c s3cmd.cfg put obj.1mb.bin s3://foo +``` + +Let's create a 10 MB file filled with random data and put it in the bucket: + +```bash +$ dd if=/dev/random bs=1k count=10k of=obj.10mb.bin +$ s3cmd -c s3cmd.cfg put obj.10mb.bin s3://foo +``` + +#### List objects contained in a bucket + +```bash +$ s3cmd -c s3cmd.cfg ls s3://foo +2022-07-26 15:03 10485760 s3://foo/obj.10mb.bin +2022-07-26 15:01 1048576 s3://foo/obj.1mb.bin +``` + +#### Delete an object + +```bash +$ s3cmd -c s3cmd.cfg rm s3://foo/obj.10mb.bin +``` + +### In summary + +In this tutorial, you've seen how to set up a K3s cluster, manage it with Rancher, install Longhorn and finally enrich the system with an S3 gateway. K3s, Rancher and Longhorn are powerful tools for setting up an environment that provides resilient and performant storage. If you need to expose the storage to external clients, you can install s3gw with near-zero effort. + +## Call to action + +We would love to hear from you about what you'd like to see on our roadmap. What would enable you best to use s3gw in your environment?  + +Reach out to us at  or our [Slack channel](https://aquaristlabs.slack.com/archives/C03RFG0BES0). You can also join [our mailing list](https://lists.suse.com/mailman/listinfo/s3gw) or have a look at our [GitHub repository](https://github.com/aquarist-labs/s3gw) -- feature requests are welcome! 
🙂 diff --git a/blog/2022-11-21-introduction-to-s3gw.md b/blog/2022-11-21-introduction-to-s3gw.md new file mode 100644 index 0000000..51dd575 --- /dev/null +++ b/blog/2022-11-21-introduction-to-s3gw.md @@ -0,0 +1,55 @@ +--- +title: Introduction to s3gw +description: Introductory blog post on the s3gw Project, a standalone S3 service based on the RADOS Gateway project. +slug: introduction-to-s3gw +authors: + - name: The s3gw team +tags: [blog, s3gw, Rancher, introduction] +hide_table_of_contents: false +--- + +# What is s3gw? + +s3gw is an S3-compatible service focused on deployments in a Kubernetes environment backed by any PVC, including [Longhorn](https://longhorn.io/). Since its inception, the primary focus has been on cloud-native deployments. However, s3gw can be deployed in a myriad of scenarios, provided some form of storage is attached. + + + +s3gw is based on Ceph's RADOSGW (RGW) but runs as a standalone service without the RADOS cluster, relying on a storage backend still under heavy development by the storage team at SUSE. The s3gw team is also developing a web-based UI for management and an object explorer.  + +## The s3gw service + +Distributed as a small container, the s3gw service runs RGW and exposes an S3-compatible API. Instead of requiring a full Ceph cluster deployment, we leverage RGW's standalone capabilities and keep data on a local storage volume. Although the focus is on running within a Kubernetes environment with on-premise storage provided to the container, s3gw can consume any storage type with a filesystem on it. This can be a PVC in Kubernetes or a local directory on a development machine.  + +As the container consumes the storage volume, the object data is kept in a hash tree of directories and the metadata is kept in an SQLite database. This allows us to leverage the ACID properties of SQLite to ensure the state is committed atomically, while keeping large blobs of data on the filesystem and away from SQLite's path.  
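As a toy illustration of that split — the blob written to a hash-derived filesystem path, the metadata row committed atomically in SQLite — a put operation might look like the following. The directory layout and schema here are invented for this sketch; they are not s3gw's actual on-disk format:

```python
import hashlib
import os
import sqlite3

def blob_path(root: str, bucket: str, key: str) -> str:
    """Derive a fan-out path from a hash of the object name, so no single
    directory accumulates too many entries (illustrative layout only)."""
    digest = hashlib.sha256(f"{bucket}/{key}".encode()).hexdigest()
    return os.path.join(root, digest[:2], digest[2:4], digest)

def put_object(db: sqlite3.Connection, root: str, bucket: str, key: str, data: bytes) -> str:
    """Write the blob to disk first, then commit the metadata atomically."""
    path = blob_path(root, bucket, key)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)
    # `with db:` wraps the statement in a transaction: the metadata row
    # becomes visible all at once or not at all (SQLite's atomic commit).
    with db:
        db.execute(
            "INSERT OR REPLACE INTO objects (bucket, key, size, path) "
            "VALUES (?, ?, ?, ?)",
            (bucket, key, len(data), path),
        )
    return path

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE objects ("
    "bucket TEXT, key TEXT, size INTEGER, path TEXT, "
    "PRIMARY KEY (bucket, key))"
)
```

Note the ordering: the blob lands on disk before the metadata commit, so a crash in between leaves at worst an orphaned blob, never a metadata entry pointing at missing data.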
+ +In the future, we will publish a blog post describing the s3gw service's data store in more depth.  + +Note, however, that we don't support all of RGW's S3 APIs yet. Some components are still under development, and other features required for proper operation are not yet included.  + +For example, while deleting objects and buckets is currently supported, we don't support lifecycle management yet. Lifecycle policies and IAM are some of the things we will be working on shortly.  + +## The s3gw web UI + +Also distributed as a small container, the s3gw web UI provides an intuitive way of interacting with the s3gw service. This includes user and bucket management, as well as an object explorer.  + +We have a few screenshots of the current UI version below, but please keep in mind that we are still actively developing it and it is not feature-complete. + +![s3gw login page screenshot](https://www.suse.com/c/wp-content/uploads/2022/11/s3gw-login-1-1024x747.png) +![s3gw dashboard screenshot](https://www.suse.com/c/wp-content/uploads/2022/11/s3gw-dashboard-1-1024x747.png) +![s3gw bucket creation](https://www.suse.com/c/wp-content/uploads/2022/11/Screenshot-2022-11-21-at-15.49.46-1024x751.png) +![s3gw bucket list dashboard screeenshot](https://www.suse.com/c/wp-content/uploads/2022/11/s3gw-list-buckets-1024x747.png) +![s3gw file explorer](https://www.suse.com/c/wp-content/uploads/2022/11/s3gw-file-explorer-1-1024x747.png) + +## Installing + +You may find our Helm chart helpful if you have a Kubernetes cluster, whichever flavor that might be. You'll find it on [ArtifactHub](https://artifacthub.io/packages/helm/s3gw/s3gw), and our [documentation](https://s3gw-docs.readthedocs.io/en/latest/helm-charts/) provides important insights into the available configuration values.  
+ +Alternatively, if you are using Rancher, you may find s3gw available in the Partner repository, as depicted below: + +![Partner repository](https://www.suse.com/c/wp-content/uploads/2022/11/Screenshot-2022-11-21-at-16.04.15-1024x372.png) + +## Call to action + +We would love to hear from you about what you'd like to see on our roadmap. What would enable you best to use s3gw in your environment?  + +Reach out to us at  or our [Slack channel](https://aquaristlabs.slack.com/archives/C03RFG0BES0). You can also join [our mailing list](https://lists.suse.com/mailman/listinfo/s3gw) or have a look at our [GitHub repository](https://github.com/aquarist-labs/s3gw) -- feature requests are welcome! 🙂 diff --git a/blog/2023-01-25-deploy-s3gw-digital-ocean.md b/blog/2023-01-25-deploy-s3gw-digital-ocean.md new file mode 100644 index 0000000..d010cc2 --- /dev/null +++ b/blog/2023-01-25-deploy-s3gw-digital-ocean.md @@ -0,0 +1,147 @@ +--- +title: Deploy s3gw in Digital Ocean +description: In this tutorial, we will walk through the setup of a single-node K3s Kubernetes cluster with Rancher, together with the S3 Gateway (s3gw) and a Longhorn PV (Persistent Volume). +slug: deploy-s3gw-digital-ocean +authors: + - name: The s3gw team +tags: [blog, s3gw, Rancher, Digital Ocean] +hide_table_of_contents: false +--- + +## Introduction + +In this tutorial, we will walk through the setup of a single-node K3s Kubernetes cluster with Rancher, together with the S3 Gateway (s3gw) and a Longhorn PV (Persistent Volume). This guide uses Digital Ocean, but these instructions will likely work with other cloud providers as well.  + + + +## Background + +Before you begin, if you have not yet heard of the [s3gw](https://s3gw.io/) project, read this article first. The s3gw is a lightweight S3 service for Kubernetes users that runs on top of a Longhorn PV (and it comes with a nice user interface). 
+ +For the purpose of this article, knowledge of [K3s](http://k3s.io/), [Rancher](https://www.rancher.com/) and [Longhorn](http://longhorn.io/) is assumed. However, if you need more information, you will find plenty of useful material in this blog. + +## Prerequisites + +You will need to have created a Droplet in Digital Ocean. For this specific tutorial, you will need the following:  + +- OS: Ubuntu 20.04 (LTS) x64  +- CPU Options: Regular Intel with SSD + 8 GB / 4 CPUs   +- Add block storage: Leave as it is  +- Datacenter region: Choose the datacenter region closest to you  +- VPC Network: Leave as it is  +- Authentication: via SSH, click "new SSH Key", follow the instructions given (identify your SSH key properly) and, after it is added, select it with the appropriate checkbox.  +- Additional Options: Leave as it is  +- Finalize and create: identify your droplet with a hostname (ex: `-local-rancher`). +- Hit Create Droplet  + +And there you go! You have a system ready to hack on! + +## Prepare your system  + +Now we need to set up your droplet: log in, install open-iscsi (required by Longhorn) and install Helm: + +```bash +$ ssh root@IP-ADDRESS +$ apt-get install open-iscsi +$ snap install --classic helm +``` + +1\. Install K3s  + +Now, set up K3s:  + +```bash +$ curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.24.7+k3s1" sh -s - server --cluster-init  +$ export KUBECONFIG=/etc/rancher/k3s/k3s.yaml   +``` + +2\. Define a Kubernetes namespace  + +Now, we need to define a Kubernetes namespace where the resources created by the chart should be installed:  + +```bash +$ kubectl create namespace cattle-system  +``` + +3\. Set up certificate management  + +Next, set up cert-manager:  + +```bash +$ kubectl apply -f   + +$ helm repo add jetstack https://charts.jetstack.io +$ helm repo update +$ helm install cert-manager jetstack/cert-manager\ +  --namespace cert-manager\ +  --create-namespace\ +  --version v1.7.1  +``` + +4\. 
Install Rancher server  + +Once you are done installing K3s, install Rancher through the Helm chart:  + +```bash +$ helm repo add rancher-latest   + +$ helm install rancher rancher-latest/rancher\ +  --namespace cattle-system\ +  --set hostname=IP-ADDRESS.sslip.io\ +  --set replicas=1\ +  --set bootstrapPassword=PASSWORD  +``` + +We are using sslip.io as the DNS service. The installation will take some time. Then you will be ready to access Rancher:  + +![Rancher login page](https://www.suse.com/c/wp-content/uploads/2023/01/article-rancher-1-1024x740.png) + +5\. Retrieve the password  + +Retrieve your password by running the following command:  + +```bash +$ kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{ .data.bootstrapPassword|base64decode}}{{ "\n" }}' +``` + +## Install s3gw using the Rancher UI + +The s3gw can be found in the Rancher UI as a partner chart.  + +To access it, go to "Apps" and then "Charts". Choose the partner charts drop-down and click on the s3gw partner chart:  + +![Rancher partner charts](https://www.suse.com/c/wp-content/uploads/2023/01/s3gw-partner-chart-1-1024x740.png) + +Click "Install" and tick "Customize Helm options before install":  + +![s3gw-install](https://www.suse.com/c/wp-content/uploads/2023/01/s3gw-install-1024x740.png) + +Here are the remaining three steps:  + +1. Set App Metadata: Select the project to install s3gw into.  +2. Values: Here, you can set up access and secret keys, storage, etc. The chart sets up a Longhorn volume by default. Update the hostnames for the S3 service and the UI:  + +![s3gw installation values](https://www.suse.com/c/wp-content/uploads/2023/01/s3gw-install-values-1024x779.png) + +3\. Finally, you can also set up additional deployment options: + +![Helm options](https://www.suse.com/c/wp-content/uploads/2023/01/s3gw-helm-opts-1024x653.png) + +And that's it! 
You can now access the s3gw UI at https://s3gw-ui.your.ip.here.sslip.io: + +![s3gw login](https://www.suse.com/c/wp-content/uploads/2023/01/s3gw-login-1024x747.png) + +⚠️ "Network failure" issue + +When you try to log into the UI for the first time, you will see a "Network Failure" error. This is a [known issue](https://github.com/aquarist-labs/s3gw/issues/275).  + +To work around this issue, access the S3 service URL first ([https://s3gw.your.ip.here.sslip.io](https://s3gw.your.ip.here.sslip.io/)). You will then be able to log into the UI: + +![s3gw-file-explorer](https://www.suse.com/c/wp-content/uploads/2023/01/s3gw-file-explorer-1024x747.png) + + +## Call to action + +We would love to hear from you about what you'd like to see on our roadmap. What would enable you best to use s3gw in your environment?  + +Reach out to us at  or our [Slack channel](https://aquaristlabs.slack.com/archives/C03RFG0BES0). You can also join [our mailing list](https://lists.suse.com/mailman/listinfo/s3gw) or have a look at our [GitHub repository](https://github.com/aquarist-labs/s3gw) -- feature requests are welcome! 🙂 diff --git a/blog/2023-03-01-epinio-meets-s3gw.md b/blog/2023-03-01-epinio-meets-s3gw.md new file mode 100644 index 0000000..754502e --- /dev/null +++ b/blog/2023-03-01-epinio-meets-s3gw.md @@ -0,0 +1,77 @@ +--- +title: Epinio meets s3gw +description: This blog post explains how to set up the s3gw object service with the Epinio project +slug: epinio-meets-s3gw +authors: + - name: The s3gw team +tags: [blog, s3gw, epinio] +hide_table_of_contents: false +--- + +Since the very first version, [Epinio](https://epinio.io/) has made use of an internal S3 endpoint to store users' projects in the form of aggregated tarballs. + + + +Those objects are then downloaded and staged by the internal engine's pipeline and, finally, deployed into the Kubernetes cluster as consumable applications.  + +Epinio makes use of S3 as an internal private service. 
In this scenario, S3 can be thought of as an internal ephemeral cache used to store temporary objects. For these needs, advanced redundancy measures are not necessary. Should the S3 backend experience a failure of any kind, software or hardware, there would be no data loss, since Epinio can reconstruct the data of a project at any time.  + +Prior to 1.7.0, Epinio could only use [Minio](https://min.io/) as its S3 service. Starting with that version, the chart can also use the S3 Gateway (s3gw) project.  + +[s3gw](https://s3gw.io/) is a lightweight S3-compatible service that can be backed by any PVC within a Kubernetes environment, with a preference for [Longhorn](https://longhorn.io/) as the backing service. As mentioned before, since we do not need an advanced redundancy strategy with Epinio, we can safely rely on a PVC provided by the default storage class deployed on the cluster.  + +If you are installing Epinio through the Rancher UI, enable the *Customize Helm options before install* checkbox:  + +![Screenshot 1](https://www.suse.com/c/wp-content/uploads/2023/02/epinio_s3gw_1-1024x732.png) + +On the next page, click the *S3 storage* section and disable the *Install Minio* checkbox.\ +You can now enable the *Install s3gw* checkbox.  + +![Screenshot 2](https://www.suse.com/c/wp-content/uploads/2023/02/epinio_s3gw_2-1024x732.png) + +This is the simplest way to make Epinio work with s3gw.  + +For more advanced customization, you can edit the Epinio chart's values.yaml file: + +```yaml +s3gw: +  enabled: false +  ingress: +    enabled: false +  serviceName: s3gw +  storageClass: +    create: false +    name: '' +  storageSize: 2Gi +  ui: +    enabled: false +  useExistingSecret: true +``` + +You can, for example, change the s3gw.storageClass.name used by s3gw to create the PVC that persists its data. 
Leaving this field empty makes s3gw use the default storage class in the cluster. Furthermore, you can set the s3gw.storageSize value to an appropriate size based on your needs.  + +Embedding s3gw inside Epinio has been beneficial for both projects, because it has significantly improved each project's ability to integrate with other technologies. + +Thanks to this, s3gw has made huge progress in areas such as TLS management and chart consistency. + +s3gw was born in 2022, and that year we defined the project's foundations. For 2023, the team has ambitious plans that will bring the project to a brand-new level. We are already confident the tool is good enough to start integrating with vibrant projects like Epinio, and we plan to make s3gw even simpler to integrate in the future.  + +Stay tuned! + + +## Call to action + +We would love to hear from you about what you'd like to see on our roadmap. What would enable you best to use s3gw in your environment?  + +Reach out to us at  or our [Slack channel](https://aquaristlabs.slack.com/archives/C03RFG0BES0). You can also join [our mailing list](https://lists.suse.com/mailman/listinfo/s3gw) or have a look at our [GitHub repository](https://github.com/aquarist-labs/s3gw) -- feature requests are welcome! 🙂 diff --git a/blog/2023-03-28-release-notes-0.14.md b/blog/2023-03-28-release-notes-0.14.md new file mode 100644 index 0000000..c5bd9f8 --- /dev/null +++ b/blog/2023-03-28-release-notes-0.14.md @@ -0,0 +1,47 @@ +--- +title: Release Notes - v0.14.0 +description: Release notes for v0.14.0 +slug: release-notes-v0.14 +authors: + - name: The s3gw team +tags: [release-notes] +hide_table_of_contents: false +--- + + +# Release Notes - v0.14.0 + +This release adds lifecycle management, object locks (legal holds) and an +updated version of the radosgw we use for the backend. + +This release is meant for testing and feedback gathering. It is not recommended +for production use. 
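For context on the lifecycle feature added in this release: S3 lifecycle rules are expressed as a small XML document attached to a bucket. A rule that expires objects under a given prefix after 30 days looks like this (the rule ID and prefix are illustrative):

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>expire-logs</ID>
    <Filter>
      <Prefix>logs/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Expiration>
      <Days>30</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```

Clients such as s3cmd can apply a rules file like this (for example with `s3cmd setlifecycle`); since this is the initial lifecycle support, check the s3gw documentation for which parts of the lifecycle specification the SFS backend honors.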
+ + + +Should a bug be found and not expected to be related to the list below, one +should feel encouraged to file an issue in our +[GitHub repository](https://github.com/aquarist-labs/s3gw/issues/new/choose). + +## Features + +- SFS: Initial lifecycle management support +- SFS: Object Lock - Legal holds +- SFS: Metadata database: Add indices to often-queried columns +- SFS: Simplify write state machine. Remove _writing_ object state. + Writes no longer need to update the object state during IO. +- SFS: Update radosgw to Ceph Upstream 0e2e7d594b8 +- UI: Display object data more intuitively +- UI: Enhance user key management page +- UI: Add button to copy the current path of the object browser to the clipboard +- UI: Lifecycle management + +## Fixes + +- None + +## Breaking Changes + +- On-disk format for the metadata store changed + +## Known Issues + +No known issues diff --git a/blog/2023-04-28-release-notes-0.15.md b/blog/2023-04-28-release-notes-0.15.md new file mode 100644 index 0000000..068be82 --- /dev/null +++ b/blog/2023-04-28-release-notes-0.15.md @@ -0,0 +1,50 @@ +--- +title: Release Notes - v0.15.0 +description: Release notes for v0.15.0 +slug: release-notes-v0.15 +authors: + - name: The s3gw team +tags: [release-notes] +hide_table_of_contents: false +--- + +# Release Notes - v0.15.0 + +This release focuses on stabilizing our continuous integration and release process. +In this context, we have also addressed a number of issues that were affecting our +testing framework when automatically triggered by CI. + + + +Although this activity may not result in any direct user-facing improvements, +it plays a crucial role in maintaining a stable environment for the upcoming major +enhancements that the s3gw team is currently developing. + +We continue to address the regular issues that affect all of s3gw's components. + +This release is meant for testing and feedback gathering. It is not recommended +for production use. 
+ +Should a bug be found and not expected to be related to the list below, one +should feel encouraged to file an issue in our +[GitHub repository](https://github.com/aquarist-labs/s3gw/issues/new/choose). + +## Features + +- SFS: Improve error handling and robustness of non-multipart PUT operations. +- SFS: Telemetry: the backend now periodically exchanges data with our upgrade responder. +- UI: Add tags support for objects. + +## Fixes + +- CI: Various fixes focused on the stabilization and consistency of the process. +- Tests: Various fixes related to the integration with both the CI and the + release process. + +## Breaking Changes + +- None + +## Known Issues + +- SFS: Non-versioned GETs may observe dirty data of concurrent non-multipart PUTs. diff --git a/blog/2023-05-11-release-notes-0.16.md b/blog/2023-05-11-release-notes-0.16.md new file mode 100644 index 0000000..0d86b02 --- /dev/null +++ b/blog/2023-05-11-release-notes-0.16.md @@ -0,0 +1,51 @@ +--- +title: Release Notes - v0.16.0 +description: Release notes for v0.16.0 +slug: release-notes-v0.16 +authors: + - name: The s3gw team +tags: [release-notes] +hide_table_of_contents: false +--- + +# Release Notes - v0.16.0 + +This release cycle focused on architecture adjustments to the s3gw service's +backend store (SFS), which will be reflected in upcoming releases. + + + +The most noteworthy outcome of this release is the initial COSI support for s3gw. +This can be enabled via the Helm chart. + + +We have also disabled user and bucket quotas via the UI. Quotas are currently +not supported by the s3gw service and have been kept in the UI to demonstrate +what we believe to be the right approach to them. As backend development +progresses, quotas will be re-enabled when the right time comes. + +This release is meant for testing and feedback gathering. It is not recommended +for production use. 
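As a sketch of what the experimental COSI support enables — assuming the upstream Kubernetes Container Object Storage Interface `objectstorage.k8s.io/v1alpha1` API, with a placeholder bucket class name that depends on how the chart is deployed — a bucket can be requested declaratively:

```yaml
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
  name: example-bucket-claim       # illustrative name
  namespace: default
spec:
  bucketClassName: example-class   # placeholder; depends on your deployment
  protocols:
    - S3
```

The COSI driver then provisions a bucket on the s3gw service and hands credentials back to the workload, much like a PVC does for block storage.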
+

Should a bug be found and not expected to be related to the list below, one
should feel encouraged to file an issue in our
[GitHub repository](https://github.com/aquarist-labs/s3gw/issues/new/choose).

## Features

- Kubernetes: Add experimental COSI support.
- UI: Add new experimental Python backend for the UI.
- UI: Disable bucket and user quotas in the UI.

## Fixes

- None

## Breaking Changes

- None

## Known Issues

- SFS: Non-versioned GETs may observe dirty data from concurrent non-multipart
  PUTs. diff --git a/blog/2023-06-19-release-notes-0.17.md b/blog/2023-06-19-release-notes-0.17.md new file mode 100644 index 0000000..352a7f1 --- /dev/null +++ b/blog/2023-06-19-release-notes-0.17.md @@ -0,0 +1,48 @@ +--- +title: Release Notes - v0.17.0 +description: Release notes for v0.17.0 +slug: release-notes-v0.17 +authors: + - name: The s3gw team +tags: [release-notes] +hide_table_of_contents: false +--- + +# Release Notes - v0.17.0 + +This release contains a number of changes to the internal data structures and +metadata schema in preparation for a more streamlined versioning and multipart +implementation. In addition to that, the UI received a number of bug fixes, +quality-of-life improvements and a stylistic overhaul, including the logo and +color scheme. The UI also gained a large number of end-to-end tests as well as +an update to the Angular version. + + + +This release is meant for testing and feedback gathering. It is not recommended +for production use. + +Should a bug be found and not expected to be related to the list below, one +should feel encouraged to file an issue in our +[GitHub repository](https://github.com/aquarist-labs/s3gw/issues/new/choose). 
+

## Features

- UI: Branding Support (#552)
- UI: Upgrade to Angular 15 (#513)
- UI: Adapt logo and style (#530)
- UI: Various improvements

## Fixes

- UI: Fix incorrect pagination when using search/filters (#559)
- UI: Fix search function only searching a single page (#556)
- UI: Fix redundant 'clear' buttons for search (#554)
- UI: Fix objects with delete markers being displayed (#548)
- Chart: Fix "unsupported protocol" bug for the COSI driver (#511)

## Breaking Changes

- On-disk format for the metadata store changed

## Known Issues diff --git a/blog/2023-07-06-release-notes-0.18.md b/blog/2023-07-06-release-notes-0.18.md new file mode 100644 index 0000000..36390ec --- /dev/null +++ b/blog/2023-07-06-release-notes-0.18.md @@ -0,0 +1,47 @@ +--- +title: Release Notes - v0.18.0 +description: Release notes for v0.18.0 +slug: release-notes-v0.18 +authors: + - name: The s3gw team +tags: [release-notes] +hide_table_of_contents: false +--- + +# Release Notes - v0.18.0 + +This release contains numerous fixes for the UI and a refactoring of the object +versioning implementation. + + + +This release is meant for testing and feedback gathering. It is not recommended +for production use. + +Should a bug be found and not expected to be related to the list below, one +should feel encouraged to file an issue in our
[GitHub repository](https://github.com/aquarist-labs/s3gw/issues/new/choose). 
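
For context on what a versioning refactor has to get right, S3's versioning semantics can be sketched with a tiny in-memory model: every PUT creates a new version, an unqualified DELETE writes a delete marker rather than destroying data, and older versions stay retrievable by version ID. This is purely illustrative (the class and names are invented, not s3gw's SFS code):

```python
import itertools


class VersionedBucket:
    """Toy model of S3 versioning semantics. Not s3gw's
    implementation -- an illustration of the protocol's behavior."""

    def __init__(self):
        self._versions = {}          # key -> list of (version_id, data or None)
        self._ids = itertools.count(1)

    def put(self, key, data):
        # Every PUT appends a brand-new version; nothing is overwritten.
        vid = f"v{next(self._ids)}"
        self._versions.setdefault(key, []).append((vid, data))
        return vid

    def delete(self, key):
        # An unqualified DELETE appends a delete marker (data=None)
        # instead of removing any stored version.
        vid = f"v{next(self._ids)}"
        self._versions.setdefault(key, []).append((vid, None))
        return vid

    def get(self, key, version_id=None):
        history = self._versions.get(key, [])
        if version_id is None:
            # Latest-version GET fails if the newest entry is a marker.
            if not history or history[-1][1] is None:
                raise KeyError(key)
            return history[-1][1]
        for vid, data in history:
            if vid == version_id:
                if data is None:
                    raise KeyError(key)
                return data
        raise KeyError(key)


b = VersionedBucket()
v1 = b.put("a.txt", b"one")
b.put("a.txt", b"two")
b.delete("a.txt")                     # latest GET now fails ...
assert b.get("a.txt", v1) == b"one"   # ... but old versions survive
```

The subtlety this models, that deletion is itself just another version, is exactly the kind of state the UI fixes below (delete markers, restoring versions) have to handle correctly.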
+ +## Features + +- UI: Add a hint to the prefix field in the lifecycle rule dialog (#600) +- UI: Enhance branding support (#572) +- SFS: Implement new versioning design (#378, #472, #547, #526, #524, #519) + +## Fixes + +- UI: Deleting a versioned object is not properly implemented (#550) +- UI: Do not delete object by version (#576) +- UI: Prevent the restoring of the deleted object version (#583) +- UI: Creating an enabled lifecycle rule is not working (#587) +- UI: Disable download button for deleted objects (#595) +- UI: Do not close data table column menu on inside clicks (#599) +- Chart: Update logo and source URLs (#570) +- Chart: Validate email for tls issuer (#596) +- Chart: Fix installation failure when publicDomain is empty (#602) + +## Breaking Changes + +- On-disk format for the metadata store changed + +## Known Issues diff --git a/docusaurus.config.js b/docusaurus.config.js index a91d6b7..af7ada9 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -68,6 +68,8 @@ module.exports = { blogTitle: 'Docusaurus blog!', blogDescription: 'A Docusaurus powered blog!', postsPerPage: 'ALL', + blogSidebarCount: 30, + blogSidebarTitle: 'Latest posts', }, theme: { customCss: [require.resolve("./src/css/custom.css")],