
Commit

deploy: 0877f87
gdha committed Jul 5, 2024
1 parent 17a27ad commit 5e1a190
Showing 11 changed files with 18 additions and 18 deletions.
2 changes: 1 addition & 1 deletion index.html
@@ -193,5 +193,5 @@ <h3 id="support_or_contact">Support or Contact<a class="headerlink" href="#suppo

<!--
MkDocs version : 1.6.0
-Build Date UTC : 2024-07-05 14:15:00.447483+00:00
+Build Date UTC : 2024-07-05 14:24:06.540331+00:00
-->
2 changes: 1 addition & 1 deletion pi-stories11/index.html
@@ -213,7 +213,7 @@ <h3 id="install_metalllb_layer2_load-balancer">Install metalllb layer2 load-bala
uid: 774b05a7-7ad7-4de3-a9e1-2a636f988ed1
EOD
</code></pre>
-<p>According the procedure to <a href="https://metallb.universe.tf/configuration/migration_to_crds/">generate a new CRD resourse file</a> we got:</p>
+<p>According the procedure to <a href="https://metallb.universe.tf/configuration/migration_to_crds/">generate a new CRD resource file</a> we got:</p>
<pre><code class="language-bash">$ docker run -d -v $(pwd):/var/input quay.io/metallb/configmaptocrs
Unable to find image 'quay.io/metallb/configmaptocrs:latest' locally
latest: Pulling from metallb/configmaptocrs
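For orientation: the `configmaptocrs` container reads the legacy ConfigMap from the mounted directory and writes the equivalent CRD objects to a `resources.yaml` file. A minimal sketch of what that output typically looks like (the pool name and address range here are illustrative, not taken from this commit):

```bash
$ cat resources.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 192.168.0.240-192.168.0.250   # the range the old ConfigMap advertised
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement1
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
```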
2 changes: 1 addition & 1 deletion pi-stories13/index.html
@@ -315,7 +315,7 @@ <h4 id="longhorn_service_monitor">Longhorn service monitor<a class="headerlink"
</code></pre>
<p>Alright, now we can move on the grafana.</p>
<h3 id="install_grafana">Install grafana<a class="headerlink" href="#install_grafana" title="Permanent link">&para;</a></h3>
-<p>Our grafana pod will alos use a longhorn device as defined under file:</p>
+<p>Our grafana pod will also use a longhorn device as defined under file:</p>
<pre><code class="language-bash"> cat grafana-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
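The claim itself is truncated in this hunk. For reference, a minimal Longhorn-backed PVC of the kind described generally looks like the following sketch (name, namespace, and size are assumptions, not read from the diff):

```bash
$ cat grafana-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce            # a Longhorn block volume with a single writer
  storageClassName: longhorn   # hands provisioning to the Longhorn CSI driver
  resources:
    requests:
      storage: 2Gi
```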
2 changes: 1 addition & 1 deletion pi-stories14/index.html
@@ -260,7 +260,7 @@ <h3 id="installation_of_loki_and_promtail">Installation of loki and promtail<a c
<p><img alt="" src="../img/loki-url.png" /></p>
<p>In our case it is <code>http://10.43.220.29:3100</code>. Thereafter, it is kust a matter of pressing "Save &amp; Test" button.</p>
<p>In the side bar of Grafana click on the "Explore" button and select "Loki" in the upper left corner (of the service to use).</p>
<p>Use Loki's "Log Lables" to select what you want to see - just play with it...</p>
<p>Use Loki's "Log Labels" to select what you want to see - just play with it...</p>
<p><img alt="" src="../img/loki-log-labels.png" /> </p>
<p>Your imagination is the limit with Loki/Grafana, e.g.</p>
<p><img alt="" src="../img/loki.png" /></p>
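To make the "just play with it" advice concrete: the same label selectors can also be exercised from a shell with Loki's `logcli`, assuming it is installed and pointed at the service address found above (the label values are illustrative):

```bash
# Point logcli at the Loki service discovered earlier
export LOKI_ADDR=http://10.43.220.29:3100

# Everything promtail scraped from one namespace
logcli query '{namespace="logging"}'

# Only the lines containing "error" from the promtail pods
logcli query '{namespace="logging", pod=~"loki-stack-promtail-.*"} |= "error"'
```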
4 changes: 2 additions & 2 deletions pi-stories15/index.html
@@ -233,7 +233,7 @@ <h3 id="pods_are_stuck_in_termating_status">PODs are stuck in termating status<a
logging loki-stack-promtail-nh2m7 1/1 Running 1 (33m ago) 97d
graphite graphite-0 1/1 Running
</code></pre>
-<p>Repeat these steps for the remining pods which are stuck:</p>
+<p>Repeat these steps for the remaining pods which are stuck:</p>
<pre><code class="language-bash">gdha@n1:~$ kubectl delete pod grafana-544f695579-g246k --grace-period=0 --force --namespace monitoring
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod &quot;grafana-544f695579-g246k&quot; force deleted
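Before force-deleting one pod at a time, it helps to list everything that is actually stuck; a common generic pattern (not part of the original article text) is:

```bash
# Show every pod still in Terminating state, across all namespaces
kubectl get pods --all-namespaces | grep Terminating
```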
@@ -355,7 +355,7 @@ <h3 id="pod_stuck_in_imagepullbackoff_status">POD stuck in ImagePullBackOff stat
NAME READY STATUS RESTARTS AGE
ntopng-6586866d8b-w668c 0/1 ImagePullBackOff 0 69m
</code></pre>
-<p>The first thing we think of is a missmatch of our GitHub Container Registry (ghrc) Personal Access Token (PAT). How can we verify that the PAT our kubernetes cluster knows is still the same as the one listed in our <code>~/.ghcr-token</code> file?</p>
+<p>The first thing we think of is a mismatch of our GitHub Container Registry (ghrc) Personal Access Token (PAT). How can we verify that the PAT our kubernetes cluster knows is still the same as the one listed in our <code>~/.ghcr-token</code> file?</p>
<p>As we know in which namespace to look (in our case ntopng) we can start digging as follow:</p>
<pre><code class="language-bash">$ kubectl get secret -n ntopng
NAME TYPE DATA AGE
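One way to answer the question raised above is to decode the image-pull secret the cluster holds and compare its token with `~/.ghcr-token`. A sketch, where `regcred` stands in for whatever secret name the `kubectl get secret -n ntopng` listing actually shows:

```bash
# Decode the .dockerconfigjson payload of the pull secret
kubectl get secret regcred -n ntopng \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d

# The "auth" field inside is base64("username:PAT"); decode it and
# compare the PAT part with the token stored in ~/.ghcr-token
```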
4 changes: 2 additions & 2 deletions pi-stories17/index.html
@@ -126,11 +126,11 @@
<h1 id="pi4_stories">PI4 Stories<a class="headerlink" href="#pi4_stories" title="Permanent link">&para;</a></h1>
<h2 id="raspberry_pi_4_cluster_series_-_deploying_ntopng_with_helm">Raspberry Pi 4 cluster Series - Deploying ntopng with helm<a class="headerlink" href="#raspberry_pi_4_cluster_series_-_deploying_ntopng_with_helm" title="Permanent link">&para;</a></h2>
<h3 id="download_the_github_sources_of_ntopng">Download the GitHub sources of ntopng<a class="headerlink" href="#download_the_github_sources_of_ntopng" title="Permanent link">&para;</a></h3>
-<p>We liked the <a href="https://github.com/ntop/ntopng">ntopng application</a> [1] so we thought why not integrate it with our kubernetes cluster. However, the original project did not have the required code to integrate it with our kubernetes cluster, but we did find another <a href="https://github.com/MySocialApp/kubernetes-helm-chart-ntopng.git">project</a> that provides a helm chart for ntopng. Yet another chalenge was building an image for arm64.</p>
+<p>We liked the <a href="https://github.com/ntop/ntopng">ntopng application</a> [1] so we thought why not integrate it with our kubernetes cluster. However, the original project did not have the required code to integrate it with our kubernetes cluster, but we did find another <a href="https://github.com/MySocialApp/kubernetes-helm-chart-ntopng.git">project</a> that provides a helm chart for ntopng. Yet another challenge was building an image for arm64.</p>
<p>Therefore, we cloned these 2 projects into our <a href="https://github.com/gdha/pi4-ntopng">pi4-ntopng github project</a> [2].</p>
<p>We have 2 ways to build a pi4_ntopng container. One with the <code>build.sh</code> script which uses the ntopng package which available in ubuntu 20.04 repository (currently version 3.8.190813 or v1.5). The second way is building from scratch (from the sources of https://github.com/ntop/ntopng dev branch) with the script <code>builder.sh</code> which uses the development version (beginning of October 2023 it is version 5.7.0 or v1.9).</p>
<h3 id="build_pi4-ntopng_with_buildsh_script_arm64">Build pi4-ntopng with build.sh script (arm64)<a class="headerlink" href="#build_pi4-ntopng_with_buildsh_script_arm64" title="Permanent link">&para;</a></h3>
-<p>We builded the pi4_ntopng with the <code>build.sh</code> script that uses the ntopng executable provided by the operating system used by the container (in our case ubuntu 20). To test the image (version 3.8.190813) with docker (before trying to integrate it within kubernetes) we can do the following:</p>
+<p>We built the pi4_ntopng with the <code>build.sh</code> script that uses the ntopng executable provided by the operating system used by the container (in our case ubuntu 20). To test the image (version 3.8.190813) with docker (before trying to integrate it within kubernetes) we can do the following:</p>
<pre><code class="language-bash">$ docker run --net=host -t -p 3000:3000 ghcr.io/gdha/pi4-ntopng:v1.5
WARNING: Published ports are discarded when using host network mode
Starting redis-server: redis-server.
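Once the container is up, a quick smoke test (a generic check, not from the article text shown here) is to probe the web UI that ntopng serves on port 3000:

```bash
# With --net=host the UI binds directly on the host interface
curl -I http://localhost:3000
```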
2 changes: 1 addition & 1 deletion pi-stories3/index.html
@@ -439,7 +439,7 @@ <h3 id="k3s_is_up_and_running">k3s is up and running?<a class="headerlink" href=
go version go1.19.4

gdha@n5:~/projects/k3s-ansible$ k3s kubectl cluster-info
-ubernetes control plane is running at https://127.0.0.1:6443
+kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

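Beyond `cluster-info`, two quick follow-up checks confirm the cluster is genuinely healthy (plain kubectl usage, not taken from the diff):

```bash
# All nodes Ready, and running the expected k3s version?
k3s kubectl get nodes -o wide

# System pods (coredns, metrics-server, ...) up?
k3s kubectl get pods -n kube-system
```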
6 changes: 3 additions & 3 deletions pi-stories5/index.html
@@ -57,7 +57,7 @@
<ul class="current">
<li class="toctree-l2"><a class="reference internal" href="#raspberry_pi_4_cluster_series_-_installing_cert-manager_on_the_k3s_cluster">Raspberry Pi 4 cluster Series - Installing cert-manager on the k3s cluster</a>
<ul>
<li class="toctree-l3"><a class="reference internal" href="#installling_cert-manager">Installling cert-manager</a>
<li class="toctree-l3"><a class="reference internal" href="#installing_cert-manager">Installing cert-manager</a>
</li>
</ul>
</li>
@@ -124,8 +124,8 @@
<h1 id="pi4_stories">PI4 Stories<a class="headerlink" href="#pi4_stories" title="Permanent link">&para;</a></h1>
<h2 id="raspberry_pi_4_cluster_series_-_installing_cert-manager_on_the_k3s_cluster">Raspberry Pi 4 cluster Series - Installing cert-manager on the k3s cluster<a class="headerlink" href="#raspberry_pi_4_cluster_series_-_installing_cert-manager_on_the_k3s_cluster" title="Permanent link">&para;</a></h2>
<p>As certificates are crucial in a kuberbetes cluster one of the first pods that one should install is <a href="https://cert-manager.io/docs/installation/">cert-manager</a>.</p>
<h3 id="installling_cert-manager">Installling cert-manager<a class="headerlink" href="#installling_cert-manager" title="Permanent link">&para;</a></h3>
<p>Installaion is extremelt easy with the following command:</p>
<h3 id="installing_cert-manager">Installing cert-manager<a class="headerlink" href="#installing_cert-manager" title="Permanent link">&para;</a></h3>
<p>Installation is extremelt easy with the following command:</p>
<pre><code class="language-bash">kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.11.0/cert-manager.yaml
</code></pre>
<p>In the time of writing this article the current version was v1.11.0 - you can change that to the <a href="https://github.com/jetstack/cert-manager/releases">latest release</a> available of course. Here follows an example of the instalaltion of cert-manager:</p>
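After applying the manifest, a typical verification step is to watch the cert-manager workloads come up (a generic check, assuming the default `cert-manager` namespace):

```bash
# cert-manager, cainjector and webhook should all reach Ready
kubectl get pods -n cert-manager
kubectl get deployments -n cert-manager
```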
2 changes: 1 addition & 1 deletion pi-stories6/index.html
@@ -123,7 +123,7 @@

<h1 id="pi4_stories">PI4 Stories<a class="headerlink" href="#pi4_stories" title="Permanent link">&para;</a></h1>
<h2 id="raspberry_pi_4_cluster_series_-_upgrading_k3s_software_on_your_cluster">Raspberry Pi 4 cluster Series - Upgrading k3s software on your cluster<a class="headerlink" href="#raspberry_pi_4_cluster_series_-_upgrading_k3s_software_on_your_cluster" title="Permanent link">&para;</a></h2>
-<p>It is advisable to track the security vulnarabilities published by Rancher Labs around k3s. For example, in November 2020 a critical bug was detected in k3s (see [1]). Therefore, it is quite important to be able to update k3s without interupting the k3s cluster, hence this procedure from Rancher Labs.</p>
+<p>It is advisable to track the security vulnerabilities published by Rancher Labs around k3s. For example, in November 2020 a critical bug was detected in k3s (see [1]). Therefore, it is quite important to be able to update k3s without interrupting the k3s cluster, hence this procedure from Rancher Labs.</p>
<pre><code class="language-bash">$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
n3 Ready &lt;none&gt; 117d v1.19.2+k3s1
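The Rancher Labs procedure referenced in that paragraph is built around the system-upgrade-controller and its `Plan` custom resource. A sketch of a server-node plan, following the upstream examples (the version, label selector, and namespace are assumptions, not taken from this commit):

```bash
$ cat server-plan.yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1          # upgrade one node at a time
  cordon: true            # mark each node unschedulable before upgrading it
  nodeSelector:
    matchExpressions:
      - {key: node-role.kubernetes.io/master, operator: In, values: ["true"]}
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.19.5+k3s1   # target release carrying the advisory fix
```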
8 changes: 4 additions & 4 deletions pi-stories7/index.html
@@ -61,7 +61,7 @@
<ul class="current">
<li class="toctree-l2"><a class="reference internal" href="#raspberry_pi_4_cluster_series_-_yaml_everywhere_-_what_about_correctness">Raspberry Pi 4 cluster Series - YAML everywhere - what about correctness?</a>
<ul>
<li class="toctree-l3"><a class="reference internal" href="#installling_the_go_language_binaries">Installling the Go Language binaries</a>
<li class="toctree-l3"><a class="reference internal" href="#installing_the_go_language_binaries">Installing the Go Language binaries</a>
</li>
<li class="toctree-l3"><a class="reference internal" href="#compiling_the_kubelinter_code">Compiling the KubeLinter code</a>
</li>
@@ -128,7 +128,7 @@ <h2 id="raspberry_pi_4_cluster_series_-_yaml_everywhere_-_what_about_correctness
<p>When you dive deep into Kubernetes you will notice you cannot go around <a href="https://blog.stackpath.com/yaml/">YAML</a> [1] language. You hate it or love it, however, you better get used to it as it is part of the core of kubernetes.</p>
<p>Writing YAML code from scratch is not a real pleasure, therefore, having a linter would be nice to avoid the low hanging fruit errors. We found a kubernetes linter tool called "<a href="https://github.com/stackrox/kube-linter">KubeLinter</a>" written in the Go language, however, there is no binary available for the Raspberry Pi4 architecture aarch64 on the <a href="https://github.com/stackrox/kube-linter/releases">release page of kube-linter</a>.</p>
<p>Therefore, we decided to build it ourselves from the sources.</p>
<h3 id="installling_the_go_language_binaries">Installling the Go Language binaries<a class="headerlink" href="#installling_the_go_language_binaries" title="Permanent link">&para;</a></h3>
<h3 id="installing_the_go_language_binaries">Installing the Go Language binaries<a class="headerlink" href="#installing_the_go_language_binaries" title="Permanent link">&para;</a></h3>
<p>On our node <em>n1</em> we installed the Go Language with the commands:</p>
<pre><code class="language-bash">$ sudo apt install golang-go
$ sudo apt install make
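With the toolchain in place, the build itself is short. A sketch, assuming the kube-linter sources are cloned first (the `make build` target follows the upstream Makefile and is an assumption here, not something shown in this diff):

```bash
# Confirm the Go toolchain works, then build from source
go version
git clone https://github.com/stackrox/kube-linter.git
cd kube-linter
make build
```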
@@ -177,13 +177,13 @@ <h3 id="testing_it_out_on_a_real_example">Testing it out on a real example<a cla

Error: found 6 lint errors
</code></pre>
-<p>The recommendations are perhaps not really perfect for this example as we do need a writable file system to be able to perform an update and <code>root</code> permissions will be required as well. However, the test itself was successful as it produces a meaningfull output.</p>
+<p>The recommendations are perhaps not really perfect for this example as we do need a writable file system to be able to perform an update and <code>root</code> permissions will be required as well. However, the test itself was successful as it produces a meaningful output.</p>
<p>For a more profound usage of <code>kube-linter</code> see the "<a href="https://docs.kubelinter.io/#/">KubeLinter documentation</a>" [4].</p>
<h2 id="references">References<a class="headerlink" href="#references" title="Permanent link">&para;</a></h2>
<p>[1] <a href="https://blog.stackpath.com/yaml/">YAML Ain't Markup Language</a></p>
<p>[2] <a href="https://github.com/gdha/kube-linter">Kube-Linter GitHub fork</a></p>
<p>[3] <a href="https://github.com/gdha/k3s-upgrade-controller/blob/main/system-upgrade-controller.yaml">CRD system-upgrade-controller</a></p>
-<p>[4] <a href="https://docs.kubelinter.io/#/">KubeLinter Documenation</a></p>
+<p>[4] <a href="https://docs.kubelinter.io/#/">KubeLinter Documentation</a></p>

</div>
</div><footer>
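For completeness, output like the six lint errors quoted above comes from an invocation of this shape (the file name is illustrative; reference [3] above points at the actual manifest used):

```bash
# Lint a single manifest; the exit code is non-zero when checks fail
./kube-linter lint system-upgrade-controller.yaml
```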
2 changes: 1 addition & 1 deletion search/search_index.json

Large diffs are not rendered by default.
