Update all links
majst01 committed Oct 18, 2023
1 parent 69f5639 commit 5b7287f
Showing 8 changed files with 20 additions and 20 deletions.
4 changes: 2 additions & 2 deletions docs/src/development/contributing.md
@@ -91,15 +91,15 @@ Development follows the official guide to:
- Write clear, idiomatic Go code[^2]
- Learn from mistakes that must not be repeated[^3]
- Apply appropriate names to your artifacts:
- - [https://talks.golang.org/2014/names.slide#1](https://talks.golang.org/2014/names.slide#1)
+ - [https://talks.golang.org/2014/names.slide](https://talks.golang.org/2014/names.slide)
- [https://go.dev/blog/package-names](https://go.dev/blog/package-names)
- [https://go.dev/doc/effective_go#names](https://go.dev/doc/effective_go#names)
- Enable others to understand the reasoning behind non-trivial code sequences by providing meaningful documentation.

#### Development Decisions

- **Dependency Management** by using Go modules
- - **Build and Test Automation** by using [GNU Make](https://linux.die.net/man/1/make).
+ - **Build and Test Automation** by using [GNU Make](https://man7.org/linux/man-pages/man1/make.1p.html).
- **End-user APIs** should consider using go-swagger and [Go-Restful](https://github.com/emicklei/go-restful)
**Technical APIs** should consider using [grpc](https://grpc.io/)
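
As an illustration of the Go-Restful recommendation above, here is a minimal sketch of an end-user REST endpoint; the `/v1/machine` path and the payload type are made up for this example and do not mirror the real metal-api routes:

```go
package main

import (
	"net/http"

	restful "github.com/emicklei/go-restful/v3"
)

// machine is a made-up payload type for this sketch.
type machine struct {
	ID string `json:"id"`
}

func main() {
	ws := new(restful.WebService)
	ws.Path("/v1/machine").
		Consumes(restful.MIME_JSON).
		Produces(restful.MIME_JSON)

	// GET /v1/machine/{id} echoes the requested id back as JSON.
	ws.Route(ws.GET("/{id}").To(func(req *restful.Request, resp *restful.Response) {
		_ = resp.WriteEntity(machine{ID: req.PathParameter("id")})
	}))

	restful.Add(ws)
	_ = http.ListenAndServe(":8080", nil)
}
```

Technical, machine-to-machine APIs would instead be expressed as a gRPC service definition.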

2 changes: 1 addition & 1 deletion docs/src/development/proposals/MEP1/README.md
@@ -82,7 +82,7 @@ In order to replicate certain data which must be available across all partitions
Postgres does not offer multi-datacenter replication in both directions; it can only make the remote instance store the same data.
- CockroachDB

- Is a PostgreSQL-compatible database engine on the wire. CockroachDB gives you both ACID and geo-replication with writes allowed from all connected members. It is even possible to configure [Follow the Workload](https://www.cockroachlabs.com/docs/stable/topology-follow-the-workload.html) and [Geo Partitioning and Replication](https://www.cockroachlabs.com/docs/v19.2/topology-geo-partitioned-replicas.html#main-content).
+ Is a PostgreSQL-compatible database engine on the wire. CockroachDB gives you both ACID and geo-replication with writes allowed from all connected members. It is even possible to configure [Follow the Workload](https://www.cockroachlabs.com/docs/stable/topology-follow-the-workload) and [Geo Partitioning and Replication](https://www.cockroachlabs.com/docs/v19.2/topology-geo-partitioned-replicas).

If we migrate all metal-api entities to be stored the same way we store masterdata, we could use CockroachDB to store all metal entities in one or more databases spread across all partitions and still ensure consistency and high availability.
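
Because CockroachDB speaks the PostgreSQL wire protocol, a standard Postgres driver is all a Go service needs to connect to it. A minimal sketch; the connection string, database name and `images` table are assumptions for this example:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // plain PostgreSQL driver, works because CockroachDB is wire-compatible
)

func main() {
	// 26257 is CockroachDB's default SQL port; credentials and sslmode are placeholders.
	db, err := sql.Open("postgres", "postgresql://metal@localhost:26257/masterdata?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Query an assumed table to show that ordinary SQL just works.
	var images int
	if err := db.QueryRow("SELECT count(*) FROM images").Scan(&images); err != nil {
		log.Fatal(err)
	}
	fmt.Println("images:", images)
}
```

The same code would run unchanged against a plain PostgreSQL instance, which keeps both directions of a migration open.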

20 changes: 10 additions & 10 deletions docs/src/development/proposals/MEP10/README.md
@@ -6,9 +6,9 @@ version that supports Broadcom ASICs. Since trashing the existing hardware is no
different network operating system is necessary.

One of the remaining big players is [SONiC](https://sonic-net.github.io/SONiC/), which Microsoft created to scale the
- network of Azure. It's an open-source project and is now part of the [Linux Foundation](https://www.linuxfoundation.org/press-release/software-for-open-networking-in-the-cloud-sonic-moves-to-the-linux-foundation/).
+ network of Azure. It's an open-source project and is now part of the [Linux Foundation](https://www.linuxfoundation.org/press/press-release/software-for-open-networking-in-the-cloud-sonic-moves-to-the-linux-foundation).

- For a general introduction to SONiC, please follow the (https://github.com/sonic-net/SONiC/wiki/Architecture) official
+ For a general introduction to SONiC, please follow the [Architecture](https://github.com/sonic-net/SONiC/wiki/Architecture) official
documentation.

## ConfigDB
@@ -31,17 +31,17 @@ elif [ "$CONFIG_TYPE" == "split" ]; then
rm -f /etc/frr/frr.conf
```

- Reference: https://github.com/Azure/sonic-buildimage/blob/202205/dockers/docker-fpm-frr/docker_init.sh#L69
+ Reference: [docker-init](https://github.com/sonic-net/sonic-buildimage/blob/202205/dockers/docker-fpm-frr/docker_init.sh#L69)

To add support for the integrated configuration mode, we must at least adjust the startup shell script and the supervisor configuration:

- ```
+ ```bash
{% if DEVICE_METADATA.localhost.docker_routing_config_mode is defined and DEVICE_METADATA.localhost.docker_routing_config_mode == "unified" %}
[program:vtysh_b]
command=/usr/bin/vtysh -b
```
- Reference: https://github.com/Azure/sonic-buildimage/blob/202205/dockers/docker-fpm-frr/frr/supervisord/supervisord.conf.j2#L157
+ Reference: [supervisord.conf](https://github.com/sonic-net/sonic-buildimage/blob/202205/dockers/docker-fpm-frr/frr/supervisord/supervisord.conf.j2#L157)
## Non-BGP Configuration
@@ -53,15 +53,15 @@ For the Non-BGP configuration we have to write it into the Redis database direct
Directly writing into the Redis database isn't a stable interface, and we must determine the create, delete, and update
operations on our own. The last point is also valid for the Mgmt Framework and the SONiC restapi. Furthermore, the
- Mgmt Framework has not been able to start for several months, and a [potential fix](https://github.com/Azure/sonic-buildimage/pull/10893)
+ Mgmt Framework has not been able to start for several months, and a [potential fix](https://github.com/sonic-net/sonic-buildimage/pull/10893)
is still not merged. And the SONiC restapi isn't enabled by default, so we would have to build and maintain our own SONiC images.
Using `config replace` would reduce the complexity in the `metal-core` codebase because we don't have to determine the
actual changes between the running and the desired configuration. The drawbacks of this approach are that it requires a version of SONiC
- that contains the PR [Yang support for VXLAN](https://github.com/Azure/sonic-buildimage/pull/7294), and we must provide
+ that contains the PR [Yang support for VXLAN](https://github.com/sonic-net/sonic-buildimage/pull/7294), and we must provide
the whole new startup configuration to prevent unwanted deconfiguration.
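
To make the direct-write option discussed above more tangible, here is a minimal sketch that persists a loopback address into the ConfigDB through Redis; the database index, key layout and address are assumptions based on common SONiC conventions rather than a stable, documented interface:

```go
package main

import (
	"context"
	"log"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	// On SONiC, CONFIG_DB is conventionally Redis database 4; address and index are assumptions here.
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379", DB: 4})

	// ConfigDB tables are Redis hashes keyed as "TABLE|key"; entries without further
	// attributes carry the placeholder field NULL=NULL. The loopback IP is made up.
	for _, key := range []string{
		"LOOPBACK_INTERFACE|Loopback0",
		"LOOPBACK_INTERFACE|Loopback0|10.0.0.11/32",
	} {
		if err := rdb.HSet(ctx, key, "NULL", "NULL").Err(); err != nil {
			log.Fatal(err)
		}
	}
}
```

Exactly this kind of implicit contract is what makes the direct Redis approach fragile compared to `config replace`.
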
- #### Configure Loopback interface and activate VXLAN
+ ### Configure Loopback interface and activate VXLAN
```json
{
@@ -167,7 +167,7 @@ if not port_alias:
lldpcli_cmd = "lldpcli configure ports {0} lldp portidsubtype local {1}".format(port_name, port_alias)
```
- Reference: https://github.com/Azure/sonic-buildimage/blob/202205/dockers/docker-lldp/lldpmgrd#L153
+ Reference: [lldpmgr](https://github.com/sonic-net/sonic-buildimage/blob/202205/dockers/docker-lldp/lldpmgrd#L153)
## Mgmt Interface
@@ -188,4 +188,4 @@ The mgmt interface is `eth0`. To configure a static IP address and activate the
}
```
- [IP forwarding is deactivated on `eth0`](https://github.com/Azure/sonic-buildimage/blob/202205/files/image_config/sysctl/sysctl-net.conf#L7), and no IP Masquerade is configured.
+ [IP forwarding is deactivated on `eth0`](https://github.com/sonic-net/sonic-buildimage/blob/202205/files/image_config/sysctl/sysctl-net.conf#L7), and no IP Masquerade is configured.
4 changes: 2 additions & 2 deletions docs/src/development/proposals/MEP11/README.md
@@ -42,9 +42,9 @@ The metal-api will only write to the current index and switches to the new index

As Meilisearch will be filled with data over time, we want to move completed chunks to an S3-compatible storage. This will be done by a sidecar cronjob that is executed periodically. Note that the periods of the index rotation and the cronjob execution don't have to match.

- When the backup process gets started, it initiates a [Meilisearch dump](https://docs.meilisearch.com/learn/advanced/dumps.html) of the whole database across all indices. Once the returned task is finished, the dump must be copied from a Meilisearch volume to the S3 compatible storage. After a successful copy, the dump can be deleted.
+ When the backup process gets started, it initiates a [Meilisearch dump](https://docs.meilisearch.com/learn/advanced/dumps) of the whole database across all indices. Once the returned task is finished, the dump must be copied from a Meilisearch volume to the S3 compatible storage. After a successful copy, the dump can be deleted.

- Now we want to remove all indices from Meilisearch, except the most recent one. For this, we [get all indices](https://docs.meilisearch.com/reference/api/indexes.html#list-all-indexes), sort them and [delete each index](https://docs.meilisearch.com/reference/api/indexes.html#delete-an-index) except the most recent one to avoid data loss.
+ Now we want to remove all indices from Meilisearch, except the most recent one. For this, we [get all indices](https://docs.meilisearch.com/reference/api/indexes), sort them and [delete each index](https://docs.meilisearch.com/reference/api/indexes) except the most recent one to avoid data loss.
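
A rough sketch of the involved REST calls (trigger a dump, list the indices, delete an outdated one); host, API key and index uid are placeholders, the async dump task is not awaited, and parsing of the responses is omitted:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

// call fires a single request against the Meilisearch REST API and returns the raw body.
// Host and API key are placeholders for this sketch.
func call(method, path string) (string, error) {
	req, err := http.NewRequest(method, "http://localhost:7700"+path, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Authorization", "Bearer MASTER_KEY")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	// 1. Trigger a dump of the whole database; Meilisearch answers with an async task.
	if _, err := call(http.MethodPost, "/dumps"); err != nil {
		log.Fatal(err)
	}

	// 2. List all indices; the caller would sort them and keep the most recent one.
	indexes, err := call(http.MethodGet, "/indexes")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(indexes)

	// 3. Delete an outdated index by uid ("machines-2023-09" is a made-up example).
	if _, err := call(http.MethodDelete, "/indexes/machines-2023-09"); err != nil {
		log.Fatal(err)
	}
}
```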

For the actual implementation, we can build upon [backup-restore-sidecar](https://github.com/metal-stack/backup-restore-sidecar). But due to the index rotation and the fact that older indices need to be deleted, this probably does not fit into the mentioned sidecar.

2 changes: 1 addition & 1 deletion docs/src/external/firewall-controller/README.md
@@ -193,7 +193,7 @@ curl node-exporter.firewall.svc.cluster.local:9100/metrics

## Firewall Logs

- It is also possible to tail for the dropped packets with the following command (install stern from [stern](https://github.com/wercker/stern)):
+ It is also possible to tail for the dropped packets with the following command (install stern from [stern](https://github.com/stern/stern)):

```bash
stern -n firewall drop
2 changes: 1 addition & 1 deletion docs/src/external/mini-lab/README.md
@@ -24,7 +24,7 @@ The mini-lab is a small, virtual setup to locally run the metal-stack. It deploy
- [docker](https://www.docker.com/) >= 18.09 (for using kind and our deployment base image)
- [docker-compose](https://docs.docker.com/compose/) >= 2.0 (for ease of use and for parallelizing control plane and partition deployment)
- [kind](https://github.com/kubernetes-sigs/kind/releases) == v0.15.0 (for hosting the metal control plane on a kubernetes cluster v1.25)
- - [containerlab](https://containerlab.srlinux.dev/install/) == v0.25.1
+ - [containerlab](https://containerlab.dev/) == v0.25.1
- the lab creates a docker network on your host machine (`172.17.0.1`); this hopefully does not overlap with other networks you have
- (recommended) haveged to have enough random entropy (only needed if the PXE process does not work)

4 changes: 2 additions & 2 deletions docs/src/installation/deployment.md
@@ -453,7 +453,7 @@ metal_api_grpc_certs_ca_cert: "{{ lookup('file', 'certs/ca.pem') }}"

!!! tip

- For the actual communication between the metal-api and the user clients (REST API, runs over the ingress-controller you deployed before), you can simply deploy a tool like [cert-manager](https://github.com/jetstack/cert-manager) into your Kubernetes cluster, which will automatically provide your ingress domains with Let's Encrypt certificates.
+ For the actual communication between the metal-api and the user clients (REST API, runs over the ingress-controller you deployed before), you can simply deploy a tool like [cert-manager](https://github.com/cert-manager/cert-manager) into your Kubernetes cluster, which will automatically provide your ingress domains with Let's Encrypt certificates.

### Running the Deployment

@@ -555,7 +555,7 @@ Checkout the [role documentation](https://github.com/metal-stack/metal-roles/tre

metal-stack currently supports two authentication methods:

- - [dex](https://github.com/dexidp/dex) for providing user authentication through [OpenID Connect](https://openid.net/connect/) (OIDC)
+ - [dex](https://github.com/dexidp/dex) for providing user authentication through [OpenID Connect](https://openid.net/developers/how-connect-works/) (OIDC)
- [HMAC](https://en.wikipedia.org/wiki/HMAC) auth, typically used for access by technical users (because we do not have service account tokens for the time being); see the sketch below
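
The mechanism behind the HMAC option is simple to sketch in Go; what exactly is signed and which headers carry the timestamp and signature are defined by the metal-api and its clients, and are only assumed here for illustration:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

// sign computes an HMAC-SHA256 over a timestamp and the request method.
// The exact payload that gets signed in metal-stack is an assumption of this sketch.
func sign(sharedSecret, method string, ts time.Time) string {
	mac := hmac.New(sha256.New, []byte(sharedSecret))
	mac.Write([]byte(ts.UTC().Format(time.RFC3339) + " " + method))
	return hex.EncodeToString(mac.Sum(nil))
}

func main() {
	ts := time.Now()
	// The header names below are illustrative, not the authoritative metal-api contract.
	fmt.Println("X-Date:", ts.UTC().Format(time.RFC3339))
	fmt.Println("Authorization: HMAC", sign("a-shared-secret", "GET", ts))
}
```

The server side recomputes the same HMAC with its copy of the shared secret and compares the results, which is why this method suits technical users with statically configured keys.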

In the metal-api, we have three different user roles for authorization:
2 changes: 1 addition & 1 deletion docs/src/overview/networking.md
@@ -74,7 +74,7 @@ It is not only important to have a scalable and resilient infrastructure but als

### BGP

- For routing, the **Border Gateway Protocol (BGP)**, more specifically External BGP, was selected. Extensive testing and operational experience have shown that External BGP is well suited as a stand-alone routing protocol (see: [RFC7938](https://tools.ietf.org/html/rfc7938)).
+ For routing, the **Border Gateway Protocol (BGP)**, more specifically External BGP, was selected. Extensive testing and operational experience have shown that External BGP is well suited as a stand-alone routing protocol (see: [RFC7938](https://datatracker.ietf.org/doc/html/rfc7938)).

Not all tenant servers are connected to the same leaf. Instead, they can be distributed among any of the leaves of the data center. So that this detail does not restrict intra-tenant communication, those layer-2 domains have to be interconnected. In the context of BGP, the concept of overlay networking with VXLAN/EVPN was evaluated to satisfy the needs of the metal-stack.
