SUSE Linux Enterprise Server deployment with Docker Compose on Azure VMs.
Generate the temporary keys to be used:
mkdir .keys && ssh-keygen -f .keys/temp_rsa
Create the .auto.tfvars file from the template:
# Choose your distro
cp templates/suse(15|12).auto.tfvars .auto.tfvars
Set the subscription_id and the allowed_public_ips variables.
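A minimal sketch of the resulting .auto.tfvars (the variable names come from the step above; the values are placeholders, and any other variables in the template keep their defaults):

```hcl
subscription_id    = "00000000-0000-0000-0000-000000000000"
allowed_public_ips = ["203.0.113.10"]
```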
Tip
Check for available updates to packages installed via cloud-init, and update the scripts.
Create the resources:
terraform init
terraform apply -auto-approve
Connect to the virtual machine:
ssh -i .keys/temp_rsa suseadmin@<PUBLIC-IP>
Check cloud-init:
cloud-init status
Create an Artifact Feed of type Universal Packages in an ADO project.
💡 For practical implementation of this project, it is possible to select all members. However, implement minimal privilege in production.
You must give Contributor permissions for the pipeline to publish packages. Check the Pipelines permissions section for more information.
Now create a pipeline on ADO using azure-pipeline.yaml as a template. Add the variables projectName and feedName accordingly.
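If the variables are defined in the YAML itself rather than in the pipeline UI (an assumption; azure-pipeline.yaml may expect them as UI variables instead), the block could look like:

```yaml
variables:
  projectName: 'my-ado-project'   # placeholder: your ADO project name
  feedName: 'my-universal-feed'   # placeholder: the Universal Packages feed created above
```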
Run the pipeline and confirm that the artifact has been generated.
Add the VM System-Assigned identity to Azure DevOps.
When logged into the VM, login with the VM Managed Identity:
az login --identity --allow-no-subscriptions
The Azure DevOps Extension for the CLI is already installed via userdata.
It is necessary to run additional commands to allow a Managed Identity to connect to Azure DevOps. Follow the documentation to implement that.
Configuration will be performed with the Azure CLI DevOps extension.
Preferably for this operation, use the interactive Azure CLI login:
az login
Optionally, this can also be set:
az devops configure --defaults organization=<your-org-url> project=<your-project-name>
It is possible to connect from the VM to ADO using Managed Identities with a connected tenant.
To log in with such an identity, use a variation of the az login command:
az login --identity --allow-no-subscriptions
Using upstream sources it is possible to store packages from various sources in a single feed.
Follow the procedure on how to set up upstream sources for this configuration.
Tip
If you don't need all of the upstream sources, remove them from the feed.
The requirements for this approach (using Maven) are:
- Java
- Maven
- Personal Access Token (PAT) with minimal permissions
- Maven settings.xml setup
The Maven settings.xml configuration should look like this:
<settings>
  <servers>
    <server>
      <id>azure-devops-feed-id</id>
      <username>anything</username>
      <password>YOUR_PERSONAL_ACCESS_TOKEN</password>
    </server>
  </servers>
</settings>
Set the -DrepoUrl value, and run the command:
mvn dependency:get \
-DrepoUrl=<repository-url> \
-Dartifact="com.microsoft.sqlserver:mssql-jdbc:12.8.1.jre11"
The downloaded JAR will be available under the ~/.m2/repository location.
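Maven lays the local repository out as groupId-as-directories/artifactId/version, so the exact JAR path can be derived from the coordinates used above (a bash sketch):

```shell
#!/bin/bash
# Derive the local-repository path from the Maven coordinates used above
artifact="com.microsoft.sqlserver:mssql-jdbc:12.8.1.jre11"
IFS=':' read -r group id version <<< "$artifact"

# groupId dots become directory separators under ~/.m2/repository
path="$HOME/.m2/repository/${group//.//}/${id}/${version}/${id}-${version}.jar"
echo "$path"
```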
To enable containers with advanced features, such as service endpoints, you need the CNI plugin. More information on how to deploy the plugin is available in the project on GitHub.
Following tutorial 1 and tutorial 2, install Nginx.
Note
This was tested on SUSE 12 only.
Prepare the installation:
sudo zypper addrepo -G -t yum -c 'http://nginx.org/packages/sles/12' nginx
wget http://nginx.org/keys/nginx_signing.key
sudo rpm --import nginx_signing.key
Install Nginx:
sudo zypper install nginx
Commands to control Nginx:
sudo systemctl start nginx
sudo systemctl restart nginx
sudo systemctl stop nginx
sudo systemctl status nginx
Instead of enabling the service directly, let's configure a crontab.
Create a file named /opt/start-nginx.sh:
#!/bin/bash
echo "Starting NGINX"
sudo systemctl start nginx
echo "Completed starting NGINX"
Add the required permissions:
chmod +x /opt/start-nginx.sh
Edit the crontab:
crontab -e
Set the script path:
@reboot /opt/start-nginx.sh
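Optionally (not part of the original setup), redirect the script output to a log file so boot-time runs are easier to debug:

```
@reboot /opt/start-nginx.sh >> /var/log/start-nginx.log 2>&1
```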
Crontab logs can be viewed with the journal:
journalctl --no-hostname --output=short-precise | grep -i cron
Immediately, using dig to resolve the storage IP address should return a public IP granted by Private Link integration.
dig stsuse82913.blob.core.windows.net
It is also expected to resolve to the public IP using an external DNS.
dig @8.8.8.8 stsuse82913.blob.core.windows.net
Copy the getblob.sh template file:
cp templates/getblob.sh getblob.sh
Edit the storage_account and access_key variables.
Test the script:
bash getblob.sh
To force curl through a proxy, use the -x option:
Tip
Once the proxy is set in Linux, curl will pick up the configuration automatically. To force no proxy, use the --noproxy option.
curl -x "http://43.153.208.148:3128" https://stsuse82913.blob.core.windows.net
Create a proxy for testing, or use a free option.
Caution
If using a free proxy, do not use real credentials while testing.
Proxy configuration can be global or single user (SUSE documentation).
For global configuration, in /etc/sysconfig/proxy:
Important
For NO_PROXY, the wildcard character is a leading . (dot).
PROXY_ENABLED="yes"
HTTP_PROXY="http://43.153.208.148:3128"
HTTPS_PROXY="http://43.153.208.148:3128"
NO_PROXY="localhost, 127.0.0.1, .blob.core.windows.net"
For a single user, such as in .bashrc:
export http_proxy="http://43.153.208.148:3128"
export https_proxy="http://43.153.208.148:3128"
export no_proxy="localhost, 127.0.0.1, .blob.core.windows.net"
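To confirm the variables are exported in the current shell (reusing the example proxy address above):

```shell
#!/bin/bash
# Export the single-user proxy settings from above, then list them;
# child processes (curl, docker CLI, etc.) inherit exactly these values
export http_proxy="http://43.153.208.148:3128"
export no_proxy="localhost,127.0.0.1,.blob.core.windows.net"
env | grep -i '_proxy'
```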
When using private connections or trusted services, proxy exceptions can be configured. These are typically defined in "no proxy" configuration values.
For example, Microsoft Azure services connected via Private Link, such as *.blob.core.windows.net and .azurecr.io.
When using Docker, consider the allowlist. Example: hub.docker.com, registry-1.docker.io, and production.cloudflare.docker.com.
Configuration can be done for the CLI and for the daemon.
As stated in the documentation, proxy-related environment variables are automatically copied:
When you start a container, its proxy-related environment variables are set to reflect your proxy configuration in ~/.docker/config.json.
This could have unintended consequences when using wildcards.
Important
In the Docker configuration, the wildcard character is *. This can break the Linux proxy configuration, as it does not support the * wildcard; only entries starting with . will work.
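Because of that mismatch, a Docker-style noProxy list cannot be pasted into /etc/sysconfig/proxy as-is. A quick way to translate the * wildcards into the leading-dot form Linux expects (a convenience sketch, not part of the original setup):

```shell
#!/bin/bash
# Rewrite Docker-style "*.domain" entries as Linux-style ".domain"
docker_noproxy='127.0.0.0/8,*.blob.core.windows.net,*.docker.com'
linux_noproxy=$(printf '%s' "$docker_noproxy" | sed 's/\*\././g')
echo "$linux_noproxy"
# prints: 127.0.0.0/8,.blob.core.windows.net,.docker.com
```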
For the CLI, in the file ~/.docker/config.json:
{
"proxies": {
"default": {
"httpProxy": "http://43.153.208.148:3128",
"httpsProxy": "http://43.153.208.148:3128",
"noProxy": "127.0.0.0/8,*.blob.core.windows.net,*.docker.com,*.docker.io,*.cloudflare.docker.com"
}
}
}
For the daemon, in the file daemon.json (its location can vary):
{
"proxies": {
"http-proxy": "http://43.153.208.148:3128",
"https-proxy": "http://43.153.208.148:3128",
"no-proxy": "127.0.0.0/8,*.blob.core.windows.net,*.docker.com,*.docker.io,*.cloudflare.docker.com"
}
}
After changing the configuration file, restart the daemon:
sudo systemctl restart docker
You'll need to log in to Docker Hub.
Important
Prefer using a PAT for testing, and delete it later. Or use a custom proxy.
docker login -u <username>
Pull the image for testing:
docker pull ubuntu
Connect interactively to the container:
# Run it with a name so it can be restarted later
docker run -i -t --name containername ubuntu bash
# If needed, reconnect
docker start containername
docker attach containername
Install the required tools:
apt update && apt install -y dnsutils vim nano curl openssl
Test again using the getblob.sh script template.
Finally, destroy the resources:
terraform destroy -auto-approve