The purpose of these images is to provide a full-featured, web-native Linux desktop experience for any Linux application or desktop environment. These images replace our old base images at Rdesktop Web with greatly increased performance, fidelity, and feature set. They ship with passwordless sudo to allow easy package installation, testing, and customization. By default they have no logic to mount out anything but the user's home directory, meaning that on image updates anything outside of /config will be lost.
These images contain the following services:
- KasmVNC - The core technology for interacting with a containerized desktop from a web browser.
- Kclient - Node.js iframe wrapper for KasmVNC providing audio and file access.
- NGINX - Used to serve the mix of KasmVNC and Kclient with the appropriate headers and provide basic auth.
- Docker - Can be used to interact with a mounted-in Docker socket, or, if the container is run in privileged mode, will start a DinD setup.
- PulseAudio - Sound subsystem used to capture audio from the active desktop session and send it to the browser via the Kclient helper application.
Authentication for these containers is included as a convenience and to keep in sync with the previous xrdp containers they replace. We use Bash to substitute in the user/password settings, and some strings might break that substitution. In general this authentication mechanism should be used to keep the kids out, not the internet.
If you are looking for a robust, secure application gateway, please check out SWAG.
All application settings are passed via environment variables:
Variable | Description |
---|---|
CUSTOM_PORT | Internal port the container listens on for http if it needs to be swapped from the default 3000. |
CUSTOM_HTTPS_PORT | Internal port the container listens on for https if it needs to be swapped from the default 3001. |
CUSTOM_USER | HTTP Basic auth username, abc is default. |
PASSWORD | HTTP Basic auth password, abc is default. If unset there will be no auth. |
SUBFOLDER | Subfolder for the application if running behind a subfolder reverse proxy; needs both slashes, IE /subfolder/. |
TITLE | The page title displayed on the web browser, default "KasmVNC Client". |
FM_HOME | This is the home directory (landing) for the file manager, default "/config". |
START_DOCKER | If set to false a container with privilege will not automatically start the DinD Docker setup. |
DRINODE | If mounting in /dev/dri for DRI3 GPU acceleration, allows you to specify the device to use. |
DISABLE_IPV6 | If set to true or any value this will disable IPv6. |
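As an illustration, a container based on one of these images might be started with a few of these variables set; the image name `my-kasmvnc-app` and the host path below are placeholders:

```
docker run -d \
  --name=my-kasmvnc-app \
  -p 3000:3000 -p 3001:3001 \
  -e CUSTOM_USER=myuser \
  -e PASSWORD=mypassword \
  -e TITLE="My Desktop" \
  -v /path/to/config:/config \
  my-kasmvnc-app
```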
All base images are built for x86_64 and aarch64 platforms.
Distro | Current Tag |
---|---|
Alpine | alpine317 |
Arch | arch |
Debian | debianbullseye |
Debian | debianbookworm |
Fedora | fedora38 |
Ubuntu | ubuntujammy |
Included in these base images is a simple Openbox DE and the accompanying logic needed to launch a single application. Let's look at the bare minimum needed to create an application container, starting with a Dockerfile:
```
FROM ghcr.io/linuxserver/baseimage-kasmvnc:alpine318
RUN apk add --no-cache firefox
COPY /root /
```
And we can define the application to start using:
```
mkdir -p root/defaults
echo "firefox" > root/defaults/autostart
```
Resulting in a folder that looks like this:
```
├── Dockerfile
└── root
    └── defaults
        └── autostart
```
Now build and test:
```
docker build -t firefox .
docker run --rm -it -p 3000:3000 firefox bash
```
On http://localhost:3000 you should be presented with a Firefox web browser interface.
This same setup can be used to embed any Linux desktop application in a web accessible container.
If building images it is important to note that many applications will not work inside of Docker without `--security-opt seccomp=unconfined`; they may have launch flags to avoid syscalls blocked by Docker, as with Chromium-based applications and `--no-sandbox`. In general, do not expect every application to simply work like a native Linux installation without some modifications.
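For example, the `firefox` image built above could be run with the relaxed seccomp profile like this; whether a given application actually needs it varies:

```
# Relax the default seccomp profile for applications that need blocked syscalls
docker run --rm -it \
  -p 3000:3000 \
  --security-opt seccomp=unconfined \
  firefox
```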
Also included in the init logic is the ability to define application launchers. This can be useful as the user has the ability to close the application, or may want to open multiple instances of it. Here is an example of a menu definition file for Firefox:
```
<?xml version="1.0" encoding="utf-8"?>
<openbox_menu xmlns="http://openbox.org/3.4/menu">
<menu id="root-menu" label="MENU">
<item label="xterm" icon="/usr/share/pixmaps/xterm-color_48x48.xpm"><action name="Execute"><command>/usr/bin/xterm</command></action></item>
<item label="FireFox" icon="/usr/share/icons/hicolor/48x48/apps/firefox.png"><action name="Execute"><command>/usr/bin/firefox</command></action></item>
</menu>
</openbox_menu>
```
Simply create this file and add it to your `defaults` folder as `menu.xml`:
```
├── Dockerfile
└── root
    └── defaults
        ├── autostart
        └── menu.xml
```
This allows users to right-click the desktop background to launch the application.
When building an application container we are leveraging the Openbox DE to handle window management, but it is also possible to completely replace the DE that is launched on container init using the `startwm.sh` script, again located in `defaults`:
```
├── Dockerfile
└── root
    └── defaults
        └── startwm.sh
```
If included in the build logic, it will be launched in place of Openbox. Examples of this kind of configuration can be found in our Webtop repository.
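As a minimal sketch, a replacement `startwm.sh` only needs to start the chosen window manager or DE in the foreground; the example below assumes the XFCE packages were installed in the Dockerfile:

```
#!/bin/bash
# Minimal startwm.sh sketch: launch XFCE in place of the bundled Openbox
# (assumes xfce4 was installed in the image; adjust for your chosen DE)
exec startxfce4
```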
Included in these base images are binary blobs at `/kasmbins` and a special init process at `/kasminit` to maintain compatibility with Kasm Workspaces. If this base image is used as recommended, with the `startwm.sh` or `autostart` entrypoints, the resulting images will be able to be used on that platform without issue.
These base images include an installation of Docker that can be used in two ways. The simple method is leveraging the Docker/Docker Compose CLI bins to manage the host-level Docker installation by mounting in `-v /var/run/docker.sock:/var/run/docker.sock`.
The base images can also run an isolated in-container DinD setup simply by passing `--privileged` to the container when launching. If for any reason the application needs privilege but Docker is not wanted, `-e START_DOCKER=false` can be set at runtime or in the Dockerfile.
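For example, to manage the host's Docker from inside the container (the image name is a placeholder):

```
# Host socket mode: the in-container Docker CLI talks to the host's daemon
docker run -d \
  -p 3000:3000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-kasmvnc-app
```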
In-container Docker (DinD) will most likely use the fuse-overlayfs driver for storage, which is not as fast as native overlay2. To increase performance, the `/var/lib/docker/` directory in the container can be mounted out to a Linux host, where it will use overlay2. Keep in mind that Docker runs as root and the contents of this directory will not respect the PUID/PGID environment variables available on all LinuxServer.io containers.
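A sketch of the DinD mode with the storage directory mounted out to the host; the host path and image name are placeholders:

```
# DinD mode: --privileged starts an isolated Docker daemon inside the container
docker run -d \
  --privileged \
  -p 3000:3000 \
  -v /path/to/docker-data:/var/lib/docker \
  my-kasmvnc-app
```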
For accelerated apps or games, render devices can be mounted into the container and leveraged by applications using:
```
--device /dev/dri:/dev/dri
```
This feature only supports Open Source GPU drivers:
Driver | Description |
---|---|
Intel | i965 and i915 drivers for Intel iGPU chipsets |
AMD | AMDGPU, Radeon, and ATI drivers for AMD dedicated or APU chipsets |
NVIDIA | nouveau2 drivers only; closed source NVIDIA drivers lack DRI3 support |
The `DRINODE` environment variable can be used to point to a specific GPU.
Up to date information can be found here.
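As an illustration, mounting the render devices and selecting a specific node might look like this; the node path `/dev/dri/renderD128` and the image name are placeholders that vary by system:

```
# GPU acceleration sketch: pass through DRI devices and point DRINODE at a render node
docker run -d \
  -p 3000:3000 \
  --device /dev/dri:/dev/dri \
  -e DRINODE=/dev/dri/renderD128 \
  my-kasmvnc-app
```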
When using this image in tandem with a supported video card, compositing will function, albeit with a performance hit when syncing the frames with pixmaps for the applications using it. This can greatly increase app compatibility if the application in question requires compositing, but it requires a real GPU to be mounted into the container. By default we disable compositing at a DE level for performance reasons on our downstream images, but it can be enabled by the user, and programs using compositing will still function even if the DE has it disabled in its settings. When building desktop images, be sure you understand that if compositing is enabled by default, only users that have a compatible GPU mounted in will be able to use your image.
These images support all the native KasmVNC encoding methods, including a true 24-bit RGB lossless mode using the Quite OK Image Format. This mode will use all the bandwidth you give it, so keep that in mind for remote sessions. It also might require special configuration depending on how you are accessing the container. Lossless will only work over http (default port 3000) on localhost; when accessing remotely or even over a local network, you need to use https (default port 3001) to support SharedArrayBuffer. This is needed to leverage a fast memory pipeline in the browser during the threaded WebAssembly based decoding. This can be enabled in the sidebar under settings > stream quality > lossless.
If putting this container behind a proxy of some kind, some headers will need to be set to again support SharedArrayBuffer. Here is a default NGINX configuration format:
```
add_header 'Cross-Origin-Embedder-Policy' 'require-corp';
add_header 'Cross-Origin-Opener-Policy' 'same-origin';
add_header 'Cross-Origin-Resource-Policy' 'same-site';
```
More information here.