Welcome to qubesome! This project is a command-line interface (CLI) tool that aims to simplify managing Linux desktop configurations. It works by virtualizing both the Window Manager and the workloads, based on a declarative state stored in a git repository.
How can this be useful?
- Test-drive Window Manager configurations without committing to them or impacting your existing setup.
- Version control your window manager and workloads.
- Bump configuration and software versions via PRs - and roll them back in the same way.
- Provide isolation across profiles and workloads (clipboard, network, storage, etc).
Install the CLI with Go or via zypper:
go install github.com/qubesome/cli@latest
zypper install qubesome
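Once installed, a quick way to confirm the binary is available (assuming your Go bin directory or the package install location is on your PATH) is:
qubesome --help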
Start one of the two profiles in the sample-dotfiles repo (i3 or awesome):
qubesome start -git https://github.com/qubesome/sample-dotfiles <PROFILE>
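For example, to start the i3 profile straight from the sample repository:
qubesome start -git https://github.com/qubesome/sample-dotfiles i3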
NOTE: Press Ctrl+Shift to grab and release the keyboard and mouse in and out of the qubesome profile.
NOTE 2: Each profile has a different display set in qubesome.config, therefore their clipboards are isolated from each other and from the host. To transfer clipboard contents between profiles, use qubesome clipboard.
Check whether dependency requirements are met:
qubesome deps show
Use a local copy, falling back to a fresh clone if one is not found:
qubesome start -git https://github.com/qubesome/sample-dotfiles -local <local_git_path> <profile>
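For example, assuming the repository was previously cloned to ~/git/sample-dotfiles (a placeholder path), the i3 profile can be started with:
qubesome start -git https://github.com/qubesome/sample-dotfiles -local ~/git/sample-dotfiles i3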
Copy clipboard from the host to the i3 profile:
qubesome clipboard --from-host i3
- qubesome start: Start a qubesome environment for a given profile.
- qubesome run: Run qubesome workloads.
- qubesome clipboard: Manage clipboard contents across profiles and the host.
- qubesome images: Manage the images within your workloads.
- qubesome xdg: Handle xdg-open calls via qubesome.
For more information on each command, run qubesome <command> --help.
Qubesome requires docker and xrandr to be installed on a machine running Xorg. To install them using zypper:
sudo zypper install -y docker xrandr
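Depending on the distribution, the Docker daemon may also need to be enabled and your user added to the docker group so qubesome can reach it without root. This is standard Docker setup rather than something qubesome mandates:
sudo systemctl enable --now docker
sudo usermod -aG docker ${USER}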
To enable GPU workloads (e.g. Google Meet with background filters), install NVIDIA's Container Toolkit.
First install the NVIDIA drivers. For Tumbleweed users:
zypper install openSUSE-repos-Tumbleweed-NVIDIA
zypper install-new-recommends --repo repo-non-free
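After a reboot, the driver can be verified with nvidia-smi (assuming a supported NVIDIA GPU is present):
nvidia-smi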
Then install and configure the NVIDIA Container Toolkit:
zypper ar https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo
zypper modifyrepo --enable nvidia-container-toolkit-experimental
zypper install -y nvidia-container-toolkit
nvidia-ctk runtime configure --runtime=docker
systemctl restart docker
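As a quick smoke test that Docker can see the GPU (using a generic ubuntu image purely as an example, not something qubesome requires):
docker run --rm --gpus all ubuntu nvidia-smi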
This largely depends on the configuration, but the main supported runner is based on docker, which comes with the limitations of container-level isolation. A few highlights:
- Each qubesome profile can be executed on its own Xorg display, which provides clipboard isolation between workloads running in different profiles.
- Each profile can define the host access (e.g. devices, network, dbus) allowed for its workloads. For example, with a Work profile and a Personal profile, it is possible to limit what parts of the disk (or external storage) can be mounted into each.
- Network/internet access can be controlled per workload, and the window manager can run without internet access. Access violations can be audited, for visibility into when workloads try to access things they should not.
Not at this point; potentially this could be introduced in the future.
Some Linux distros (e.g. Tumbleweed) have X11 access controls enabled by default. The current local user needs to be granted access for qubesome to work:
xhost +SI:localuser:${USER}
If xhost is not present, it can be installed with sudo zypper install xhost.
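The xhost setting does not persist across sessions. One option, assuming an Xorg session started via startx, is to add it to your X startup file:
echo 'xhost +SI:localuser:${USER}' >> ~/.xinitrc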
This project is licensed under the Apache 2.0 License. Refer to the LICENSE file for more information.