
Heads booting on an x230

Heads: the other side of TAILS

Heads is a configuration for laptops and servers that tries to bring more security to commodity hardware. Among its goals are:

  • Use free software on the boot path
  • Move the root of trust into hardware (or at least the ROM bootblock)
  • Measure and attest to the state of the firmware
  • Measure and verify all filesystems

Flashing Heads into the boot ROM

NOTE: Heads is a work in progress and not yet ready for non-technical users. If you're interested in contributing, please get in touch. Installation requires disassembly of your laptop or server, an external SPI flash programmer, a tolerance for possible hardware destruction, and significant frustration.
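
Purely for orientation, here is a minimal sketch of the one step most installs share: backing up the existing ROM with an external programmer before writing anything. It assumes a CH341A programmer and an auto-detected flash chip; the actual flashing procedure is board-specific and documented in the wiki.

flashrom -p ch341a_spi -r backup-1.rom   # first read of the existing ROM
flashrom -p ch341a_spi -r backup-2.rom   # second read, to rule out a flaky connection
sha256sum backup-1.rom backup-2.rom      # the two hashes must match before the backup can be trusted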

More information is available in the 33C3 presentation on building "Slightly more secure systems".

Documentation

Please refer to Heads-wiki for documentation.

Contributing

We welcome contributions to the Heads project! Before contributing, please read our Contributing Guidelines for information on how to get started, submit issues, and propose changes.

Building Heads with prebuilt and versioned docker images

Heads builds with Nix-built docker images since #1661.

The short path to building Heads is to do what CircleCI does, by running ./docker_repro.sh from a cloned Heads git directory (a concrete sketch follows the list):

  • Install docker-ce for your OS of choice (refer to their documentation)
  • Run ./docker_repro.sh make BOARD=XYZ
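
Concretely, the quick path looks like this (a sketch; qemu-coreboot-fbwhiptail-tpm2 is just one example board name from the ./boards directory):

git clone https://github.com/linuxboot/heads.git
cd heads
./docker_repro.sh make BOARD=qemu-coreboot-fbwhiptail-tpm2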

Using the Nix local dev environment / building docker images with Nix

Under QubesOS?

Build docker from nix develop layer locally

Set up Nix and flakes

  • If you don't already have Nix, install it:
    • [ -d /nix ] || sh <(curl -L https://nixos.org/nix/install) --no-daemon
    • . /home/user/.nix-profile/etc/profile.d/nix.sh
  • Enable flake support in nix
    • mkdir -p ~/.config/nix
    • echo 'experimental-features = nix-command flakes' >>~/.config/nix/nix.conf
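
As a quick sanity check (not part of the official steps), you can confirm the flag landed in your config:

grep experimental-features ~/.config/nix/nix.conf   # should print: experimental-features = nix-command flakes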

Build image

  • Have docker and Nix installed

  • Build the local Nix development environment, with flakes locked to the specified versions

    • ./docker_local_dev.sh

On some hardened OSes, you may encounter problems with ptrace.

       > proot error: ptrace(TRACEME): Operation not permitted

The most likely reason is that your kernel.yama.ptrace_scope sysctl is set too high and prevents docker+nix from running properly. You'll need to set kernel.yama.ptrace_scope to 1 while you build the Heads binary.

sudo sysctl kernel.yama.ptrace_scope # show the current value, probably 2 or 3
sudo sysctl -w kernel.yama.ptrace_scope=1 # set the value so nix+docker can run properly

(don't forget to restore the value you had once the Heads build is finished)
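
A minimal sketch of doing that safely, saving the original value first:

orig_scope=$(sysctl -n kernel.yama.ptrace_scope)        # remember the original value
sudo sysctl -w kernel.yama.ptrace_scope=1               # relax it for the build
./docker_local_dev.sh                                   # build as usual
sudo sysctl -w kernel.yama.ptrace_scope="$orig_scope"   # restore it afterwards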

Done!

Your local docker image "linuxboot/heads:dev-env" is ready to use, reproducible for the specific Heads commit used to build it, and will produce ROMs reproducible for that Heads commit ID.
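
As a quick sanity check, you can confirm the image is present:

docker images linuxboot/heads:dev-env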

Jump into the nix develop-created docker image for an interactive workflow

There are three helper scripts:

  • ./docker_local_dev.sh: for developers who want to customize the docker image built from flake.nix (Nix dev environment creation) and flake.lock (pinned versions used by flake.nix)
  • ./docker_latest.sh: for Heads developers who want to use the latest published docker image to develop Heads
  • ./docker_repro.sh: the versioned docker image used by CircleCI to produce reproducible builds, both locally and on CircleCI. Use this one if in doubt

For example, ./docker_repro.sh on its own will jump into the versioned docker image CircleCI uses for that Heads commit ID, which builds images reproducibly as long as the git repo is clean (not dirty).

From there you can use the docker image interactively.

Run make BOARD=board_name, where board_name is the name of a board directory under ./boards.

One useful example is to build the qemu board ROMs and test them through the qemu/kvm/swtpm stack provided in the docker image. Please refer to the qemu documentation for more information.

For example:

./docker_repro.sh make BOARD=qemu-coreboot-fbwhiptail-tpm2 # Build rom, export public key to emulated usb storage from qemu runtime
./docker_repro.sh make BOARD=qemu-coreboot-fbwhiptail-tpm2 PUBKEY_ASC=~/pubkey.asc inject_gpg # Inject pubkey into rom image
./docker_repro.sh make BOARD=qemu-coreboot-fbwhiptail-tpm2 USB_TOKEN=Nitrokey3NFC PUBKEY_ASC=~/pubkey.asc ROOT_DISK_IMG=~/qemu-disks/debian-9.cow2 INSTALL_IMG=~/Downloads/debian-9.13.0-amd64-xfce-CD-1.iso run # Install

Alternatively, you can use the locally built docker image to build a board ROM image in a single call, but do not expect reproducible builds unless you use the versioned docker images that CircleCI uses, i.e. through ./docker_repro.sh.

E.g.: ./docker_local_dev.sh make BOARD=nitropad-nv41

Pull the docker hub image and prepare reproducible ROMs as CircleCI does, in one call

./docker_repro.sh make BOARD=x230-hotp-maximized
./docker_repro.sh make BOARD=nitropad-nv41

Maintenance notes on docker image

Redo the steps above whenever flake.nix or flake.lock changes. Commit the changes. Then publish on Docker Hub:

# put relevant things in variables:
docker_version="vx.y.z" && docker_hub_repo="tlaurion/heads-dev-env"
# update pinned packages to the latest available ones if needed; modify flake.nix derivations if needed:
nix flake update
# modify the CircleCI config to use the newly pushed docker image:
sed "s@\(image: \)\(.*\):\(v[0-9]*\.[0-9]*\.[0-9]*\)@\1\2:$docker_version@" -i .circleci/config.yml
# commit changes:
git commit --signoff -m "Bump nix develop based docker image to $docker_hub_repo:$docker_version"
# use the committed flake.nix and flake.lock in nix develop:
nix --print-build-logs --verbose develop --ignore-environment --command true
# build the new docker image from the nix develop environment:
nix --print-build-logs --verbose build .#dockerImage && docker load < result
# tag the produced docker image with the new version:
docker tag linuxboot/heads:dev-env "$docker_hub_repo:$docker_version"
# push the newly created docker image to docker hub:
docker push "$docker_hub_repo:$docker_version"
# test with CircleCI in a PR, then merge:
git push ...
# make the last tested docker image version the latest:
docker tag "$docker_hub_repo:$docker_version" "$docker_hub_repo:latest"
docker push "$docker_hub_repo:latest"

These steps can be combined into reproducible one-liners to ease maintenance.

Test image in dirty mode:

docker_version="vx.y.z" && docker_hub_repo="tlaurion/heads-dev-env" && sed "s@\(image: \)\(.*\):\(v[0-9]*\.[0-9]*\.[0-9]*\)@\1\2:$docker_version@" -i .circleci/config.yml && nix --print-build-logs --verbose develop --ignore-environment --command true && nix --print-build-logs --verbose build .#dockerImage && docker load < result && docker tag linuxboot/heads:dev-env "$docker_hub_repo:$docker_version" && docker push "$docker_hub_repo:$docker_version"

Notes:

  • Local builds can use the ":latest" tag, which points to the latest successfully tested CircleCI run
  • To reproduce CircleCI results, make sure to use the same versioned tag declared under the "image:" key in .circleci/config.yml
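
To see which versioned tag is currently declared there, a simple grep suffices:

grep 'image:' .circleci/config.yml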

General notes on reproducible builds

In order to build reproducible firmware images, Heads builds a specific version of gcc and uses it to compile the Linux kernel and the various tools that go into the initrd. Unfortunately this means the first step is a little slow, since it clones the musl-cross-make tree and builds gcc...

Once that is done, the top-level Makefile handles most of the remaining details -- it downloads the various packages, verifies their hashes, applies Heads-specific patches, configures and builds them with the cross compiler, and then copies the necessary parts into the initrd directory.

There are still dependencies on the build system's coreutils in /bin and /usr/bin/, but any resulting problems should be detectable, since they would leave you with a different hash than the official builds.
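
Checking for that is just a checksum comparison of the resulting ROM. The output path below is an assumption (build/<arch>/<board>/ is the usual layout), so check your build tree for the actual location:

sha256sum build/*/qemu-coreboot-fbwhiptail-tpm2/*.rom   # compare against the hash of another build of the same commit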

The various components that are downloaded are defined in the ./modules directory.

We also recommend installing Qubes OS, although Heads can kexec into any Linux or multiboot kernel.

Notes:

  • Building coreboot's cross compilers can take a while. Luckily this is only done once.
  • Builds are finally reproducible! The reproduciblebuilds tag tracks any regressions.
  • Currently only tested in QEMU, on the Thinkpad x230, the Librem series, and the Chell Chromebook. Note: Xen does not work in QEMU; signing, HOTP, and TOTP do work (see below).
  • Building for the Lenovo X220 requires binary blobs to be placed in the blobs/x220/ folder; see the readme.md file in that folder.
  • Building for the Librem 13 v2/v3 or Librem 15 v3/v4 requires binary blobs to be placed in the blobs/librem_skl folder; see the readme.md file in that folder.

QEMU:

OS booting can be tested in QEMU using a software TPM. HOTP can be tested by forwarding a USB token from the host to the guest.

For more information and setup instructions, refer to the qemu documentation.
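
The Makefile's run target wires all of this up for you; purely as an illustration of the underlying mechanism (the paths and flags here are assumptions, not Heads' exact invocation), a software TPM can be attached to QEMU roughly like this:

mkdir -p /tmp/heads-tpm
swtpm socket --tpmstate dir=/tmp/heads-tpm \
    --ctrl type=unixio,path=/tmp/heads-tpm/swtpm.sock &   # add --tpm2 for a TPM 2.0 board
qemu-system-x86_64 -bios heads.rom \
    -chardev socket,id=chrtpm,path=/tmp/heads-tpm/swtpm.sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -device tpm-tis,tpmdev=tpm0
# USB token forwarding uses -device usb-host,...; see the QEMU documentation.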

coreboot console messages

The coreboot console messages are stored in the CBMEM region and can be read by the Linux payload with the cbmem --console | less command. They contain lots of interesting data about the state of the system.
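
For example, from a booted Heads console:

cbmem --console | less            # page through the full coreboot log
cbmem --console | grep -i error   # or filter non-interactively for errors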
