Versioning based on btrfs snapshots #4221
Conversation
Get rid of the else branch by moving common code out of if-else.
... but notice the many TODOs
on btrfs filesystem with user_subvol_rm_allowed mount option set.
I believe the plan is to use something similar to ostree or git for snapshots, not anything tied to a specific filesystem.
Both proposals wouldn't work: OSTree is designed to provide read-only filesystem trees, which is unsuitable for wine prefixes. Git is designed for text-file repositories and doesn't scale well with large files or with many binary files.
Honestly, this is a pretty solid solution: btrfs is the default on a number of modern distros, the snapshot feature is mature and stable, it uses copy-on-write to keep snapshots lightweight, and its guaranteed atomicity makes it very robust. This implementation could even be extended to support prefix deduplication. I also think we should have a fallback for other filesystems (rsync could be a good fit; I have seen other projects use it as a fallback for filesystem snapshots, though it's not atomic). Like the OP said, Git would not be a good fit for this. I'm not sure about OSTree though; do you have anything about it I can take a look at? Maybe we could design the feature around interfaces that these multiple backends could implement.
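On the rsync fallback: a common shape for it is rsync with --link-dest, which hard-links files that are unchanged since the previous snapshot, so each snapshot only pays for deltas. A minimal sketch under those assumptions; the function name and directory layout are hypothetical, not anything from Bottles:

```python
import subprocess
from pathlib import Path

def rsync_snapshot(bottle_path: str, snapshots_dir: str, name: str) -> None:
    """Hypothetical rsync fallback: hard-link files unchanged since the
    latest snapshot, so only modified files consume extra disk space.
    Unlike a btrfs snapshot, this is not atomic."""
    snapshots = Path(snapshots_dir)
    snapshots.mkdir(parents=True, exist_ok=True)
    previous = sorted(p for p in snapshots.iterdir() if p.is_dir())
    cmd = ["rsync", "-a", "--delete"]
    if previous:
        # Hard-link files identical to the most recent snapshot.
        cmd.append(f"--link-dest={previous[-1]}")
    cmd += [f"{bottle_path}/", str(snapshots / name)]
    subprocess.run(cmd, check=True)
```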
Hi all, this PR already lays the foundation for supporting multiple versioning backends. Currently it implements btrfs and the existing FVS versioning, but the architecture could easily extend to other systems like XFS, ZFS, or potentially even OSTree in the future. One particularly useful enhancement this enables is the ability to delete specific snapshots, something that, to my knowledge, isn't possible with the current FVS system but works seamlessly with btrfs snapshots. At the moment this isn't exposed in the GUI; that would be a follow-up. As it stands, this PR delivers significant benefits for btrfs users: reliable, lightweight snapshots and efficient bottle duplication through copy-on-write. I'd be particularly interested to hear from @mirkobrombin, who showed interest in the original PR implementation.
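For reference, the btrfs-progs commands such a backend wraps look roughly like this; the wrapper function names here are hypothetical, not the PR's actual API:

```python
import subprocess

def create_snapshot(subvolume: str, snapshot_path: str) -> None:
    # Create a read-only, copy-on-write snapshot of the bottle subvolume.
    subprocess.run(
        ["btrfs", "subvolume", "snapshot", "-r", subvolume, snapshot_path],
        check=True,
    )

def delete_snapshot(snapshot_path: str) -> None:
    # Delete a specific snapshot. When running unprivileged, this needs
    # the 'user_subvol_rm_allowed' mount option (or root).
    subprocess.run(["btrfs", "subvolume", "delete", snapshot_path], check=True)
```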
Yes, I think the PR is solid for btrfs and the current FVS versioning, but I think we could actually improve the architecture itself (could be in a future PR ofc). I have been thinking about this for a couple of days and making some notes as I reach conclusions (ps: I take my notes via voice and summarize everything via AI, so ignore it if the tone sounds too formal or something like that; I am a Portuguese speaker and the robot is not very good with tone). Here's a practical way to make snapshots in Bottles easier to extend beyond btrfs, without scattering filesystem logic all over the app. The idea is simple: one clean interface for versioning, small backend implementations for each filesystem, and a registry that picks the right one at runtime. This keeps the UI predictable, reduces risk, and makes future backends (ZFS, LVM, etc.) straightforward.

What we're aiming for
The building blocks
Choosing the backend
This keeps logic out of the UI and the bottle lifecycle. The app just asks the registry and delegates.

Where it plugs into Bottles
In the UI, call capabilities() to decide which buttons to show. For example, only show "Lightweight clone" when the backend supports writable snapshots and managed containers.

How to roll this out
Notes for the current Btrfs PR
Why this is worth it
This approach respects the work in the Btrfs PR, gives us a solid foundation for more filesystems, and keeps Bottles maintainable as we grow.
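To make the proposal concrete, here is one way the interface/registry split could look. This is only a sketch; every class and function name in it is hypothetical rather than taken from this PR:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass(frozen=True)
class Capabilities:
    atomic_snapshots: bool
    writable_snapshots: bool
    cheap_duplication: bool

class VersioningBackend(ABC):
    @classmethod
    @abstractmethod
    def is_supported(cls, bottle_path: str) -> bool:
        """Return True if this backend can manage the given path."""

    @abstractmethod
    def capabilities(self) -> Capabilities: ...

    @abstractmethod
    def create_snapshot(self, bottle_path: str, name: str) -> None: ...

    @abstractmethod
    def restore_snapshot(self, bottle_path: str, name: str) -> None: ...

# Probed in order of preference; the first supported backend wins.
# A real registry would list e.g. a btrfs backend first, then FVS.
_BACKENDS: list[type[VersioningBackend]] = []

def backend_for(bottle_path: str) -> VersioningBackend:
    for backend_cls in _BACKENDS:
        if backend_cls.is_supported(bottle_path):
            return backend_cls()
    raise RuntimeError("no versioning backend available for this path")
```

The UI would then consult capabilities() instead of hard-coding per-filesystem checks, which matches the button-gating idea above.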
Hi, I have tested the fallback on non-btrfs filesystems using an ext4 loop mount, everything except snapshot restoring. The GUI does not show the created snapshots; this happens for me both on my development flatpak and on the official 60.1 flatpak from Flathub. Can anyone please check whether it's a local problem or a general issue? @evertonstz, regarding your notes for the current PR:
At the moment all work is delegated to the
This is already done, isn't it? The modification time of the snapshot directory is the creation time of the snapshot.
Is this a Python feature? Can you point me to some documentation?
Should I add the documentation to the versioning manager?
Not a built-in magic feature; it's a pattern. The goal is to group multi-step filesystem changes so you can roll them back if something fails midway. You could do this in multiple ways, e.g. logging your steps in a journal and triggering a recovery to undo those steps after a failure (or triggering a full recovery after a crash). The lifecycle with sqlite would be something like this (this is backend agnostic btw; it would work for zfs, btrfs, etc.):
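A hypothetical illustration of that lifecycle using sqlite3; none of these names exist in Bottles, it only shows the journal-then-act-then-mark-done pattern:

```python
import json
import sqlite3

def open_journal(path: str) -> sqlite3.Connection:
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS journal ("
        "id INTEGER PRIMARY KEY, op TEXT, args TEXT, done INTEGER DEFAULT 0)"
    )
    return db

def run_step(db: sqlite3.Connection, op: str, args: dict, action) -> None:
    # 1. Record the intent before touching the filesystem.
    cur = db.execute(
        "INSERT INTO journal (op, args) VALUES (?, ?)", (op, json.dumps(args))
    )
    db.commit()
    # 2. Perform the filesystem change.
    action()
    # 3. Mark the step complete only after it succeeded.
    db.execute("UPDATE journal SET done = 1 WHERE id = ?", (cur.lastrowid,))
    db.commit()

def pending_steps(db: sqlite3.Connection) -> list[tuple]:
    # On startup: anything not marked done was interrupted; the caller
    # decides whether to resume, roll back, or warn the user.
    return db.execute("SELECT op, args FROM journal WHERE done = 0").fetchall()
```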
# Try to delete the subvolume as a normal directory tree. This is
# in particular needed if the btrfs filesystem is not mounted with
# the 'user_subvol_rm_allowed' option.
shutil.rmtree(path)
@evertonstz Please describe how you would roll back a shutil.rmtree() that fails in the middle, under your transaction proposal.
The journaling system is not specifically about rolling back transactions; it's just to keep tabs on what was already performed on the filesystem. With that information you can decide what you want to do, e.g. roll back, resume, etc.
# Internal subvolumes created at initialization time:
_internal_subvolumes = [
    "cache",
]
I'm considering getting rid of _internal_subvolumes. The idea is to save some disk space when creating snapshots. However, it significantly increases the complexity of the code and makes it prone to partially performed operations, whose proper handling is complex. I'm not sure, but it might have inspired @evertonstz to propose a semi-transactional system, which seems overkill to me. Filesystems are not ACID databases.
Agreed: filesystems aren't ACID databases, so we shouldn't build heavyweight transactional machinery. Still, it's worth adding a light safety net around the few risky operations.
My proposal for minimal crash safety: before a risky operation, write a tiny intent file (e.g. .versioning-intent.json containing { "op": "restore", "target": "" }), and delete it on success. On startup, if the file still exists, we can either auto-complete the operation, revert it, or at least surface a clear message to the user. This isn't a transaction system, just a breadcrumb.
Why this matters: an orphaned subvolume left behind after a failed swap still consumes disk space (even with CoW it retains its extents) and clutters the layout. It's trivial for us to delete, but a user with little filesystem knowledge may not even know it exists. A tiny intent marker prevents silent leftovers and improves trust without adding complexity. It's a win-win IMO.
Also, keep in mind I'm just a contributor; nothing I proposed here is needed for your PR to be merged, you just need to convince the maintainer. I might be a little picky about safety layers, but at scale corner cases will happen.
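A minimal sketch of that breadcrumb; the filename comes from the comment above, while the helper names are hypothetical and not part of Bottles:

```python
import json
import os

INTENT_FILE = ".versioning-intent.json"

def with_intent(bottle_path: str, op: str, target: str, action) -> None:
    # Drop a breadcrumb before the risky operation...
    intent_path = os.path.join(bottle_path, INTENT_FILE)
    with open(intent_path, "w") as f:
        json.dump({"op": op, "target": target}, f)
    action()
    # ...and remove it only once the operation finished cleanly.
    os.remove(intent_path)

def check_leftover_intent(bottle_path: str) -> dict | None:
    # At startup: a surviving intent file means the last run was interrupted,
    # so the app can complete, revert, or at least warn the user.
    intent_path = os.path.join(bottle_path, INTENT_FILE)
    if os.path.exists(intent_path):
        with open(intent_path) as f:
            return json.load(f)
    return None
```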
I definitely need some feedback from a maintainer:
- Shall I remove the internal subvolumes in order to keep the code simple? I have done it in commit e4f90bc, but I can drop or revert it.
- The idea of "intent files", to improve the reliability of broader filesystem changes against crashes or power outages, seems very good. Afaik nothing in this regard is implemented in Bottles. In my opinion this should be implemented as an extension of the Task system; it would be out of scope of this PR.
> Agreed: filesystems aren't ACID databases, so we shouldn't build heavyweight transactional machinery. Still, it's worth adding a light safety net around the few risky operations. […]
I definitely agree with this
This simplifies the code significantly. The only downside is that the snapshots will consume a bit more disk space.
Description
I actually like the Bottles application for managing wine prefixes. However, it bothers me that I cannot create bottles as btrfs subvolumes, which I want so that I can use my backup solution to back up and restore bottles independently. I also tried creating the subvolume in place before creating a bottle, but Bottles appends a random number to the path if the bottle directory already exists.
I went a bit over the top and kept implementing until I could create and restore bottle snapshots using the Bottles GUI. With all the updates added to this PR, I think it is ready to be merged.
This is a rework of #3420, which cannot be reopened for GitHub reasons. Compared to that PR, the commits have been rebased on bottlesdev main, the flatpak module btrfs-progs has been reworked and updated to the current version, and one small bug fix has been added.
Type of change
How Has This Been Tested?