LXD fails to pick up non-pristine disks #142
Comments
At the moment, MicroCloud won't pick up any partitioned disks. That will definitely change in the near-ish future.
We're close to having support for partitions on local (zfs) storage, but it seems Ceph might take a bit longer: For ZFS, we'll be able to add partition support once canonical/lxd#12537 is merged in LXD.
@masnax WRT canonical/lxd#12537: why do we need to ascertain whether the partition is mounted? Isn't MicroCloud only showing empty partitions anyway?
Because I couldn't add partitions as local storage during the initial setup: is there a command I can execute to manually create the local storage pool and add the partitions from the cluster nodes? At least until this new feature is ready?
Sure, to create a local zfs storage pool like MicroCloud would, you can do the following.

Once on each system:

lxc storage create local zfs source=${disk_path} --target ${cluster_member_name}

And finally, from any system:

lxc storage create local zfs
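As a quick check (assuming the pool is named local as above), the pool state should move from PENDING to CREATED once the final command has run:

# shows all pools and their state (PENDING / CREATED / ERRORED)
lxc storage list
# shows the pool's configuration and which cluster members it spans
lxc storage show local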
Thanks for that, extremely helpful! I noticed per the doco that there are default volumes (backups, images) tied to the target systems. Are those required, or should I just skip them? |
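If I recall correctly, those default volumes are just custom volumes on the local pool wired up via server config. Recreating something like that by hand would look roughly like this; the volume names and config keys here are assumptions about that layout, not commands taken from this thread:

# one custom volume per member for backups and images (assumed names)
lxc storage volume create local backups --target ${cluster_member_name}
lxc storage volume create local images --target ${cluster_member_name}
# point each member's backups/images storage at those volumes
lxc config set storage.backups_volume=local/backups --target ${cluster_member_name}
lxc config set storage.images_volume=local/images --target ${cluster_member_name}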
There's no way MicroCloud can know if the partitions are empty without LXD's super-privileges. So no, it will list every single partition on the system. The list is ripped straight from
@masnax I commented over at canonical/lxd#12537 (review)
MicroCeph support for partitions is being tracked here: canonical/microceph#251
When I follow these instructions to the letter, or even when I add
This is on a Turing Pi 2 cluster board with 4 Turing RK1 nodes (Rockchip RK3588-based compute modules with 32 GB of eMMC storage). The nodes were freshly imaged, and the 3rd partition was newly created on all of them using
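For reference, creating such a partition on an eMMC device typically looks something like the following; the device path and start offset here are assumptions, not the exact commands used on the RK1 nodes:

# carve a new partition out of the remaining eMMC space (assumed device path and offset)
sudo parted -s /dev/mmcblk0 mkpart primary 16GiB 100%
# re-read the partition table so the new partition shows up
sudo partprobe /dev/mmcblk0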
It looks like there already is a storage pool called
You can run
@rmbleeker Have you skipped the local storage pool setup during microcloud init?
I realize that's what it looks like, but it's not the case.
Yes, I have.
Alright, it seems to work when I pick a different approach and slightly alter the commands. I got the idea from the Web UI, which states that when creating a ZFS storage pool, the name of an existing ZFS pool is a valid source. So I created a storage pool with
on each node, filling in the proper disk ID for that node. I then used
to create the local storage, filling in the name of each node in the cluster as the target. Then finally
properly initialized the storage pool, giving it the CREATED state instead of PENDING or ERRORED. It cost me an extra step, which isn't a big deal, but it's still a workaround and not a solution in my view.
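As a rough sketch, the sequence described above would look something like this; the pool name, device path, and placeholders are assumptions, and the exact commands used may have differed:

# on each node: create a ZFS pool directly on the partition
sudo zpool create local /dev/disk/by-id/${disk_id}
# once per cluster member: reference the existing zpool by name as the source
lxc storage create local zfs source=local --target ${cluster_member_name}
# finally, from any node: finalize the pool cluster-wide
lxc storage create local zfs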
Out of curiosity, if you have another partition you're able to test on, I'd be very interested to see if the storage pool can be created with a name other than
The setup that eventually worked for you seems to just ignore the existing-pool error with the
There are no other disks or partitions available on the nodes, but since I wasn't far into my project anyway, I decided to do some testing and flash the nodes again with a fresh image. I did this twice and set up the cluster again both times. After the first time I used the
With all that said and done, these tests weren't conclusive. The fact that the issue still occurred on node 2 after applying a fresh image leads me to believe that some remnants of the contents of a partition are left behind when you re-create the partition with exactly the same parameters, if the storage device isn't properly overwritten beforehand. But apparently that's not always the case, because I could create a new pool without forcing it on 3 of the 4 nodes. In any case, I think that perhaps a
You can already pass
In the microcloud init screen, the wizard seems to fail to pick up non-pristine disks. It offers to wipe the disk in the next screen, so I assume this is a bug. If I wipe a non-pristine disk with:
sudo wipefs -a /dev/sdb && sudo dd if=/dev/zero of=/dev/sdb bs=4096 count=100 > /dev/null
then microcloud picks up the disk next time the wizard is run.
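For what it's worth, a non-destructive way to see which signatures are making a disk non-pristine before wiping it (using the same /dev/sdb as above):

# with no options, wipefs only lists existing signatures; nothing is erased
sudo wipefs /dev/sdb
# shows partitions and any filesystems detected on them
lsblk -f /dev/sdb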