
Drives available but getting "Insufficient number of disks available to set up distributed storage" #271

Open
webdock-io opened this issue Mar 23, 2024 · 2 comments
Labels
Feature New feature, not a bug

Comments

@webdock-io

webdock-io commented Mar 23, 2024

Following the guide at: https://canonical-microcloud.readthedocs-hosted.com/en/latest/tutorial/get_started/

We have 2 physical NVMe drives attached directly to each VM (so 6 drives total across 3 VMs), and they are definitely showing up fine in the VMs as sdb and sdc:

```
# ls -lah /dev/disk/by-id/
total 0
drwxr-xr-x 2 root root 220 Mar 23 11:54 .
drwxr-xr-x 8 root root 160 Mar 15 16:10 ..
lrwxrwxrwx 1 root root   9 Mar 23 11:54 scsi-0QEMU_QEMU_HARDDISK_lxd_nvme1 -> ../../sdb
lrwxrwxrwx 1 root root  10 Mar 23 11:54 scsi-0QEMU_QEMU_HARDDISK_lxd_nvme1-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  10 Mar 23 11:54 scsi-0QEMU_QEMU_HARDDISK_lxd_nvme1-part9 -> ../../sdb9
lrwxrwxrwx 1 root root   9 Mar 23 11:54 scsi-0QEMU_QEMU_HARDDISK_lxd_nvme2 -> ../../sdc
lrwxrwxrwx 1 root root  10 Mar 23 11:54 scsi-0QEMU_QEMU_HARDDISK_lxd_nvme2-part1 -> ../../sdc1
lrwxrwxrwx 1 root root  10 Mar 23 11:54 scsi-0QEMU_QEMU_HARDDISK_lxd_nvme2-part9 -> ../../sdc9
lrwxrwxrwx 1 root root   9 Mar 23 11:54 scsi-0QEMU_QEMU_HARDDISK_lxd_root -> ../../sda
lrwxrwxrwx 1 root root  10 Mar 23 11:54 scsi-0QEMU_QEMU_HARDDISK_lxd_root-part1 -> ../../sda1
lrwxrwxrwx 1 root root  10 Mar 23 11:54 scsi-0QEMU_QEMU_HARDDISK_lxd_root-part2 -> ../../sda2
```

But `microcloud init` just says:

```
Scanning for eligible servers ...

 Selected "lxdvm2" at "10.1.255.88"
 Selected "lxdvm3" at "10.1.255.168"
 Selected "lxdvm1" at "10.1.255.108"

Insufficient number of disks available to set up distributed storage, skipping at this time
Initializing a new cluster
```

And it skips the Ceph setup. How does it check for available disks, and how can this be remedied? We previously had ZFS pools on these drives for some other testing, but the pools have since been destroyed (I didn't run `zpool labelclear` - maybe that's it?)
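For anyone debugging the same symptom, a quick way to see what the wizard is likely seeing (an assumption on my part: it only offers disks that carry no existing partition or filesystem signatures) is `wipefs` in its read-only mode - any output at all means the disk is not pristine. A scratch-file sketch:

```shell
# Read-only signature check (assumption: MicroCloud skips any disk that
# still carries partition/filesystem signatures). Demonstrated on a
# scratch file; on the VMs you would point wipefs at /dev/sdb, /dev/sdc.
disk=$(mktemp)
truncate -s 64M "$disk"

# With no flags, wipefs only *lists* signatures; empty output means
# "pristine", i.e. the state an eligible disk should be in.
wipefs "$disk"

rm -f "$disk"
```

On the real devices that would be `sudo wipefs /dev/sdb`; a leftover ZFS label shows up there as a `zfs_member` signature.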

Additionally, retrying MicroCloud after completing the cluster setup just hangs on the initial "Scanning for eligible servers ..." step. One would think it would check whether everything was already set up and error out immediately.

I guess I have to tear down the snaps on all VMs and reinstall to try the init operation again?

In general, it would be useful to have information on how to remedy the situation for each step in the MicroCloud wizard if something goes wrong or you accidentally make a mistake - even if that remedy is just the simplest set of steps to clean the already-configured systems so you can start over.

@webdock-io
Author

I guess I had already guessed the remedy: I ran `zpool labelclear` AND `wipefs` on those drives to be sure, tore everything down, and started over, and now I was able to select drives.
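For anyone hitting the same wall, the cleanup amounts to clearing every leftover signature so the disk reads as pristine again. A round-trip sketch on a scratch file (here `mkswap` stands in for the stale ZFS pool; on real hardware you would also run `zpool labelclear -f /dev/sdX` first):

```shell
disk=$(mktemp)
truncate -s 64M "$disk"

# Leave a stale signature behind, as the destroyed zpool did.
mkswap "$disk" >/dev/null 2>&1

wipefs "$disk"                   # lists the leftover swap signature
wipefs -a "$disk" >/dev/null     # -a erases all signatures it finds
wipefs "$disk"                   # prints nothing: pristine again

rm -f "$disk"
```

On the actual drives that is `sudo zpool labelclear -f /dev/sdb` followed by `sudo wipefs -a /dev/sdb` - destructive, of course, so double-check the device path.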

It would be good to abort at the disk step if no disks are "found" by MicroCloud, maybe with an explanation. This is just a suggestion for improving the wizard; it would have saved me some time :)

Looking forward to playing with this!

@roosterfish
Contributor

That is right. You have to reinstall the snaps and start all over.
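The reinstall can be scripted. A sketch assuming the snap names from the tutorial (`microcloud`, `microceph`, `microovn`); it only prints the plan so nothing runs by accident - pipe the output to `sh` on each VM to actually apply it:

```shell
# Print the teardown/reinstall plan (assumed snap names from the
# MicroCloud tutorial). --purge drops the snaps' data so the next
# `microcloud init` starts from a clean slate.
for s in microcloud microceph microovn; do
    echo "sudo snap remove --purge $s"
done
for s in microceph microovn microcloud; do
    echo "sudo snap install $s"
done
```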

As reported in #142, MicroCloud currently doesn't pick up non-pristine disks for distributed storage. This depends on canonical/microceph#251 in MicroCeph.

@roosterfish roosterfish added the Feature New feature, not a bug label Mar 25, 2024