Instance volume configuration through disk device #12089
The current implementation allows an optimized image to be used when it matches the pool's volume configuration. The initial plan was to first check whether an optimized image already exists for the given volume configuration. Essentially, we have the following three scenarios; the example below shows the third one, "Optimized image exists, but it does NOT match the pool's default configuration."

How to achieve this scenario:
```
lxc storage create pool zfs
lxc storage set pool volume.zfs.block_mode false # Just to emphasize it's set to false

# A new optimized image will be created - it matches the default pool's configuration.
lxc launch ubuntu:22.04 c1 --storage pool

# Now, if we change the default pool configuration, an optimized image exists, but it no longer
# matches the default pool configuration for new volumes.
lxc storage set pool volume.zfs.block_mode true
```

We now have two sub-scenarios:
```
lxc launch ubuntu:22.04 c2 --storage pool --device root,initial.zfs.block_mode=false
lxc launch ubuntu:22.04 c2 --storage pool --device root,initial.zfs.block_mode=true
```

In my opinion, we should always check only the pool's default configuration, because the next instance that is created without an initial configuration will replace the existing optimized image. @tomponline What are your thoughts about this?
Agreed: in scenario 3 ("Optimized image exists, but it does NOT match the pool's default configuration") the new instance with initial configuration should use a non-optimized image unpack.
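To verify which unpack path a given launch took, one could check the dataset's ZFS origin. A minimal sketch, assuming a pool named `pool` as in the example above and LXD's usual `<pool>/containers` and `<pool>/images` dataset layout (the `@readonly` snapshot name is an assumption):
```
# A clone of the optimized image has an "origin"; a non-optimized unpack
# produces a full dataset and prints "-".
zfs get -H -o value origin pool/containers/c2
# "-"                                  -> non-optimized unpack
# "pool/images/<fingerprint>@readonly" -> cloned from the optimized image
```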
Agree. Thanks.
So, after some debugging, the issue seems to be in the optimized image, but the issue was not introduced by the initial configuration; it just happened that we hit it after adding some tests for the initial volume configuration.

For example:
```
lxc storage create zfs zfs

# zfs.block_mode is false by default, just to emphasize it
lxc storage set zfs volume.zfs.block_mode=false

lxc launch ubuntu:22.04 c1 -s zfs --no-profiles -d root,initial.zfs.block_mode=true
lxc launch ubuntu:22.04 c2 -s zfs --no-profiles -d root,initial.zfs.block_mode=true -d root,initial.zfs.blocksize=32KiB

zfs list -r zfs/containers -o name,volblocksize,mountpoint
# NAME VOLBLOCK MOUNTPOINT
# zfs/containers - legacy
# zfs/containers/c1 8K -
# zfs/containers/c2 32K -
# ^ OK: We can see that `zfs.blocksize` differs for c1 and c2
```

Up to this point everything works as expected, because neither of the containers uses an optimized image.
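As a side check, the per-volume configuration could also be inspected through LXD itself. A hedged sketch; whether the `initial.*` values end up in the volume's own config keys as shown is an assumption based on this PR:
```
# Hypothetical check of the effective volume configuration:
lxc storage volume get zfs container/c2 zfs.blocksize   # expected: 32KiB
lxc storage volume get zfs container/c1 zfs.blocksize   # expected: empty (pool default)
```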
However, when we enable `zfs.block_mode` on the pool:
```
# Enable zfs.block_mode on storage
lxc storage set zfs volume.zfs.block_mode=true
lxc launch ubuntu:22.04 c3 -s zfs --no-profiles
lxc launch ubuntu:22.04 c4 -s zfs --no-profiles -d root,initial.zfs.blocksize=32KiB
zfs list -r zfs/containers -o name,volblocksize,mountpoint
# NAME VOLBLOCK MOUNTPOINT
# zfs/containers - legacy
# zfs/containers/c1 8K -
# zfs/containers/c2 32K -
# zfs/containers/c3 8K -
# zfs/containers/c4 8K -
# ^ Not OK: We can see that `zfs.blocksize` does not differ for c3 and c4
zfs list -r zfs/images -o name,volblocksize,mountpoint
# NAME VOLBLOCK MOUNTPOINT
# zfs/images - legacy
# zfs/images/be57f822968b4f2831627e74590f887d5945cc7426361780fb3958327a6706be_ext4 8K -
```

The same actually happens if we change `volume.zfs.blocksize` on the pool itself:
```
# Fresh install of LXD.
lxc storage create zfs zfs
lxc storage set zfs volume.zfs.block_mode=true
lxc launch ubuntu:22.04 c1 -s zfs --no-profiles
# Set zfs.blocksize to 32KiB and create new container.
lxc storage set zfs volume.zfs.blocksize=32KiB
lxc launch ubuntu:22.04 c2 -s zfs --no-profiles
zfs list -r zfs/containers -o name,volblocksize,mountpoint
# NAME VOLBLOCK MOUNTPOINT
# zfs/containers - legacy
# zfs/containers/c1 8K -
# zfs/containers/c2 8K -
zfs list -r zfs/images -o name,volblocksize,mountpoint
# NAME VOLBLOCK MOUNTPOINT
# zfs/images - legacy
# zfs/images/be57f822968b4f2831627e74590f887d5945cc7426361780fb3958327a6706be_ext4 8K -
```

We can easily fix this for the initial configuration by falling back to a non-optimized image unpack when the initial values differ from the optimized image. However, I think that fixing normal optimized images (when `zfs.block_mode` is enabled on the pool) is a separate problem.
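For reference, the underlying ZFS behaviour can be reproduced outside LXD. This is a minimal sketch with hypothetical pool/dataset names (`tank/vol8k` etc.), assuming containers are ZFS clones of the cached optimized image:
```
# volblocksize is fixed when a zvol is created:
zfs create -V 100M -o volblocksize=8K tank/vol8k
zfs snapshot tank/vol8k@snap

# A clone shares its origin's blocks, so it inherits the origin's
# volblocksize and cannot use a different one:
zfs clone tank/vol8k@snap tank/volclone
zfs get -H -o value volblocksize tank/volclone   # prints 8K
# zfs clone -o volblocksize=32K tank/vol8k@snap tank/vol32k
# ^ expected to fail: volblocksize can only be set at creation time
```
If that is indeed the cause, the cached optimized image would have to be regenerated (or bypassed with a non-optimized unpack) whenever the requested blocksize differs.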
LGTM, thanks!
Please can you open an issue about the zfs blocksize problem?
Specification: https://discourse.ubuntu.com/t/instance-volume-configuration-through-disk-device/36762