
Instance volume configuration through disk device #12089

Merged
13 commits merged into canonical:main from the feature/initial-config branch on Sep 25, 2023

Conversation

MusicDin
Member

@MusicDin MusicDin force-pushed the feature/initial-config branch 2 times, most recently from 67ee9da to 4a99a5a Compare August 7, 2023 09:25
@MusicDin MusicDin force-pushed the feature/initial-config branch 5 times, most recently from d90e038 to 666f23f Compare August 24, 2023 09:07
@MusicDin
Member Author

The current implementation allows an optimized image to be used when initial.* settings are present in the volume configuration, but only if the volume configuration matches the pool's default configuration for new volumes.

The initial plan was to first check whether an optimized image already exists for the given volume configuration with initial.* settings, and use it if possible. However, this is only applicable if the pool's default configuration has changed and an optimized image still exists for the previous default configuration.

Essentially, we have the following three scenarios:

  1. Optimized image does not exist. In this case, the optimized image for a volume that uses initial settings is created only if the volume configuration matches the pool's default configuration for new volumes. So this is not a special case.

  2. Optimized image exists and matches the pool's default configuration for new volumes. Again, nothing special here.

  3. Optimized image exists, but it does NOT match the pool's default configuration.

How to achieve this scenario:

lxc storage create pool zfs
lxc storage set pool volume.zfs.block_mode false # Just to emphasize it's set to false

# A new optimized image will be created - it matches the pool's default configuration.
lxc launch ubuntu:22.04 c1 --storage pool

# Now, if we change the default pool configuration, an optimized image exists, but it no longer
# matches the default pool configuration for new volumes.
lxc storage set pool volume.zfs.block_mode true

We now have two sub-scenarios:

  1. Should we use the existing optimized image, even though it differs from the pool's default configuration for new volumes, or unpack the image archive directly into the instance's volume?

     lxc launch ubuntu:22.04 c2 --storage pool --device root,initial.zfs.block_mode=false

  2. What if the initial settings are present in the volume config, and the volume config does NOT match the existing optimized image, but it matches the pool's default configuration for new volumes?

     lxc launch ubuntu:22.04 c2 --storage pool --device root,initial.zfs.block_mode=true

In my opinion, we should always check only the pool's default configuration - because the next instance that is created without initial configuration will replace the existing optimized image.
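The rule proposed above (use the optimized image only when the volume's effective configuration still equals the pool's defaults) could be sketched roughly as follows. This is a minimal illustration; the function and variable names are hypothetical and not LXD's actual internals:

```go
package main

import (
	"fmt"
	"strings"
)

// canUseOptimizedImage returns true only when the volume's effective
// configuration (pool defaults overridden by any initial.* device keys)
// is identical to the pool's default configuration for new volumes.
// Any deviation forces a plain, non-optimized image unpack.
// Hypothetical helper for illustration only.
func canUseOptimizedImage(poolDefaults, deviceConfig map[string]string) bool {
	effective := make(map[string]string, len(poolDefaults))
	for k, v := range poolDefaults {
		effective[k] = v
	}

	// Apply initial.* overrides from the device config.
	for k, v := range deviceConfig {
		if key, ok := strings.CutPrefix(k, "initial."); ok {
			effective[key] = v
		}
	}

	// An initial.* key that is not part of the pool defaults also counts
	// as a deviation.
	if len(effective) != len(poolDefaults) {
		return false
	}
	for k, v := range poolDefaults {
		if effective[k] != v {
			return false
		}
	}
	return true
}

func main() {
	defaults := map[string]string{"zfs.block_mode": "false"}
	fmt.Println(canUseOptimizedImage(defaults, map[string]string{"initial.zfs.block_mode": "false"})) // true
	fmt.Println(canUseOptimizedImage(defaults, map[string]string{"initial.zfs.block_mode": "true"}))  // false
}
```

With this rule there is no need to inspect any existing optimized image at all: a stale image (scenario 3) simply fails the comparison and triggers a plain unpack.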

@tomponline What are your thoughts about this?

@tomponline
Member

In my opinion, we should always check only the pool's default configuration - because the next instance that is created without initial configuration will replace the existing optimized image.

Agreed, in scenario 3 "Optimized image exists, but it does NOT match the pool's default configuration." the new instance with initial configuration should use a non-optimized image unpack.

@MusicDin MusicDin marked this pull request as ready for review August 24, 2023 13:12
@MusicDin MusicDin force-pushed the feature/initial-config branch 2 times, most recently from 0eec45a to 45cda22 Compare August 31, 2023 08:10
@MusicDin MusicDin force-pushed the feature/initial-config branch 5 times, most recently from ce4b572 to 67e3670 Compare September 1, 2023 14:47
@tomponline tomponline marked this pull request as draft September 20, 2023 08:27
@MusicDin
Member Author

Agree. Thanks.

@MusicDin
Member Author

MusicDin commented Sep 20, 2023

So, after some debugging, the issue seems to be in the optimized image, but the issue was not introduced by the initial configuration - it just happened that we hit it after adding some tests for zfs.blocksize.

For example:

lxc storage create zfs zfs

# zfs.block_mode is false by default, just to emphasize it
lxc storage set zfs volume.zfs.block_mode=false

lxc launch ubuntu:22.04 c1 -s zfs --no-profiles -d root,initial.zfs.block_mode=true
lxc launch ubuntu:22.04 c2 -s zfs --no-profiles -d root,initial.zfs.block_mode=true -d root,initial.zfs.blocksize=32KiB


zfs list -r zfs/containers -o name,volblocksize,mountpoint
# NAME               VOLBLOCK  MOUNTPOINT
# zfs/containers            -  legacy
# zfs/containers/c1        8K  -
# zfs/containers/c2       32K  -

# ^ OK: We can see that `zfs.blocksize` differs for c1 and c2

Up to this point everything works as expected, because neither container uses an optimized image. However, when we set zfs.block_mode=true on the storage pool, an optimized image will be created and used for all containers that match the pool's default settings (which means that block mode and filesystem must match).

# Enable zfs.block_mode on storage
lxc storage set zfs volume.zfs.block_mode=true

lxc launch ubuntu:22.04 c3 -s zfs --no-profiles
lxc launch ubuntu:22.04 c4 -s zfs --no-profiles -d root,initial.zfs.blocksize=32KiB

zfs list -r zfs/containers -o name,volblocksize,mountpoint
# NAME               VOLBLOCK  MOUNTPOINT
# zfs/containers            -  legacy
# zfs/containers/c1        8K  -
# zfs/containers/c2       32K  -
# zfs/containers/c3        8K  -
# zfs/containers/c4        8K  -

# ^ Not OK: We can see that `zfs.blocksize` does not differ for c3 and c4

zfs list -r zfs/images -o name,volblocksize,mountpoint
# NAME                                                                              VOLBLOCK  MOUNTPOINT
# zfs/images                                                                               -  legacy
# zfs/images/be57f822968b4f2831627e74590f887d5945cc7426361780fb3958327a6706be_ext4        8K  -

The same actually happens if we change zfs.blocksize on the storage once an optimized image is created (also tested on lxd-5.17-e5ead86).

# Fresh install of LXD.
lxc storage create zfs zfs
lxc storage set zfs volume.zfs.block_mode=true

lxc launch ubuntu:22.04 c1 -s zfs --no-profiles

# Set zfs.blocksize to 32KiB and create new container.
lxc storage set zfs volume.zfs.blocksize=32KiB
lxc launch ubuntu:22.04 c2 -s zfs --no-profiles

zfs list -r zfs/containers -o name,volblocksize,mountpoint
# NAME               VOLBLOCK  MOUNTPOINT
# zfs/containers            -  legacy
# zfs/containers/c1        8K  -
# zfs/containers/c2        8K  -

zfs list -r zfs/images -o name,volblocksize,mountpoint
# NAME                                                                              VOLBLOCK  MOUNTPOINT
# zfs/images                                                                               -   legacy
# zfs/images/be57f822968b4f2831627e74590f887d5945cc7426361780fb3958327a6706be_ext4        8K  -

We can easily fix the initial configuration for zfs.blocksize - if block sizes differ, we create the container from a non-optimized image.

However, I think that fixing normal optimized images (when zfs.blocksize is changed on the storage pool) will require us to change how image fingerprints are generated, because the image fingerprint remains the same regardless of the blocksize, preventing a new optimized image from being generated if one already exists.
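One conceivable direction for this fingerprint problem is to derive the optimized image volume's on-pool name from the fingerprint, the filesystem, and the config keys that shape the unpacked image (such as zfs.blocksize), so that a changed blocksize yields a distinct image volume instead of silently reusing a stale one. The naming scheme below is purely a hypothetical sketch, not LXD's actual one (LXD currently uses the `<fingerprint>_<fs>` form shown in the listings above):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// imageVolumeName builds a name for an optimized image volume. When a
// blocksize is set, a short hash of it is appended, so images unpacked
// with different blocksizes get distinct volume names. Hypothetical
// scheme for illustration only.
func imageVolumeName(fingerprint, fsType, blocksize string) string {
	if blocksize == "" {
		// Fall back to the current-style "<fingerprint>_<fs>" name.
		return fmt.Sprintf("%s_%s", fingerprint, fsType)
	}
	sum := sha256.Sum256([]byte("zfs.blocksize=" + blocksize))
	return fmt.Sprintf("%s_%s_%x", fingerprint, fsType, sum[:4])
}

func main() {
	fp := "be57f822968b" // truncated fingerprint, for brevity
	fmt.Println(imageVolumeName(fp, "ext4", ""))
	fmt.Println(imageVolumeName(fp, "ext4", "32KiB"))
}
```

The trade-off of any such scheme is that each distinct blocksize keeps its own optimized image on the pool, so some form of pruning of unused image volumes would also be needed.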

@MusicDin MusicDin force-pushed the feature/initial-config branch 2 times, most recently from c7fc5d3 to ab020e0 Compare September 25, 2023 10:50
@MusicDin MusicDin marked this pull request as ready for review September 25, 2023 13:51
@tomponline
Member

LGTM, thanks!

@tomponline
Member

Please can you open an issue about the zfs blocksize problem?

@tomponline tomponline merged commit 188c290 into canonical:main Sep 25, 2023
25 checks passed
Labels: API (Changes to the REST API), Documentation (Documentation needs updating)
4 participants