Merge pull request #5 from Vicente-Cheng/misc-improvement
misc improvement
bk201 authored Sep 18, 2024
2 parents 840ec7c + 94e7c8c commit f14e909
Showing 173 changed files with 8,564 additions and 2,737 deletions.
13 changes: 3 additions & 10 deletions Dockerfile.dapper
@@ -1,11 +1,11 @@
 
-FROM registry.suse.com/bci/golang:1.22
+FROM golang:1.22.7-bookworm
 
 ARG DAPPER_HOST_ARCH
 ENV HOST_ARCH=${DAPPER_HOST_ARCH} ARCH=${DAPPER_HOST_ARCH}
 
-RUN zypper -n rm container-suseconnect && \
-    zypper -n install git curl docker gzip tar wget awk make binutils
+RUN apt update
+RUN apt install -y git curl gzip tar gawk docker wget make binutils gcc linux-libc-dev gcc-11-aarch64-linux-gnu
 
 ## install golangci
 RUN curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s v1.57.1
@@ -15,13 +15,6 @@ RUN curl -sSfL https://github.com/docker/buildx/releases/download/v0.13.1/buildx
     chmod +x buildx-v0.13.1.linux-${ARCH} && \
     mv buildx-v0.13.1.linux-${ARCH} /usr/local/bin/buildx
 
-## install controller-gen
-RUN go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.6.2
-
-
-# install openapi-gen
-RUN go install k8s.io/code-generator/cmd/openapi-gen@v0.23.7
-
 ENV DAPPER_ENV REPO TAG
 ENV DAPPER_SOURCE /go/src/github.com/harvester/csi-driver-lvm
 ENV DAPPER_OUTPUT ./bin
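The new base image also gains `gcc-11-aarch64-linux-gnu`, which suggests cross-compiled cgo builds for arm64. A rough illustration of how such a build could be invoked (not taken from this repository's Makefile, whose wiring is not shown in this diff):

```bash
# Hypothetical arm64 cross-build using the toolchain installed above;
# the Debian package gcc-11-aarch64-linux-gnu provides aarch64-linux-gnu-gcc-11.
CGO_ENABLED=1 GOOS=linux GOARCH=arm64 CC=aarch64-linux-gnu-gcc-11 go build ./...
```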
97 changes: 40 additions & 57 deletions README.md
@@ -1,82 +1,65 @@
-# csi-driver-lvm #
+# Harvester-csi-driver-lvm
 
-CSI DRIVER LVM utilizes local storage of Kubernetes nodes to provide persistent storage for pods.
-
-It automatically creates hostPath based persistent volumes on the nodes.
-
-Underneath it creates a LVM logical volume on the local disks. A comma-separated list of grok patterns must be specified to select the disks to use.
-
-This CSI driver is derived from [csi-driver-host-path](https://github.com/kubernetes-csi/csi-driver-host-path) and [csi-lvm](https://github.com/metal-stack/csi-lvm)
+Harvester-CSI-Driver-LVM is derived from [metal-stack/csi-driver-lvm](https://github.com/metal-stack/csi-driver-lvm).
+
+## Introduction
+
+Harvester-CSI-Driver-LVM uses the local storage of a node to provide persistent volumes for workloads, typically VM workloads. Because the data lives on a single node, a VM that uses such a volume cannot be migrated to another node, but local storage offers better performance.
+
+Before using the driver, you must have a pre-established Volume Group (VG) on the node; the VG name is specified in the StorageClass (see the preparation sketch below).
 
-## Currently it can create, delete, mount, unmount and resize block and filesystem volumes via lvm ##
+Harvester-CSI-Driver-LVM provides the following features:
+- On-demand creation of Logical Volumes (LVs).
+- Support for the striped and dm-thin LVM types.
+- Support for raw block volumes.
+- Support for volume expansion.
+- Support for volume snapshots.
+- Support for volume cloning.
 
-For the special case of block volumes, the filesystem-expansion has to be performed by the app using the block device
+**NOTE**: The snapshot/clone feature only works within the same node. Cloning works across different Volume Groups.

 ## Installation ##
 
-**Helm charts for installation are located in a separate repository called [helm-charts](https://github.com/metal-stack/helm-charts). If you would like to contribute to the helm chart, please raise an issue or pull request there.**
-
-You have to set the devicePattern for your hardware to specify which disks should be used to create the volume group.
+You can install Harvester-CSI-Driver-LVM with Helm, either from a local checkout of the chart or from the remote chart repository.
 
-```bash
-helm install --repo https://helm.metal-stack.io mytest helm/csi-driver-lvm --set lvm.devicePattern='/dev/nvme[0-9]n[0-9]'
-```
+1. Install Harvester-CSI-Driver-LVM from a local checkout:
 
-Now you can use one of following storageClasses:
+```
+$ git clone https://github.com/harvester/csi-driver-lvm.git
+$ cd csi-driver-lvm/deploy
+$ helm install harvester-lvm-csi-driver charts/ -n harvester-system
+```
 
-* `csi-driver-lvm-linear`
-* `csi-driver-lvm-mirror`
-* `csi-driver-lvm-striped`
+2. Install Harvester-CSI-Driver-LVM from the remote repository:
 
-To get the previous old and now deprecated `csi-lvm-sc-linear`, ... storageclasses, set helm-chart value `compat03x=true`.
+```
+$ helm repo add harvester https://charts.harvesterhci.io
+$ helm install harvester-lvm-csi-driver harvester/harvester-lvm-csi-driver -n harvester-system
+```
 
-## Migration ##
+After the installation, check the status of the related pods:
+
+```
+$ kubectl get pods -A | grep harvester-csi-driver-lvm
+harvester-system   harvester-csi-driver-lvm-controller-0   4/4   Running   0             3h2m
+harvester-system   harvester-csi-driver-lvm-plugin-ctlgp   3/3   Running   1 (14h ago)   14h
+harvester-system   harvester-csi-driver-lvm-plugin-qxxqs   3/3   Running   1 (14h ago)   14h
+harvester-system   harvester-csi-driver-lvm-plugin-xktx2   3/3   Running   0             14h
+```
 
-If you want to migrate your existing PVC to / from csi-driver-lvm, you can use [korb](https://github.com/BeryJu/korb).
+The CSI driver is installed in the `harvester-system` namespace, and the plugin pods run on every node.
+
+After installation, refer to the `examples` directory for some example CRDs.
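For instance, a typical smoke test with the example manifests looks like the following (the file names follow the `examples` directory as referenced elsewhere in this repository; adjust them if your checkout differs):

```bash
# Create a PVC and a pod that consumes it, verify, then clean up.
kubectl apply -f examples/csi-pvc.yaml
kubectl apply -f examples/csi-app.yaml
kubectl get pvc,pod
kubectl delete -f examples/csi-app.yaml
kubectl delete -f examples/csi-pvc.yaml
```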

 ### Todo ###
 
-* implement CreateSnapshot(), ListSnapshots(), DeleteSnapshot()
+* Implement the unit tests
+* Implement the webhook for validation
 
-### Test ###
-
-```bash
-kubectl apply -f examples/csi-pvc-raw.yaml
-kubectl apply -f examples/csi-pod-raw.yaml
-
-kubectl apply -f examples/csi-pvc.yaml
-kubectl apply -f examples/csi-app.yaml
-
-kubectl delete -f examples/csi-pod-raw.yaml
-kubectl delete -f examples/csi-pvc-raw.yaml
-
-kubectl delete -f examples/csi-app.yaml
-kubectl delete -f examples/csi-pvc.yaml
-```
-
-### Development ###
-
-In order to run the integration tests locally, you need to create two loop devices on your host machine. Make sure the loop device mount paths are not used on your system (the default paths are `/dev/loop10{0,1}`).
-
-You can create these loop devices like this:
-
-```bash
-for i in 100 101; do fallocate -l 1G loop${i}.img ; sudo losetup /dev/loop${i} loop${i}.img; done
-sudo losetup -a
-# use this for recreation or cleanup
-# for i in 100 101; do sudo losetup -d /dev/loop${i}; rm -f loop${i}.img; done
-```
-
-You can then run the tests against a kind cluster, running:
-
-```bash
-make test
-```
-
-To recreate or cleanup the kind cluster:
-
-```bash
-make test-cleanup
-```
+### HowTo Build
+
+```
+$ make
+```
+
+The above command runs the validations and builds the target image.
+You can override the image repository and tag with the `REPO` and `TAG` environment variables, as shown below.
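For example (the repository and tag values here are illustrative):

```bash
# Build and tag the image under your own registry namespace.
REPO=myregistry TAG=dev make
```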
141 changes: 141 additions & 0 deletions cmd/provisioner/clonelv.go
@@ -0,0 +1,141 @@
package main

import (
"fmt"
"os"
"strings"
"syscall"
"time"

"github.com/urfave/cli/v2"
"k8s.io/klog/v2"

lvm "github.com/harvester/csi-driver-lvm/pkg/lvm"
)

func cloneLVCmd() *cli.Command {
return &cli.Command{
Name: "clonelv",
Flags: []cli.Flag{
&cli.StringFlag{
Name: flagSrcLVName,
Usage: "Required. Source LV name.",
},
&cli.StringFlag{
Name: flagSrcVGName,
Usage: "Required. Source VG name",
},
&cli.StringFlag{
Name: flagSrcType,
Usage: "Required. Source type, can be either striped or dm-thin",
},
&cli.StringFlag{
Name: flagLVName,
Usage: "Required. Target LV",
},
&cli.StringFlag{
Name: flagVGName,
Usage: "Required. the name of the volumegroup",
},
&cli.StringFlag{
Name: flagLVMType,
Usage: "Required. type of lvs, can be either striped or dm-thin",
},
&cli.Uint64Flag{
Name: flagLVSize,
Usage: "Required. The size of the lv in MiB",
},
},
Action: func(c *cli.Context) error {
if err := clonelv(c); err != nil {
klog.Fatalf("Error creating lv: %v", err)
return err
}
return nil
},
}
}

func clonelv(c *cli.Context) error {
srcLvName := c.String(flagSrcLVName)
if srcLvName == "" {
return fmt.Errorf("invalid empty flag %v", flagSrcLVName)
}
srcVgName := c.String(flagSrcVGName)
if srcVgName == "" {
return fmt.Errorf("invalid empty flag %v", flagSrcVGName)
}
srcType := c.String(flagSrcType)
if srcType == "" {
return fmt.Errorf("invalid empty flag %v", flagSrcType)
}
dstLV := c.String(flagLVName)
if dstLV == "" {
return fmt.Errorf("invalid empty flag %v", flagLVName)
}
dstVGName := c.String(flagVGName)
if dstVGName == "" {
return fmt.Errorf("invalid empty flag %v", flagVGName)
}
dstLVType := c.String(flagLVMType)
if dstLVType == "" {
return fmt.Errorf("invalid empty flag %v", flagLVMType)
}
dstSize := c.Uint64(flagLVSize)
if dstSize == 0 {
return fmt.Errorf("invalid empty flag %v", flagLVSize)
}

klog.Infof("Clone from src: %s, to dst: %s/%s", srcLvName, dstVGName, dstLV)

if !lvm.VgExists(dstVGName) {
lvm.VgActivate()
time.Sleep(1 * time.Second) // jitter
if !lvm.VgExists(dstVGName) {
return fmt.Errorf("vg %s does not exist, please check the corresponding VG is created", dstVGName)
}
}

klog.Infof("clone lv %s, vg: %s, type: %s", srcLvName, srcVgName, srcType)
if strings.HasPrefix(srcLvName, snapshotPrefix) && srcType == lvm.DmThinType && srcType == dstLVType && srcVgName == dstVGName {
// special case: a dm-thin clone within the same VG can be created as a writable snapshot of the source
_, err := lvm.CreateSnapshot(dstLV, srcLvName, srcVgName, int64(dstSize), dstLVType, createSnapshotForClone)
if err != nil {
return fmt.Errorf("unable to create snapshot: %w", err)
}
klog.Infof("lv: %s/%s cloned from %s", dstVGName, dstLV, srcLvName)
return nil
}

// check source dev
srcDev := fmt.Sprintf("/dev/%s/%s", srcVgName, srcLvName)
src, err := os.OpenFile(srcDev, syscall.O_RDONLY|syscall.O_DIRECT, 0)
if err != nil {
return fmt.Errorf("unable to open source device: %w", err)
}
defer src.Close()

output, err := lvm.CreateLVS(dstVGName, dstLV, dstSize, dstLVType)
if err != nil {
return fmt.Errorf("unable to create lv: %w output:%s", err, output)
}
klog.Infof("lv: %s created, vg:%s size:%d type:%s", dstLV, dstVGName, dstSize, dstLVType)

dst, err := os.OpenFile(fmt.Sprintf("/dev/%s/%s", dstVGName, dstLV), syscall.O_WRONLY|syscall.O_DIRECT, 0)
if err != nil {
return fmt.Errorf("unable to open target device: %w", err)
}
defer dst.Close()

// Clone the source device to the target device
if err := lvm.CloneDevice(src, dst); err != nil {
return fmt.Errorf("unable to clone device: %w", err)
}
if err := dst.Sync(); err != nil {
return fmt.Errorf("unable to sync target device: %w", err)
}

klog.Infof("lv: %s/%s cloned from %s", dstVGName, dstLV, srcDev)

return nil
}
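For the generic path above, the clone is a raw byte-for-byte copy between the two LV device nodes via `lvm.CloneDevice`, whose implementation lives in `pkg/lvm` and is not shown in this diff. Conceptually it is equivalent to something like the following, though the provisioner performs the copy in-process rather than shelling out:

```bash
# Rough shell equivalent of the in-process device clone;
# the VG/LV names are placeholders.
dd if=/dev/src-vg/src-lv of=/dev/dst-vg/dst-lv \
   bs=4M iflag=direct oflag=direct conv=fsync status=progress
```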
22 changes: 9 additions & 13 deletions cmd/provisioner/createlv.go
@@ -2,6 +2,7 @@ package main
 
 import (
 	"fmt"
+	"time"
 
 	"github.com/urfave/cli/v2"
 	"k8s.io/klog/v2"
@@ -29,10 +30,6 @@ func createLVCmd() *cli.Command {
 				Name:  flagLVMType,
 				Usage: "Required. type of lvs, can be either striped or mirrored",
 			},
-			&cli.StringFlag{
-				Name:  flagDevicesPattern,
-				Usage: "Required. comma-separated grok patterns of the physical volumes to use.",
-			},
 		},
 		Action: func(c *cli.Context) error {
 			if err := createLV(c); err != nil {
@@ -61,19 +58,18 @@ func createLV(c *cli.Context) error {
 	if lvmType == "" {
 		return fmt.Errorf("invalid empty flag %v", flagLVMType)
 	}
-	devicesPattern := c.String(flagDevicesPattern)
-	if devicesPattern == "" {
-		return fmt.Errorf("invalid empty flag %v", flagDevicesPattern)
-	}
 
-	klog.Infof("create lv %s size:%d vg:%s devicespattern:%s type:%s", lvName, lvSize, vgName, devicesPattern, lvmType)
+	klog.Infof("create lv %s size:%d vg:%s type:%s", lvName, lvSize, vgName, lvmType)
 
-	output, err := lvm.CreateVG(vgName, devicesPattern)
-	if err != nil {
-		return fmt.Errorf("unable to create vg: %w output:%s", err, output)
+	if !lvm.VgExists(vgName) {
+		lvm.VgActivate()
+		time.Sleep(1 * time.Second) // jitter
+		if !lvm.VgExists(vgName) {
+			return fmt.Errorf("vg %s does not exist, please check the corresponding VG is created", vgName)
+		}
 	}
 
-	output, err = lvm.CreateLVS(vgName, lvName, lvSize, lvmType)
+	output, err := lvm.CreateLVS(vgName, lvName, lvSize, lvmType)
 	if err != nil {
 		return fmt.Errorf("unable to create lv: %w output:%s", err, output)
 	}
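With the devices-pattern flag gone, `createlv` now assumes the VG already exists and merely re-activates it if it is not yet visible. A hypothetical invocation of the provisioner binary after this change (the binary name and the literal flag spellings are assumptions based on the `flag*` constants, which are defined elsewhere in this package):

```bash
# Assumed flag names; check cmd/provisioner for the actual constants.
csi-lvmplugin-provisioner createlv \
  --lvname pvc-0001 --vgname vg-storage --lvsize 1024 --lvmtype striped
```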