[BUG] minio addon install with serviceAccount.create=true, cluster create failed #7222

Closed
JashBook opened this issue Apr 15, 2024 · 1 comment · Fixed by apecloud/kubeblocks-addons#495
Labels: bug, severity/major (Great chance user will encounter the same problem)

JashBook (Collaborator) commented Apr 15, 2024

Describe the bug
Installing the minio addon with serviceAccount.create=true and then creating a MinIO cluster fails: the StatefulSet cannot create its pod because the service account it references does not exist.

Warning FailedCreate 34s (x14 over 75s) statefulset-controller create Pod minio-cluster-minio-0 in StatefulSet minio-cluster-minio failed error: pods "minio-cluster-minio-0" is forbidden: error looking up service account default/minio-8.0.17: serviceaccount "minio-8.0.17" not found

To Reproduce
Steps to reproduce the behavior:

  1. Install the minio addon (see the note after the command outputs below)
helm install minio-8.0.17 kubeblocks-addons/minio
  2. Create a cluster
helm install minio-cluster addons/minio-cluster
  3. See the error
kubectl get cluster
NAME            CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     AGE
minio-cluster   minio-8.0.17                   Delete               Creating   51s

 kubectl get sts
NAME                           READY   AGE
minio-cluster-minio            0/1     55s
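
Note on step 1: the issue title refers to serviceAccount.create=true. Assuming the addon chart exposes that value and that it was enabled for this reproduction, the install in step 1 would look like the following (the flag placement is an assumption, not copied from the original command):

helm install minio-8.0.17 kubeblocks-addons/minio --set serviceAccount.create=true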

describe sts

kubectl describe sts minio-cluster-minio
Name:               minio-cluster-minio
Namespace:          default
CreationTimestamp:  Mon, 15 Apr 2024 17:26:52 +0800
Selector:           app.kubernetes.io/instance=minio-cluster,app.kubernetes.io/managed-by=kubeblocks,app.kubernetes.io/name=minio-8.0.17,apps.kubeblocks.io/component-name=minio
Labels:             app.kubernetes.io/component=minio
                    app.kubernetes.io/instance=minio-cluster
                    app.kubernetes.io/managed-by=kubeblocks
                    app.kubernetes.io/name=minio-8.0.17
                    apps.kubeblocks.io/component-name=minio
                    rsm.workloads.kubeblocks.io/controller-generation=1
Annotations:        kubeblocks.io/generation: 1
Replicas:           1 desired | 0 total
Update Strategy:    RollingUpdate
  Partition:        0
Pods Status:        0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app.kubernetes.io/component=minio
                    app.kubernetes.io/instance=minio-cluster
                    app.kubernetes.io/managed-by=kubeblocks
                    app.kubernetes.io/name=minio-8.0.17
                    app.kubernetes.io/version=
                    apps.kubeblocks.io/component-name=minio
  Service Account:  minio-8.0.17
  Containers:
   minio:
    Image:      minio/minio:RELEASE.2021-02-14T04-01-33Z
    Port:       9000/TCP
    Host Port:  0/TCP
    Command:
      /bin/sh
      -ce
      /usr/bin/docker-entrypoint.sh minio -S /etc/minio/certs/ server /export
    Limits:
      cpu:     500m
      memory:  512Mi
    Requests:
      cpu:      500m
      memory:   512Mi
    Liveness:   http-get http://:http/minio/health/live delay=5s timeout=5s period=5s #success=1 #failure=5
    Readiness:  tcp-socket :http delay=5s timeout=1s period=5s #success=1 #failure=5
    Startup:    tcp-socket :http delay=0s timeout=5s period=10s #success=1 #failure=60
    Environment Variables from:
      minio-cluster-minio-env      ConfigMap  Optional: false
      minio-cluster-minio-rsm-env  ConfigMap  Optional: false
    Environment:
      KB_POD_NAME:        (v1:metadata.name)
      KB_POD_UID:         (v1:metadata.uid)
      KB_NAMESPACE:       (v1:metadata.namespace)
      KB_SA_NAME:         (v1:spec.serviceAccountName)
      KB_NODENAME:        (v1:spec.nodeName)
      KB_HOST_IP:         (v1:status.hostIP)
      KB_POD_IP:          (v1:status.podIP)
      KB_POD_IPS:         (v1:status.podIPs)
      KB_HOSTIP:          (v1:status.hostIP)
      KB_PODIP:           (v1:status.podIP)
      KB_PODIPS:          (v1:status.podIPs)
      KB_POD_FQDN:       $(KB_POD_NAME).minio-cluster-minio-headless.$(KB_NAMESPACE).svc
      MINIO_ACCESS_KEY:  <set to the key 'accesskey' in secret 'minio-cluster-conn-credential'>  Optional: false
      MINIO_SECRET_KEY:  <set to the key 'secretkey' in secret 'minio-cluster-conn-credential'>  Optional: false
    Mounts:
      /export from data (rw)
  Volumes:
   data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
Volume Claims:
  Name:          data
  StorageClass:  
  Labels:        apps.kubeblocks.io/vct-name=data
                 kubeblocks.io/volume-type=data
  Annotations:   <none>
  Capacity:      1Gi
  Access Modes:  [ReadWriteOnce]
Events:
  Type     Reason            Age                 From                    Message
  ----     ------            ----                ----                    -------
  Normal   SuccessfulCreate  75s                 statefulset-controller  create Claim data-minio-cluster-minio-0 Pod minio-cluster-minio-0 in StatefulSet minio-cluster-minio success
  Warning  FailedCreate      34s (x14 over 75s)  statefulset-controller  create Pod minio-cluster-minio-0 in StatefulSet minio-cluster-minio failed error: pods "minio-cluster-minio-0" is forbidden: error looking up service account default/minio-8.0.17: serviceaccount "minio-8.0.17" not found

get sa

kubectl get sa minio-8.0.17
Error from server (NotFound): serviceaccounts "minio-8.0.17" not found
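
As a temporary workaround (this only unblocks the StatefulSet, it does not fix the addon chart), the missing service account can be created manually, using the name and namespace from the error above:

kubectl create serviceaccount minio-8.0.17 -n default

The statefulset-controller retries pod creation automatically, so minio-cluster-minio-0 should be created shortly after the service account exists.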

get cd yaml

kubectl get cd minio-8.0.17 -oyaml
apiVersion: apps.kubeblocks.io/v1alpha1
kind: ClusterDefinition
metadata:
  annotations:
    meta.helm.sh/release-name: minio-8.0.17
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2024-04-15T09:26:13Z"
  finalizers:
  - clusterdefinition.kubeblocks.io/finalizer
  generation: 2
  labels:
    app.kubernetes.io/managed-by: Helm
  name: minio-8.0.17
  resourceVersion: "3258722"
  uid: d980bda7-44f1-496f-b1a3-4dd34cfb736e
spec:
  componentDefs:
  - characterType: minio
    name: minio
    podSpec:
      containers:
      - command:
        - /bin/sh
        - -ce
        - /usr/bin/docker-entrypoint.sh minio -S /etc/minio/certs/ server /export
        env:
        - name: MINIO_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: accesskey
              name: $(CONN_CREDENTIAL_SECRET_NAME)
        - name: MINIO_SECRET_KEY
          valueFrom:
            secretKeyRef:
              key: secretkey
              name: $(CONN_CREDENTIAL_SECRET_NAME)
        image: minio/minio:RELEASE.2021-02-14T04-01-33Z
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /minio/health/live
            port: http
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 5
        name: minio
        ports:
        - containerPort: 9000
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 5
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
          tcpSocket:
            port: http
          timeoutSeconds: 1
        resources: {}
        startupProbe:
          failureThreshold: 60
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: http
          timeoutSeconds: 5
        volumeMounts:
        - mountPath: /export
          name: data
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsUser: 1000
      serviceAccountName: minio-8.0.17
    service:
      ports:
      - name: http
        port: 9000
        protocol: TCP
        targetPort: 9000
    statefulSpec:
      updateStrategy: BestEffortParallel
    volumeTypes:
    - name: data
      type: data
    workloadType: Stateful
  connectionCredential:
    accesskey: $(RANDOM_PASSWD)
    endpoint: $(SVC_FQDN):$(SVC_PORT_http)
    host: $(SVC_FQDN)
    port: $(SVC_PORT_http)
    secretkey: $(RANDOM_PASSWD)
  type: minio
status:
  observedGeneration: 2
  phase: Available
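
The ClusterDefinition above hardcodes serviceAccountName: minio-8.0.17 in the component podSpec, so pod creation is rejected unless a ServiceAccount with exactly that name exists in the namespace where the cluster is created. A minimal sketch of the template the addon chart would need to render when serviceAccount.create=true is shown below; the template path, value names, and use of the release name are assumptions, not taken from the actual chart:

# templates/serviceaccount.yaml (hypothetical sketch, not the chart's actual template)
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
  # Must match the serviceAccountName hardcoded in the ClusterDefinition podSpec,
  # which in this reproduction is the release name (minio-8.0.17).
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
{{- end }}

Note that the pod looks its service account up in its own namespace, so this only helps when the addon release and the cluster live in the same namespace (both are default in the reproduction above).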

Expected behavior
The cluster is created successfully: installing the addon with serviceAccount.create=true should also create the service account referenced by the component podSpec (minio-8.0.17), so the StatefulSet can create its pod.


Environment:
kbcli version 
Kubernetes: v1.26.3
KubeBlocks: 0.8.3
kbcli: 0.8.3


@JashBook JashBook added the bug label Apr 30, 2024
@JashBook JashBook transferred this issue from apecloud/kubeblocks-addons Apr 30, 2024
@JashBook JashBook added the severity/major Great chance user will encounter the same problem label Apr 30, 2024
JashBook (Collaborator, Author) commented:

The same error occurs on KubeBlocks 0.9:

kbcli version
Kubernetes: v1.26.14-gke.1044000
KubeBlocks: 0.9.0-beta.17
kbcli: 0.9.0-beta.4
helm install milvus-minio kubeblocks-addons/minio-cluster
2024-04-30T07:18:39.653Z	ERROR	Reconciler error	{"controller": "instanceset", "controllerGroup": "workloads.kubeblocks.io", "controllerKind": "InstanceSet", "InstanceSet": {"name":"milvus-minio-minio","namespace":"default"}, "namespace": "default", "name": "milvus-minio-minio", "reconcileID": "20b3b4e8-800f-40e8-af84-6f3bd3b36145", "error": "pods \"milvus-minio-minio-0\" is forbidden: error looking up service account default/minio: serviceaccount \"minio\" not found"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.5/pkg/internal/controller/controller.go:329
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.5/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.5/pkg/internal/controller/controller.go:227
2024-04-30T07:19:20.680Z	ERROR	Reconciler error	{"controller": "instanceset", "controllerGroup": "workloads.kubeblocks.io", "controllerKind": "InstanceSet", "InstanceSet": {"name":"milvus-minio-minio","namespace":"default"}, "namespace": "default", "name": "milvus-minio-minio", "reconcileID": "4093bae0-67d7-4fe8-9d78-51f52791cf1d", "error": "pods \"milvus-minio-minio-0\" is forbidden: error looking up service account default/minio: serviceaccount \"minio\" not found"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.5/pkg/internal/controller/controller.go:329
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.5/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.5/pkg/internal/controller/controller.go:227
2024-04-30T07:20:42.660Z	ERROR	Reconciler error	{"controller": "instanceset", "controllerGroup": "workloads.kubeblocks.io", "controllerKind": "InstanceSet", "InstanceSet": {"name":"milvus-minio-minio","namespace":"default"}, "namespace": "default", "name": "milvus-minio-minio", "reconcileID": "7da839e4-9781-4603-9131-fc7e6b3f551f", "error": "pods \"milvus-minio-minio-0\" is forbidden: error looking up service account default/minio: serviceaccount \"minio\" not found"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.5/pkg/internal/controller/controller.go:329
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.5/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.5/pkg/internal/controller/controller.go:227
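
This is the same class of failure; only the referenced service account name changed from the release name to minio. Assuming the same workaround applies until the chart fix in apecloud/kubeblocks-addons#495 is available, creating that service account in the cluster's namespace should unblock the InstanceSet:

kubectl get serviceaccount minio -n default    # confirms it is missing
kubectl create serviceaccount minio -n default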
