You need to expand the vg, but have enough storage #269

iveskim opened this issue Dec 31, 2024 · 2 comments
iveskim commented Dec 31, 2024

Ⅰ. Issue Description

My volume group has enough free space, but open-local reports insufficient space when provisioning the LVM volume.

err info

W1231 09:21:56.435723       1 controller.go:958] Retrying syncing claim "7fd51522-6d52-415e-b6f1-bfabd4bbda0a", failure 8
E1231 09:21:56.435775       1 controller.go:981] error syncing claim "7fd51522-6d52-415e-b6f1-bfabd4bbda0a": failed to provision volume with StorageClass "open-local-lvm-xfs": rpc error: code = Internal desc = CreateVolume: fail to schedule LVM local-7fd51522-6d52-415e-b6f1-bfabd4bbda0a: rpc error: code = InvalidArgument desc = lvm schedule with error Get Response StatusCode 500, Response: failed to allocate local storage for pvc ns-f605cad6af0a/pvc-12a3875af957483986e0-200g: [multipleVGs]not enough lv storage on c1-172-16-0-2/open-local-pool-0, requested size 200Gi,  free size 24572Mi, strategiy spread. you need to expand the vg
I1231 09:21:56.435836       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"ns-f605cad6af0a", Name:"pvc-12a3875af957483986e0-200g", UID:"7fd51522-6d52-415e-b6f1-bfabd4bbda0a", APIVersion:"v1", ResourceVersion:"109570920", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "open-local-lvm-xfs": rpc error: code = Internal desc = CreateVolume: fail to schedule LVM local-7fd51522-6d52-415e-b6f1-bfabd4bbda0a: rpc error: code = InvalidArgument desc = lvm schedule with error Get Response StatusCode 500, Response: failed to allocate local storage for pvc ns-f605cad6af0a/pvc-12a3875af957483986e0-200g: [multipleVGs]not enough lv storage on c1-172-16-0-2/open-local-pool-0, requested size 200Gi,  free size 24572Mi, strategiy spread. you need to expand the vg

Ⅱ. Describe what happened

pvc info

Name:          pvc-12a3875af957483986e0-200g
Namespace:     ns-f605cad6af0a
StorageClass:  open-local-lvm-xfs
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: local.csi.aliyun.com
               volume.kubernetes.io/storage-provisioner: local.csi.aliyun.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       12a3875af957483986e0-76b5bdcc56-5pjfp
Events:
  Type     Reason                Age                   From                                                                                              Message
  ----     ------                ----                  ----                                                                                              -------
  Normal   WaitForFirstConsumer  35m                   persistentvolume-controller                                                                       waiting for first consumer to be created before binding
  Warning  ProvisioningFailed    29m (x13 over 35m)    local.csi.aliyun.com_open-local-controller-56ff5bb455-ftv27_f893c9be-e50b-47ab-8323-a559fd3c3637  failed to provision volume with StorageClass "open-local-lvm-xfs": rpc error: code = Internal desc = CreateVolume: fail to schedule LVM local-7fd51522-6d52-415e-b6f1-bfabd4bbda0a: rpc error: code = InvalidArgument desc = lvm schedule with error Get Response StatusCode 500, Response: failed to allocate local storage for pvc ns-f605cad6af0a/pvc-12a3875af957483986e0-200g: [multipleVGs]not enough lv storage on c1-172-16-0-2/open-local-pool-0, requested size 200Gi,  free size 24572Mi, strategiy spread. you need to expand the vg
  Normal   WaitForPodScheduled   10m (x5 over 30m)     persistentvolume-controller                                                                       waiting for pod 12a3875af957483986e0-76b5bdcc56-5pjfp to be scheduled
  Normal   Provisioning          4m57s (x55 over 35m)  local.csi.aliyun.com_open-local-controller-56ff5bb455-ftv27_f893c9be-e50b-47ab-8323-a559fd3c3637  External provisioner is provisioning volume for claim "ns-f605cad6af0a/pvc-12a3875af957483986e0-200g"
  Normal   ExternalProvisioning  14s (x146 over 35m)   persistentvolume-controller                                                                       waiting for a volume to be created, either by external provisioner "local.csi.aliyun.com" or manually created by system administrator

nls info

apiVersion: csi.aliyun.com/v1alpha1
kind: NodeLocalStorage
metadata:
  creationTimestamp: "2024-12-30T09:49:27Z"
  generation: 1
  name: c1-172-16-0-2
  resourceVersion: "109570975"
  uid: c559673b-60e1-4595-9610-4f699cfe42cf
spec:
  listConfig:
    devices: {}
    mountPoints: {}
    vgs:
      include:
      - open-local-pool-[0-9]+
      - yoda-pool[0-9]+
      - ackdistro-pool
  nodeName: c1-172-16-0-2
  resourceToBeInited:
    vgs:
    - devices:
      - /dev/vdb
      name: open-local-pool-0
  spdkConfig: {}
status:
  filteredStorageInfo:
    updateStatusInfo:
      lastUpdateTime: "2024-12-31T01:17:49Z"
      updateStatus: accepted
    volumeGroups:
    - open-local-pool-0
  nodeStorageInfo:
    deviceInfo:
    - condition: DiskReady
      mediaType: hdd
      name: /dev/sda1
      readOnly: false
      total: 1048576
    - condition: DiskReady
      mediaType: hdd
      name: /dev/sda2
      readOnly: false
      total: 3999996575744
    - condition: DiskReady
      mediaType: hdd
      name: /dev/sda
      readOnly: false
      total: 3999999721472
    - condition: DiskReady
      mediaType: hdd
      name: /dev/sdb
      readOnly: false
      total: 4000787030016
    - condition: DiskReady
      mediaType: hdd
      name: /dev/sdc1
      readOnly: false
      total: 4000776716288
    - condition: DiskReady
      mediaType: hdd
      name: /dev/sdc9
      readOnly: false
      total: 8388608
    - condition: DiskReady
      mediaType: hdd
      name: /dev/sdc
      readOnly: false
      total: 4000787030016
    phase: Running
    state:
      lastHeartbeatTime: "2024-12-31T01:17:49Z"
      status: "True"
      type: DiskReady
    volumeGroups:
    - allocatable: 1099507433472
      available: 455262339072
      condition: DiskReady
      logicalVolumes:
      - condition: DiskReady
        name: local-06c2f31d-36f4-4e20-91ea-afb14ac7b3a1
        total: 214748364800
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-44f8cb03-cc84-4491-a38a-664605eca051
        total: 214748364800
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-f8f2ec0f-ce3d-4ebc-9446-e0163455d1e8
        total: 214748364800
        vgname: open-local-pool-0
      name: open-local-pool-0
      physicalVolumes:
      - /dev/zd0
      total: 1099507433472

blk info

zd0  230:0    0    1T  0 disk
├─open--local--pool--0-local--06c2f31d--36f4--4e20--91ea--afb14ac7b3a1
│    253:0    0  200G  0 lvm  /var/lib/kubelet/pods/1258a02d-49af-44eb-9c80-0ea99066bc1f/volumes/kubernetes.io~csi/local-06c2f31d-36f4-4e20-91ea-afb14ac7b3a1/mount
├─open--local--pool--0-local--f8f2ec0f--ce3d--4ebc--9446--e0163455d1e8
│    253:1    0  200G  0 lvm  /var/lib/kubelet/pods/f40bebd1-c118-46e5-a8e4-d091a15488a8/volumes/kubernetes.io~csi/local-f8f2ec0f-ce3d-4ebc-9446-e0163455d1e8/mount
└─open--local--pool--0-local--44f8cb03--cc84--4491--a38a--664605eca051
     253:2    0  200G  0 lvm  /var/lib/kubelet/pods/a44e1878-ac6e-4fd4-abcb-cd5312554718/volumes/kubernetes.io~csi/local-44f8cb03-cc84-4491-a38a-664605eca051/mount

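Doing the math on the numbers above: the VG open-local-pool-0 is 1 TiB (allocatable 1099507433472 B ≈ 1024 GiB) and only three 200 GiB LVs exist, so about 424 GiB should still be free, which matches the available field (1099507433472 B − 3 × 200 GiB = 455262339072 B). The scheduler, however, reports only 24572 Mi free, which is exactly what would remain if five 200 GiB allocations were counted against this VG (1099507433472 B − 5 × 200 GiB = 24572 MiB), so it looks like the scheduler is tracking two more allocations than actually exist on the node. To double-check the node-side numbers I can run the standard LVM commands (nothing open-local specific):

vgs open-local-pool-0 -o vg_name,vg_size,vg_free --units g
lvs open-local-pool-0 -o lv_name,lv_size --units g
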
Ⅲ. Describe what you expected to happen

Ⅳ. How to reproduce it (as minimally and precisely as possible)

Ⅴ. Anything else we need to know?

Ⅵ. Environment:

  • Open-Local version: v0.7.1
  • OS (e.g. from /etc/os-release): Ubuntu 22.04.3 LTS
  • Kernel (e.g. uname -a): c1-master-1 5.15.0-78-generic
  • Install tools:
  • Others:

iveskim commented Dec 31, 2024

I can create the LV manually on the node:

lvcreate -L 200G -n my-lv open-local-pool-0

blk info

zd0  230:0    0    1T  0 disk
├─open--local--pool--0-local--06c2f31d--36f4--4e20--91ea--afb14ac7b3a1
│    253:0    0  200G  0 lvm  /var/lib/kubelet/pods/1258a02d-49af-44eb-9c80-0ea99066bc1f/volumes/kubernetes.io~csi/local-06c2f31d-36f4-4e20-91ea-afb14ac7b3a1/mount
├─open--local--pool--0-local--f8f2ec0f--ce3d--4ebc--9446--e0163455d1e8
│    253:1    0  200G  0 lvm  /var/lib/kubelet/pods/f40bebd1-c118-46e5-a8e4-d091a15488a8/volumes/kubernetes.io~csi/local-f8f2ec0f-ce3d-4ebc-9446-e0163455d1e8/mount
├─open--local--pool--0-local--44f8cb03--cc84--4491--a38a--664605eca051
│    253:2    0  200G  0 lvm  /var/lib/kubelet/pods/a44e1878-ac6e-4fd4-abcb-cd5312554718/volumes/kubernetes.io~csi/local-44f8cb03-cc84-4491-a38a-664605eca051/mount
└─open--local--pool--0-my--lv
     253:3    0  200G  0 lvm

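(The test LV consumes another 200 GiB of the VG, so it can be removed again afterwards with the standard LVM cleanup command, shown here only for completeness:

lvremove open-local-pool-0/my-lv
)
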
@peter-wangxu (Collaborator) commented

Can you share the scheduler cache?

curl "http://open-local-scheduler-extender:23000/cache?nodeName={node name}"
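
If that service name is not resolvable from where curl is run, a kubectl port-forward works as well (service name and namespace assumed from a default open-local install, adjust if yours differs):

kubectl -n kube-system port-forward svc/open-local-scheduler-extender 23000:23000
curl "http://127.0.0.1:23000/cache?nodeName=c1-172-16-0-2"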
