Description
What happened:
On creating a PVC referring to a storageclass for automatic volume provisioning, the PVC goes into a "Pending" state. On describing the PVC, the below errors can be seen:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 12s (x4 over 44s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "nfs.csi.k8s.io" or manually created by system administrator
Warning ProvisioningFailed 10s (x3 over 34s) nfs.csi.k8s.io_<node name>_57695e32-d5e0-4cda-ab84-16bcc6b0d1cf failed to provision volume with StorageClass "new-test": rpc error: code = Internal desc = failed to mount nfs server: rpc error: code = Internal desc = mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs <NFS Server IP>:<path>/new-test /tmp/pvc-cedac322-feaf-40ba-9432-028c2727dbce
Output: /usr/sbin/start-statd: 10: cannot create /run/rpc.statd.lock: Read-only file system
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: mounting <NFS Server IP>:<path>/new-test failed, reason given by server: No such file or directory
Normal Provisioning 6s (x4 over 44s) nfs.csi.k8s.io_<node name>_57695e32-d5e0-4cda-ab84-16bcc6b0d1cf External provisioner is provisioning volume for claim "default/new-test"
As per the error, the mount process tries to create a lock file (/run/rpc.statd.lock) in the root filesystem, which fails because the root filesystem is read-only.
The below errors can be seen in the nfs container logs as well:
- Container name: nfs, part of "csi-nfs-controller" deployment:
Mounting command: mount
Mounting arguments: -t nfs <NFS Server IP>:<path>/new-test /tmp/pvc-cedac322-feaf-40ba-9432-028c2727dbce
Output: /usr/sbin/start-statd: 10: cannot create /run/rpc.statd.lock: Read-only file system
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: mounting <NFS Server IP>:<path>/new-test failed, reason given by server: No such file or directory
- Container name: nfs, part of "csi-nfs-node" daemonset:
I0521 17:45:58.526481 1 mount_linux.go:274] Cannot create temp dir to detect safe 'not mounted' behavior: mkdir /tmp/kubelet-detect-safe-umount959869345: read-only file system
What you expected to happen:
The PV should be provisioned and the PVC should become Bound, without any mount errors.
How to reproduce it:
Already described above.
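For reference, a minimal sketch of the kind of StorageClass and PVC that hit this (the server address, share path, and names are placeholders, not the exact manifests used; "server" and "share" are the standard nfs.csi.k8s.io StorageClass parameters):

```yaml
# Hypothetical minimal manifests; substitute your NFS export details.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: new-test
provisioner: nfs.csi.k8s.io
parameters:
  server: <NFS Server IP>   # placeholder
  share: <path>             # placeholder
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: new-test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: new-test
  resources:
    requests:
      storage: 1Gi
```

Creating the PVC triggers dynamic provisioning and the "Pending" state described above.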
Anything else we need to know?:
To fix this, I had to make the same change in the templates for both the csi-nfs-controller deployment and the csi-nfs-node daemonset:
- templates/csi-nfs-controller.yaml: for the "nfs" container, changed "readOnlyRootFilesystem: true" to "readOnlyRootFilesystem: false" under "securityContext".
- templates/csi-nfs-node.yaml: for the "nfs" container, changed "readOnlyRootFilesystem: true" to "readOnlyRootFilesystem: false" under "securityContext".
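Concretely, the relevant fragment in both templates looks like this after the change (only the securityContext lines are shown, not the full container spec):

```yaml
# "nfs" container in templates/csi-nfs-controller.yaml
# and templates/csi-nfs-node.yaml
securityContext:
  readOnlyRootFilesystem: false  # was: true; lets rpc.statd create /run/rpc.statd.lock
```

With a writable root filesystem, start-statd can create its lock file and the NFS mount (and therefore provisioning) succeeds.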
Environment:
- CSI Driver version: v4.7.0
- Kubernetes version (use kubectl version): v1.27.11
- OS (e.g. from /etc/os-release): Ubuntu 20.04.3 LTS
- Kernel (e.g. uname -a): 5.4.0-77-generic
- Install tools: NA
- Others: NA