add support for Ephemeral Inline Volumes #153
Conversation
Hi @boddumanohar. Thanks for your PR. I'm waiting for a kubernetes-csi member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Pull Request Test Coverage Report for Build 548351164
💛 - Coveralls
tested using the above yaml.
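The manifest itself did not survive extraction; purely as an illustration, a pod using a CSI ephemeral inline volume with this driver would look roughly like the following sketch (the `server` and `share` values are placeholders, not the author's actual test values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-inline-nfs
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: nfs-volume
          mountPath: /mnt/nfs
  volumes:
    - name: nfs-volume
      csi:
        # inline CSI volume: defined directly in the pod spec, no PVC/PV objects
        driver: nfs.csi.k8s.io
        volumeAttributes:
          server: nfs-server.default.svc.cluster.local   # placeholder NFS server
          share: /                                       # placeholder exported path
```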
/ok-to-test
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: andyzhangx, boddumanohar. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/test pull-csi-driver-nfs-e2e
@andyzhangx in the above e2e failure, the log says the failure is at this line:
this doesn't seem to be caused by any issue in this PR. It looks like it's trying to reach an endpoint and failing.
/test pull-csi-driver-nfs-e2e
```diff
@@ -7,4 +7,5 @@ spec:
   attachRequired: false
   volumeLifecycleModes:
     - Persistent
+    - Ephemeral
```
pls add this config into https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/charts/latest/csi-driver-nfs/templates/csi-nfs-driverinfo.yaml, and then update the tgz file by `helm package charts/latest/csi-driver-nfs -d charts/latest`, thanks.
force-pushed from 4fc7a08 to 3cacbb2
@boddumanohar: The following test failed, say `/retest` to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
even after updating the helm package, it's still failing.
I would suggest that instead of supporting this feature, we support the generic ephemeral volumes feature. kubernetes/enhancements#1698
It has better security and feature capabilities since it still uses PVCs. Ephemeral inline volumes weren't really designed for general-purpose storage solutions.
@msau42 I see comments in the above link:
The ephemeral inline volumes feature requires that your driver make changes in NodePublishVolume to process special fields and potentially implement steps like provision/delete inside the NodePublish call. It also does not have a good security model because it would let users specify any NFS server they like, instead of letting the admin control the servers through PV and StorageClass. I would rather wait for generic ephemeral volumes to go beta in Kubernetes 1.21 and not have to support this special mode along with its weaker security model.
@msau42 I am thinking we only implement static provisioning (use an existing NFS share); that's a parity feature with the in-tree NFS driver. We don't need to provision/delete inside the NodePublish call since it's just static provisioning; it looks like it's only a config change like the following (see the sketch below):
I agree provision/delete inside the NodePublish call is not a good idea, it could introduce a potential security issue, but we still want to use static provisioning for the ephemeral inline volumes feature, is that ok?
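The config change referenced above is not shown in the thread; based on the diff earlier in this PR, it is presumably just the CSIDriver object gaining the Ephemeral lifecycle mode. A minimal sketch, assuming the `storage.k8s.io/v1` API:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: nfs.csi.k8s.io
spec:
  attachRequired: false
  volumeLifecycleModes:
    - Persistent
    - Ephemeral   # allows the driver to be used for CSI ephemeral inline volumes
```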
Even without dynamic provisioning, there isn't a way for admins to control which users get access to which nfs servers. Users can specify any nfs server to connect to in the Pod spec, whereas with the PVC model, it is controlled by the PV object which needs admin privileges.
@msau42 I think that depends on the users, if they use
For example/demo purposes, is there a challenge to providing a manifest that includes pod, pvc and pv?
no, while inline volume is quite straightforward, I think that's also a question for other CSI drivers whether to support it.
@boddumanohar: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
We had a discussion about the future of this in sig-storage, and concluded that this feature is more tailored towards specialized CSI drivers (not general-purpose storage) such as secrets-store, cert-manager, etc., which have a different set of requirements. We updated our documentation to reflect some of those reasons, although I notice now that security is missing, which I think is also a very important difference to note. @mattcary would you be able to update the page?
Yes, I'll update the documentation. FWIW I agree w/ @msau42 about the security problems. It definitely seems odd to use ephemeral storage to access a persistent, shared volume; I'm not sure I see a use case for it.
closing this in favor of
Well, there is a use case for CSI inline volume support: in the Azure File CSI driver, we leverage (k8s secret, pod.Namespace) to mount an Azure file share, and a secret is a namespace-scoped object, so pods in different namespaces can mount inline volumes with the same secret name, each in its own namespace.
cc @Jiawei0227
Thanks for this use case! The issue of StorageClasses being cluster-scoped objects recently came up in a different context; the customer was working around it by using resource quotas, but that's awkward. Using CSI inline volumes to provide namespace-scoped storage is an interesting idea, and maybe it's useful more generally?
Regarding the secrets issue, here are the use cases as I understand them:
Each of these use cases can be achieved with one StorageClass. Does this work for you?
@msau42 thanks for sharing, that's quite useful. There is another customer scenario: the storage class is managed by the admin; in some companies, end users do not have permission to set up a storage class, they can only set up pods and secrets in their namespace, so the inline volume functionality is still useful in that scenario.
Would the templated storage class not work in this case? The cluster admin creates the storage class, which uses the template to refer to the secret info in the end-user-created PVC.
@mattcary you mean the following config? Where is the secret specified? The admin is not aware of the secret, and with inline volume support the user could create the secret and set it in the pod spec by themselves.

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: some-pod
spec:
  containers:
    ...
  volumes:
    - name: scratch-volume
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              type: my-frontend-volume
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: "scratch-storage-class"
            resources:
              requests:
                storage: 1Gi
```

versus the inline volume spec, where the user supplies the secret directly:

```yaml
  volumes:
    - name: persistent-storage
      csi:
        driver: file.csi.azure.com
        volumeAttributes:
          shareName: EXISTING_SHARE_NAME
          secretName: azure-secret
```
See the link Michelle provided: https://kubernetes-csi.github.io/docs/secrets-and-credentials-storage-class.html#per-volume-secrets. It plumbs fields from the PVC:
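The example itself was lost in extraction; a minimal sketch of the templating that linked page describes, with hypothetical class and annotation names, might look like:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-templated      # hypothetical name
provisioner: file.csi.azure.com
parameters:
  # the secret name is resolved per-volume from an annotation on the user's PVC,
  # and the secret is looked up in the PVC's own namespace
  csi.storage.k8s.io/node-publish-secret-name: ${pvc.annotations['team.example.com/key']}
  csi.storage.k8s.io/node-publish-secret-namespace: ${pvc.namespace}
```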
This is intentional. Admin can use quota to allow certain namespaces to use certain storage classes, usually with a custom admission plugin / webhook that sets up the quota on namespace creation, based on some policy. If you think that this is not enough, please suggest a Kubernetes-wide enhancement. Using ephemeral inline volumes breaks pod portability (it hardcodes storage details into the pods) and it needs explicit support in every CSI driver.
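As an illustration of the quota mechanism mentioned above (a sketch with assumed class and namespace names): a ResourceQuota can cap PVC counts per storage class, so setting a class's count to 0 effectively blocks it in that namespace:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-class-policy
  namespace: team-a          # assumed namespace
spec:
  hard:
    # allow up to 10 PVCs from the blessed class...
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "10"
    # ...and none from the other class (assumed names)
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0"
```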
We have some NFS servers we currently mount using the built-in pod inline NFS support, which we control carefully via a validating webhook and PSPs. We mount shared NFS and ensure the pods' uid/gid/sudo etc. permissions are all properly aligned to allow the mount to be performed safely, and restrict it to a set of blessed servers. So ephemeral NFS mounts can be made safe, and they are useful to folks, especially in the research community.
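For reference, the built-in inline NFS support mentioned here is the in-tree `nfs` pod volume source; the webhook/PSP controls described would sit on top of a spec like this (server and path are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: research-job
spec:
  containers:
    - name: worker
      image: busybox
      volumeMounts:
        - name: shared-data
          mountPath: /mnt/data
  volumes:
    - name: shared-data
      nfs:                                 # in-tree inline NFS volume source
        server: nfs.blessed.example.com    # placeholder; restricted via webhook
        path: /export/research
        readOnly: true
```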
build.make: fix image publishing
What type of PR is this?
/kind feature
What this PR does / why we need it:
This PR adds support for Ephemeral Inline Volumes.
Which issue(s) this PR fixes:
Fixes #148
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
NOTE: Minimum required Kubernetes version is 1.16