chore: update kube dependencies to v1.24.0 #332


Closed
wants to merge 1 commit

Conversation

@humblec (Contributor) commented May 6, 2022

Signed-off-by: Humble Chirammal [email protected]

/kind cleanup


Kube dependencies are updated to v1.24.0. 

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels May 6, 2022
@k8s-ci-robot k8s-ci-robot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label May 6, 2022
@coveralls commented May 6, 2022

Pull Request Test Coverage Report for Build 2468367597

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 76.575%

Totals (Coverage Status):
  • Change from base Build 2462377990: 0.0%
  • Covered Lines: 559
  • Relevant Lines: 730

💛 - Coveralls

@humblec (Contributor, Author) commented May 6, 2022

/test pull-csi-driver-nfs-e2e

@andyzhangx (Member) commented:

Seems the e2e tests need to be updated for these k8s lib changes:

 error waiting for deployment "nfs-volume-tester-fr9zl" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.May, 6, 15, 13, 14, 0, time.Local), LastTransitionTime:time.Date(2022, time.May, 6, 15, 13, 14, 0, time.Local), Reason:"NewReplicaSetCreated", Message:"Created new replica set \"nfs-volume-tester-fr9zl-947674b7\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.May, 6, 15, 13, 14, 0, time.Local), LastTransitionTime:time.Date(2022, time.May, 6, 15, 13, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:time.Date(2022, time.May, 6, 15, 13, 14, 0, time.Local), LastTransitionTime:time.Date(2022, time.May, 6, 15, 13, 14, 0, time.Local), Reason:"FailedCreate", Message:"pods \"nfs-volume-tester-fr9zl-947674b7-kl59j\" is forbidden: violates PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (container \"volume-tester\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container \"volume-tester\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or container \"volume-tester\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container \"volume-tester\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")"}}, CollisionCount:(*int32)(nil)}
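
For reference, a minimal Go sketch (using k8s.io/api/core/v1 types, since the e2e tests are written in Go) of a securityContext that satisfies the "restricted" PodSecurity profile named in the FailedCreate message above. The package name is hypothetical and this is not the driver's actual e2e code:

package e2e // hypothetical package name

import corev1 "k8s.io/api/core/v1"

// restrictedSecurityContext returns the four settings the "restricted"
// PodSecurity profile demands, per the FailedCreate message above.
func restrictedSecurityContext() *corev1.SecurityContext {
    runAsNonRoot := true
    allowPrivilegeEscalation := false
    return &corev1.SecurityContext{
        AllowPrivilegeEscalation: &allowPrivilegeEscalation, // must be false
        RunAsNonRoot:             &runAsNonRoot,             // must be true
        Capabilities: &corev1.Capabilities{
            Drop: []corev1.Capability{"ALL"}, // unrestricted capabilities are rejected
        },
        SeccompProfile: &corev1.SeccompProfile{
            Type: corev1.SeccompProfileTypeRuntimeDefault, // or Localhost
        },
    }
}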

@humblec (Contributor, Author) commented May 9, 2022

> Seems the e2e tests need to be updated for these k8s lib changes: [error log quoted above]

Sure, will correct that. It's due to the PodSecurity feature gate, AFAICT.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 17, 2022
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 31, 2022
@humblec (Contributor, Author) commented May 31, 2022

So it looks like, for e2e to pass, either the PodSecurity feature gate has to be disabled or we need to label the e2e namespace:

kubectl label --overwrite ns <E2E namespace> \
  pod-security.kubernetes.io/enforce=privileged \
  pod-security.kubernetes.io/warn=baseline \
  pod-security.kubernetes.io/audit=baseline
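
A minimal client-go sketch of applying the same labels from Go test setup code; the function and package names are hypothetical, not the driver's actual e2e helpers:

package e2e // hypothetical package name

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// relaxPodSecurity applies the Pod Security Admission labels above so
// that privileged test pods are admitted in the given namespace.
func relaxPodSecurity(ctx context.Context, client kubernetes.Interface, namespace string) error {
    ns, err := client.CoreV1().Namespaces().Get(ctx, namespace, metav1.GetOptions{})
    if err != nil {
        return err
    }
    if ns.Labels == nil {
        ns.Labels = map[string]string{}
    }
    ns.Labels["pod-security.kubernetes.io/enforce"] = "privileged"
    ns.Labels["pod-security.kubernetes.io/warn"] = "baseline"
    ns.Labels["pod-security.kubernetes.io/audit"] = "baseline"
    _, err = client.CoreV1().Namespaces().Update(ctx, ns, metav1.UpdateOptions{})
    return err
}

Pod Security Admission is namespace-scoped, so this relaxes enforcement only for the test namespace.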

@andyzhangx any thoughts here?

@humblec humblec force-pushed the kube-1.24 branch 4 times, most recently from 7a35140 to 2fbb0fe on June 9, 2022 12:35
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Aug 5, 2022
@humblec humblec force-pushed the kube-1.24 branch 4 times, most recently from df82a7a to 2234c95 on September 22, 2022 15:34
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 22, 2022
Signed-off-by: Humble Chirammal <[email protected]>
@k8s-ci-robot (Contributor) commented:

@humblec: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name                           Commit   Required  Rerun command
pull-csi-driver-nfs-unit            0fc5f2b  true      /test pull-csi-driver-nfs-unit
pull-csi-driver-nfs-integration     0fc5f2b  true      /test pull-csi-driver-nfs-integration
pull-csi-driver-nfs-sanity          0fc5f2b  true      /test pull-csi-driver-nfs-sanity
pull-kubernetes-csi-csi-driver-nfs  0fc5f2b  true      /test pull-kubernetes-csi-csi-driver-nfs
pull-csi-driver-nfs-external-e2e    0fc5f2b  true      /test pull-csi-driver-nfs-external-e2e
pull-csi-driver-nfs-e2e             0fc5f2b  true      /test pull-csi-driver-nfs-e2e

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 18, 2022
@k8s-ci-robot (Contributor) commented:

@humblec: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ggriffiths (Contributor) left a comment:

Looks like this needs a rebase and some test fixes. Feel free to re-request review when it's ready.

@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: humblec
Once this PR has been reviewed and has the lgtm label, please assign saad-ali for approval by writing /assign @saad-ali in a comment. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-triage-robot commented:

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 22, 2023
@k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

1 similar comment
@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 21, 2023
@k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor) commented:

@k8s-triage-robot: Closed this PR.

In response to this:

> [the /close comment quoted above]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Labels
  • cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.)
  • lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)
  • needs-rebase (Indicates a PR cannot be merged because it has merge conflicts with HEAD.)
  • release-note (Denotes a PR that will be considered when it comes time to generate release notes.)
  • size/XXL (Denotes a PR that changes 1000+ lines, ignoring generated files.)
7 participants