2 - Kubernetes Release Cycle
Targeting enhancements, Issues and PRs to Release Milestones
This document is focused on Kubernetes developers and contributors who need to
create an enhancement, issue, or pull request which targets a specific release
milestone.
The process for shepherding enhancements, issues, and pull requests into a
Kubernetes release spans multiple stakeholders:
- the enhancement, issue, and pull request owner(s)
- SIG leadership
- the Release Team
Information on workflows and interactions is described below.
As the owner of an enhancement, issue, or pull request (PR), it is your
responsibility to ensure release milestone requirements are met. Automation and
the Release Team will be in contact with you if updates are required, but
inaction can result in your work being removed from the milestone. Additional
requirements exist when the target milestone is a prior release (see
cherry pick process for more information).
TL;DR
If you want your PR to get merged, it needs the following required labels and
milestones, represented here by the Prow /commands it would take to add them:
Normal Dev (Weeks 1-8)
- /sig {name}
- /kind {type}
- /lgtm
- /approved
Code Freeze (Weeks 9-10)
- /milestone {v1.y}
- /sig {name}
- /kind {bug, failing-test}
- /lgtm
- /approved
Post-Release (Weeks 11+)
Return to 'Normal Dev' phase requirements:
- /sig {name}
- /kind {type}
- /lgtm
- /approved
Merges into the 1.y branch are now via cherry picks, approved
by Release Managers.
In the past, milestone-targeted pull requests were required to have an
associated GitHub issue, but this is no longer the case.
Features or enhancements are effectively GitHub issues or KEPs which
lead to subsequent PRs.
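The phase requirements in the TL;DR above can be sketched as a small predicate. This is purely illustrative: the label names follow Prow conventions (sig/*, kind/*, lgtm, approved), but merge_ready and its phase table are hypothetical helpers, not actual Prow/Tide configuration.

```python
# Hypothetical sketch of the TL;DR label rules -- not real Prow/Tide config.
REQUIRED = {
    # Normal Dev: sig, kind, lgtm, approved; no milestone needed
    "normal-dev": {"needs_milestone": False, "kinds": None},
    # Code Freeze: additionally needs /milestone, and kind restricted
    # to bug or failing-test
    "code-freeze": {"needs_milestone": True,
                    "kinds": {"kind/bug", "kind/failing-test"}},
}

def merge_ready(phase, labels, milestone):
    """Return True if a PR carries the labels the TL;DR requires for `phase`."""
    rules = REQUIRED[phase]
    if not any(l.startswith("sig/") for l in labels):
        return False
    kinds = {l for l in labels if l.startswith("kind/")}
    if not kinds:
        return False
    if rules["kinds"] is not None and not kinds & rules["kinds"]:
        return False
    if not {"lgtm", "approved"} <= labels:
        return False
    if rules["needs_milestone"] and milestone is None:
        return False
    return True

# A bugfix PR during Code Freeze with a milestone set is mergeable:
print(merge_ready("code-freeze",
                  {"sig/storage", "kind/bug", "lgtm", "approved"},
                  "v1.23"))   # True
# The same PR without a milestone is not:
print(merge_ready("code-freeze",
                  {"sig/storage", "kind/bug", "lgtm", "approved"},
                  None))      # False
```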
The general labeling process should be consistent across artifact types.
Definitions
- issue owners: creator, assignees, and the user who moved the issue into a
  release milestone
- Release Team: each Kubernetes release has a team doing the project management
  tasks described here. The contact info for the team associated with any given
  release can be found here.
- Y days: refers to business days
- enhancement: see "Is My Thing an Enhancement?"
- Enhancements Freeze: the deadline by which KEPs have to be completed in order
  for enhancements to be part of the current release
- Exception Request: the process of requesting an extension on the deadline for
  a particular enhancement
- Code Freeze: the period of ~4 weeks before the final release date, during
  which only critical bug fixes are merged into the release
- Pruning: the process of removing an enhancement from a release milestone if
  it is not fully implemented or is otherwise considered not stable
- release milestone: semantic version string or GitHub milestone referring to
  a release MAJOR.MINOR vX.Y version. See also release versioning.
- release branch: Git branch release-X.Y created for the vX.Y milestone.
  Created at the time of the vX.Y-rc.0 release and maintained after the
  release for approximately 12 months with vX.Y.Z patch releases.

Note: releases 1.19 and newer receive 1 year of patch release support, and
releases 1.18 and earlier received 9 months of patch release support.
The Release Cycle
Kubernetes releases currently happen approximately four times per year.
The release process can be thought of as having three main phases:
- Enhancement Definition
- Implementation
- Stabilization
But in reality, this is an open source and agile project, with feature planning
and implementation happening at all times. Given the project scale and globally
distributed developer base, it is critical to project velocity not to rely on a
trailing stabilization phase. Instead, continuous integration testing ensures
the project is always stable, so that individual commits can be flagged as
having broken something.
With ongoing feature definition through the year, some set of items will bubble
up as targeting a given release. Enhancements Freeze
starts ~4 weeks into release cycle. By this point all intended feature work for
the given release has been defined in suitable planning artifacts in
conjunction with the Release Team's Enhancements Lead.
After Enhancements Freeze, tracking milestones on PRs and issues is important.
Items within the milestone are used as a punchdown list to complete the
release. On issues, milestones must be applied correctly, via triage by the
SIG, so that the Release Team can track bugs and enhancements (any
enhancement-related issue needs a milestone).
There is some automation in place to help automatically assign milestones to
PRs.
This automation currently applies to the following repos:
- kubernetes/enhancements
- kubernetes/kubernetes
- kubernetes/release
- kubernetes/sig-release
- kubernetes/test-infra
At creation time, PRs against the master branch need humans to hint at which
milestone they might want the PR to target. Once merged, PRs against the
master branch have milestones auto-applied, so from that time onward human
management of that PR's milestone is less necessary. On PRs against release
branches, milestones are auto-applied when the PR is created, so no human
management of the milestone is ever necessary.
Any other effort that should be tracked by the Release Team that doesn't fall
under that automation umbrella should have a milestone applied.
Implementation and bug fixing is ongoing across the cycle, but culminates in a
code freeze period.
Code Freeze starts in week ~10 and continues for ~2 weeks.
Only critical bug fixes are accepted into the release codebase during this
time.
There are approximately two weeks following Code Freeze, and preceding release,
during which all remaining critical issues must be resolved before release.
This also gives time for documentation finalization.
When the code base is sufficiently stable, the master branch re-opens for
general development and work begins there for the next release milestone. Any
remaining modifications for the current release are cherry picked from master
back to the release branch. The release is built from the release branch.
Each release is part of a broader Kubernetes lifecycle.
Removal Of Items From The Milestone
Before getting too far into the process for adding an item to the milestone,
please note:
Members of the Release Team may remove issues from the
milestone if they or the responsible SIG determine that the issue is not
actually blocking the release and is unlikely to be resolved in a timely
fashion.
Members of the Release Team may remove PRs from the milestone for any of the
following, or similar, reasons:
- PR is potentially de-stabilizing and is not needed to resolve a blocking
issue
- PR is a new, late feature PR and has not gone through the enhancements
process or the exception process
- There is no responsible SIG willing to take ownership of the PR and resolve
any follow-up issues with it
- PR is not correctly labelled
- Work has visibly halted on the PR and delivery dates are uncertain or late
While members of the Release Team will help with labelling and contacting
SIG(s), it is the responsibility of the submitter to categorize PRs, and to
secure support from the relevant SIG to guarantee that any breakage caused by
the PR will be rapidly resolved.
Where additional action is required, the Release Team will attempt
human-to-human escalation through the following channels:
- Comment in GitHub mentioning the SIG team and SIG members as appropriate for
the issue type
- Emailing the SIG mailing list
- bootstrapped with group email addresses from the
community sig list
- optionally also directly addressing SIG leadership or other SIG members
- Messaging the SIG's Slack channel
- bootstrapped with the Slack channel and SIG leadership from the
community sig list
- optionally directly "@" mentioning SIG leadership or others by handle
Adding An Item To The Milestone
Milestone Maintainers
The members of the milestone-maintainers
GitHub team are entrusted with the responsibility of specifying the release
milestone on GitHub artifacts.
This group is maintained
by SIG Release and has representation from the various SIGs' leadership.
Feature additions
Feature planning and definition takes many forms today, but a typical example
might be a large piece of work described in a KEP, with associated task
issues in GitHub. When the plan has reached an implementable state and work is
underway, the enhancement or parts thereof are targeted for an upcoming milestone
by creating GitHub issues and marking them with the Prow "/milestone" command.
For the first ~4 weeks into the release cycle, the Release Team's Enhancements
Lead will interact with SIGs and feature owners via GitHub, Slack, and SIG
meetings to capture all required planning artifacts.
If you have an enhancement to target for an upcoming release milestone, begin a
conversation with your SIG leadership and with that release's Enhancements
Lead.
Issue additions
Issues are marked as targeting a milestone via the Prow "/milestone" command.
The Release Team's Bug Triage Lead
and overall community watch incoming issues and triage them, as described in
the contributor guide section on
issue triage.
Marking issues with the milestone provides the community better visibility
regarding when an issue was observed and by when the community feels it must be
resolved. During Code Freeze, a milestone must be set to merge
a PR.
An open issue is no longer required for a PR, but open issues and associated
PRs should have synchronized labels. For example a high priority bug issue
might not have its associated PR merged if the PR is only marked as lower
priority.
PR Additions
PRs are marked as targeting a milestone via the Prow "/milestone" command.
This is a blocking requirement during Code Freeze as described above.
Other Required Labels
Here is the list of labels and their use and purpose.
SIG Owner Label
The SIG owner label defines the SIG to which we escalate if a milestone issue
is languishing or needs additional attention. If there are no updates after
escalation, the issue may be automatically removed from the milestone.
These are added with the Prow "/sig" command. For example, to add the label
indicating SIG Storage is responsible, comment with /sig storage.
Priority Label
Priority labels are used to determine an escalation path before moving issues
out of the release milestone. They are also used to determine whether or not a
release should be blocked on the resolution of the issue.
priority/critical-urgent: Never automatically move out of a release milestone;
continually escalate to contributor and SIG through all available channels.
- considered a release blocking issue
- requires daily updates from issue owners during Code Freeze
- would require a patch release if left undiscovered until after the minor
  release

priority/important-soon: Escalate to the issue owners and SIG owner; move out
of milestone after several unsuccessful escalation attempts.
- not considered a release blocking issue
- would not require a patch release
- will automatically be moved out of the release milestone at Code Freeze
  after a 4 day grace period

priority/important-longterm: Escalate to the issue owners; move out of the
milestone after 1 attempt.
- even less urgent / critical than priority/important-soon
- moved out of milestone more aggressively than priority/important-soon
Issue/PR Kind Label
The issue kind is used to help identify the types of changes going into the
release over time. This may allow the Release Team to develop a better
understanding of what sorts of issues we would miss with a faster release
cadence.
For release targeted issues, including pull requests, one of the following
issue kind labels must be set:
- kind/api-change: Adds, removes, or changes an API
- kind/bug: Fixes a newly discovered bug
- kind/cleanup: Adding tests, refactoring, fixing old bugs
- kind/design: Related to design
- kind/documentation: Adds documentation
- kind/failing-test: CI test case is failing consistently
- kind/feature: New functionality
- kind/flake: CI test case is showing intermittent failures
3 - Patch Releases
Schedule and team contact information for Kubernetes patch releases.
For general information about Kubernetes release cycle, see the
release process description.
Cadence
Our typical patch release cadence is monthly. It is
commonly a bit faster (1 to 2 weeks) for the earliest patch releases
after a 1.X minor release. Critical bug fixes may cause a more
immediate release outside of the normal cadence. We also aim to not make
releases during major holiday periods.
See the Release Managers page for full contact details on the Patch Release Team.
Please give us a business day to respond - we may be in a different timezone!
In between releases the team is looking at incoming cherry pick
requests on a weekly basis. The team will get in touch with
submitters via GitHub PR, SIG channels in Slack, and direct messages
in Slack and email
if there are questions on the PR.
Cherry picks
Please follow the cherry pick process.
Cherry picks must be merge-ready in GitHub with proper labels (e.g., approved,
lgtm, release-note) and passing CI tests ahead of the cherry pick deadline.
This is typically two days before the target release, but may be more. Earlier
PR readiness is better, as we need time to get CI signal after merging your
cherry picks ahead of the actual release.
Cherry pick PRs which miss merge criteria will be carried over and tracked
for the next patch release.
Support Period
In accordance with the yearly support KEP, the Kubernetes
Community will support active patch release series for a period of roughly
fourteen (14) months.
The first twelve months of this timeframe will be considered the standard
period.
Towards the end of the twelve months, the following will happen:
- Release Managers will cut a release
- The patch release series will enter maintenance mode
During the two-month maintenance mode period, Release Managers may cut
additional maintenance releases to resolve:
- CVEs (under the advisement of the Security Response Committee)
- dependency issues (including base image updates)
- critical core component issues
At the end of the two-month maintenance mode period, the patch release series
will be considered EOL (end of life) and cherry picks to the associated branch
are to be closed soon afterwards.
Note that the 28th of the month was chosen for maintenance mode and EOL target
dates for simplicity (every month has a 28th).
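Under the rule above, the maintenance-mode and EOL dates can be computed from a minor release date: add twelve and fourteen months respectively, pinned to the 28th. A small sketch (support_dates is a hypothetical helper; the 1.22 release date assumed below is 2021-08-04):

```python
from datetime import date

def support_dates(release_date):
    """Given a minor release date, return the (maintenance-mode, EOL) target
    dates: ~12 months of standard support, then ~2 months of maintenance
    mode, both pinned to the 28th of the month for simplicity."""
    def add_months(d, months):
        month_index = d.month - 1 + months
        return date(d.year + month_index // 12, month_index % 12 + 1, 28)
    return add_months(release_date, 12), add_months(release_date, 14)

# Kubernetes 1.22 was released on 2021-08-04; this reproduces the dates
# listed below: maintenance mode on 2022-08-28, EOL on 2022-10-28.
print(support_dates(date(2021, 8, 4)))
```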
Upcoming Monthly Releases
Timelines may vary with the severity of bug fixes, but for easier planning we
will target the following monthly release points. Unplanned, critical
releases may also occur in between these.
| Monthly Patch Release | Cherry Pick Deadline | Target Date |
| --------------------- | -------------------- | ----------- |
| December 2021 | 2021-12-10 | 2021-12-15 |
| January 2022 | 2022-01-14 | 2022-01-19 |
| February 2022 | 2022-02-11 | 2022-02-16 |
| March 2022 | 2022-03-11 | 2022-03-16 |
Detailed Release History for Active Branches
1.22
1.22 enters maintenance mode on 2022-08-28
End of Life for 1.22 is 2022-10-28
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
| ------------- | -------------------- | ----------- | ---- |
| 1.22.5 | 2021-12-10 | 2021-12-15 | |
| 1.22.4 | 2021-11-12 | 2021-11-17 | |
| 1.22.3 | 2021-10-22 | 2021-10-27 | |
| 1.22.2 | 2021-09-10 | 2021-09-15 | |
| 1.22.1 | 2021-08-16 | 2021-08-19 | |
1.21
1.21 enters maintenance mode on 2022-04-28
End of Life for 1.21 is 2022-06-28
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
| ------------- | -------------------- | ----------- | ---- |
| 1.21.8 | 2021-12-10 | 2021-12-15 | |
| 1.21.7 | 2021-11-12 | 2021-11-17 | |
| 1.21.6 | 2021-10-22 | 2021-10-27 | |
| 1.21.5 | 2021-09-10 | 2021-09-15 | |
| 1.21.4 | 2021-08-07 | 2021-08-11 | |
| 1.21.3 | 2021-07-10 | 2021-07-14 | |
| 1.21.2 | 2021-06-12 | 2021-06-16 | |
| 1.21.1 | 2021-05-07 | 2021-05-12 | Regression |
1.20
1.20 enters maintenance mode on 2021-12-28
End of Life for 1.20 is 2022-02-28
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
| ------------- | -------------------- | ----------- | ---- |
| 1.20.14 | 2021-12-10 | 2021-12-15 | |
| 1.20.13 | 2021-11-12 | 2021-11-17 | |
| 1.20.12 | 2021-10-22 | 2021-10-27 | |
| 1.20.11 | 2021-09-10 | 2021-09-15 | |
| 1.20.10 | 2021-08-07 | 2021-08-11 | |
| 1.20.9 | 2021-07-10 | 2021-07-14 | |
| 1.20.8 | 2021-06-12 | 2021-06-16 | |
| 1.20.7 | 2021-05-07 | 2021-05-12 | Regression |
| 1.20.6 | 2021-04-09 | 2021-04-14 | |
| 1.20.5 | 2021-03-12 | 2021-03-17 | |
| 1.20.4 | 2021-02-12 | 2021-02-18 | |
| 1.20.3 | 2021-02-12 | 2021-02-17 | Conformance Tests Issue |
| 1.20.2 | 2021-01-08 | 2021-01-13 | |
| 1.20.1 | 2020-12-11 | 2020-12-18 | Tagging Issue |
Non-Active Branch History
These releases are no longer supported.
| MINOR VERSION | FINAL PATCH RELEASE | EOL DATE | NOTE |
| ------------- | ------------------- | -------- | ---- |
| 1.19 | 1.19.16 | 2021-10-28 | |
| 1.18 | 1.18.20 | 2021-06-18 | Created to resolve regression introduced in 1.18.19 |
| 1.18 | 1.18.19 | 2021-05-12 | Regression |
| 1.17 | 1.17.17 | 2021-01-13 | |
| 1.16 | 1.16.15 | 2020-09-02 | |
| 1.15 | 1.15.12 | 2020-05-06 | |
| 1.14 | 1.14.10 | 2019-12-11 | |
| 1.13 | 1.13.12 | 2019-10-15 | |
| 1.12 | 1.12.10 | 2019-07-08 | |
| 1.11 | 1.11.10 | 2019-05-01 | |
| 1.10 | 1.10.13 | 2019-02-13 | |
| 1.9 | 1.9.11 | 2018-09-29 | |
| 1.8 | 1.8.15 | 2018-07-12 | |
| 1.7 | 1.7.16 | 2018-04-04 | |
| 1.6 | 1.6.13 | 2017-11-23 | |
| 1.5 | 1.5.8 | 2017-10-01 | |
| 1.4 | 1.4.12 | 2017-04-21 | |
| 1.3 | 1.3.10 | 2016-11-01 | |
| 1.2 | 1.2.7 | 2016-10-23 | |
4 - Release Managers
"Release Managers" is an umbrella term that encompasses the set of Kubernetes
contributors responsible for maintaining release branches, tagging releases,
and building/packaging Kubernetes.
The responsibilities of each role are described below.
Security Embargo Policy
Some information about releases is subject to embargo, and we have a defined policy about how those embargoes are set. Please refer to the Security Embargo Policy for more information.
Handbooks
NOTE: The Patch Release Team and Branch Manager handbooks will be de-duplicated at a later date.
Release Managers
Note: The documentation might refer to the Patch Release Team and the
Branch Management role. Those two roles were consolidated into the
Release Managers role.
Minimum requirements for Release Managers and Release Manager Associates are:
- Familiarity with basic Unix commands and able to debug shell scripts.
- Familiarity with branched source code workflows via git and associated git
  command-line invocations.
- General knowledge of Google Cloud (Cloud Build and Cloud Storage).
- Open to seeking help and communicating clearly.
- Kubernetes Community membership
Release Managers are responsible for:
- Coordinating and cutting Kubernetes releases:
- Maintaining the release branches:
- Reviewing cherry picks
- Ensuring the release branch stays healthy and that no unintended patch
gets merged
- Mentoring the Release Manager Associates group
- Actively developing features and maintaining the code in k/release
- Supporting Release Manager Associates and contributors through actively
participating in the Buddy program
- Check in monthly with Associates and delegate tasks, empower them to cut
releases, and mentor
- Being available to support Associates in onboarding new contributors e.g.,
answering questions and suggesting appropriate work for them to do
This team at times works in close conjunction with the
Security Response Committee and therefore should abide by the guidelines
set forth in the Security Release Process.
GitHub Access Controls: @kubernetes/release-managers
GitHub Mentions: @kubernetes/release-engineering
Becoming a Release Manager
To become a Release Manager, one must first serve as a Release Manager
Associate. Associates graduate to Release Manager by actively working on
releases over several cycles and:
- demonstrating the willingness to lead
- tag-teaming with Release Managers on patches, to eventually cut a release
independently
- because releases have a limiting function, we also consider substantial
contributions to image promotion and other core Release Engineering tasks
- questioning how Associates work, suggesting improvements, gathering feedback,
and driving change
- being reliable and responsive
- leaning into advanced work that requires Release Manager-level access and
privileges to complete
Release Manager Associates
Release Manager Associates are apprentices to the Release Managers, formerly
referred to as Release Manager shadows. They are responsible for:
- Patch release work, cherry pick review
- Contributing to k/release: updating dependencies and getting used to the
source codebase
- Contributing to the documentation: maintaining the handbooks, ensuring that
release processes are documented
- With help from a release manager: working with the Release Team during the
release cycle and cutting Kubernetes releases
- Seeking opportunities to help with prioritization and communication
- Sending out pre-announcements and updates about patch releases
- Updating the calendar, helping with the release dates and milestones from
the release cycle timeline
- Through the Buddy program, onboarding new contributors and pairing up with
them on tasks
GitHub Mentions: @kubernetes/release-engineering
Becoming a Release Manager Associate
Contributors can become Associates by demonstrating the following:
- consistent participation, including 6-12 months of active release
engineering-related work
- experience fulfilling a technical lead role on the Release Team during a
release cycle
- this experience provides a solid baseline for understanding how SIG Release
works overall—including our expectations regarding technical skills,
communications/responsiveness, and reliability
- working on k/release items that improve our interactions with Testgrid,
cleaning up libraries, etc.
- these efforts require interacting and pairing with Release Managers and
Associates
Build Admins
Build Admins are (currently) Google employees with the requisite access to
Google build systems/tooling to publish deb/rpm packages on behalf of the
Kubernetes project. They are responsible for:
- Building, signing, and publishing the deb/rpm packages
- Being the interlock with Release Managers (and Associates) on the final steps
of each minor (1.Y) and patch (1.Y.Z) release
GitHub team: @kubernetes/build-admins
SIG Release Leads
SIG Release Chairs and Technical Leads are responsible for:
- The governance of SIG Release
- Leading knowledge exchange sessions for Release Managers and Associates
- Coaching on leadership and prioritization
They are mentioned explicitly here as they are owners of the various
communications channels and permissions groups (GitHub teams, GCP access) for
each role. As such, they are highly privileged community members and privy to
some private communications, which can at times relate to Kubernetes security
disclosures.
GitHub team: @kubernetes/sig-release-leads
Chairs
Technical Leads
Past Branch Managers can be found in the releases directory of the
kubernetes/sig-release repository, within release-x.y/release_team.md.
Example: 1.15 Release Team
6 - Version Skew Policy
The maximum version skew supported between various Kubernetes components.
This document describes the maximum version skew supported between various Kubernetes components.
Specific cluster deployment tools may place additional restrictions on version skew.
Supported versions
Kubernetes versions are expressed as x.y.z, where x is the major version, y is the minor version, and z is the patch version, following Semantic Versioning terminology.
For more information, see Kubernetes Release Versioning.
The Kubernetes project maintains release branches for the most recent three minor releases (1.23, 1.22, 1.21). Kubernetes 1.19 and newer receive approximately 1 year of patch support. Kubernetes 1.18 and older received approximately 9 months of patch support.
Applicable fixes, including security fixes, may be backported to those three release branches, depending on severity and feasibility.
Patch releases are cut from those branches at a regular cadence, plus additional urgent releases, when required.
The Release Managers group owns this decision.
For more information, see the Kubernetes patch releases page.
Supported version skew
kube-apiserver
In highly-available (HA) clusters, the newest and oldest kube-apiserver
instances must be within one minor version.

Example:
- newest kube-apiserver is at 1.23
- other kube-apiserver instances are supported at 1.23 and 1.22
kubelet
kubelet must not be newer than kube-apiserver, and may be up to two minor
versions older.

Example:
- kube-apiserver is at 1.23
- kubelet is supported at 1.23, 1.22, and 1.21

Note: If version skew exists between kube-apiserver instances in an HA
cluster, this narrows the allowed kubelet versions.

Example:
- kube-apiserver instances are at 1.23 and 1.22
- kubelet is supported at 1.22 and 1.21 (1.23 is not supported because that
  would be newer than the kube-apiserver instance at version 1.22)
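The two kubelet rules above (not newer than any kube-apiserver, at most two minors older than the newest) can be sketched as a small helper. This is a hypothetical illustration using bare minor-version numbers, not an official support checker:

```python
# Sketch of the kubelet skew rule, assuming all versions share the same
# major version and are given as minor-version integers.

def supported_kubelet_minors(apiserver_versions):
    """Given the minor versions of all kube-apiserver instances, return the
    set of supported kubelet minor versions: not newer than the *oldest*
    apiserver, and at most two minors older than the *newest* apiserver."""
    newest, oldest = max(apiserver_versions), min(apiserver_versions)
    return set(range(newest - 2, oldest + 1))

# Reproduces the examples above:
print(supported_kubelet_minors({23}))      # {21, 22, 23}
print(supported_kubelet_minors({23, 22}))  # {21, 22}
```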
kube-controller-manager, kube-scheduler, and cloud-controller-manager
kube-controller-manager, kube-scheduler, and cloud-controller-manager must not
be newer than the kube-apiserver instances they communicate with. They are
expected to match the kube-apiserver minor version, but may be up to one minor
version older (to allow live upgrades).

Example:
- kube-apiserver is at 1.23
- kube-controller-manager, kube-scheduler, and cloud-controller-manager are
  supported at 1.23 and 1.22

Note: If version skew exists between kube-apiserver instances in an HA
cluster, and these components can communicate with any kube-apiserver instance
in the cluster (for example, via a load balancer), this narrows the allowed
versions of these components.

Example:
- kube-apiserver instances are at 1.23 and 1.22
- kube-controller-manager, kube-scheduler, and cloud-controller-manager
  communicate with a load balancer that can route to any kube-apiserver
  instance
- kube-controller-manager, kube-scheduler, and cloud-controller-manager are
  supported at 1.22 (1.23 is not supported because that would be newer than
  the kube-apiserver instance at version 1.22)
kubectl
kubectl is supported within one minor version (older or newer) of
kube-apiserver.

Example:
- kube-apiserver is at 1.23
- kubectl is supported at 1.24, 1.23, and 1.22

Note: If version skew exists between kube-apiserver instances in an HA
cluster, this narrows the supported kubectl versions.

Example:
- kube-apiserver instances are at 1.23 and 1.22
- kubectl is supported at 1.23 and 1.22 (other versions would be more than one
  minor version skewed from one of the kube-apiserver components)
Supported component upgrade order
The supported version skew between components has implications on the order in which components must be upgraded.
This section describes the order in which components must be upgraded to transition an existing cluster from version 1.22 to version 1.23.
kube-apiserver
Pre-requisites:
- In a single-instance cluster, the existing kube-apiserver instance is 1.22
- In an HA cluster, all kube-apiserver instances are at 1.22 or 1.23 (this
  ensures maximum skew of 1 minor version between the oldest and newest
  kube-apiserver instance)
- The kube-controller-manager, kube-scheduler, and cloud-controller-manager
  instances that communicate with this server are at version 1.22 (this
  ensures they are not newer than the existing API server version, and are
  within 1 minor version of the new API server version)
- kubelet instances on all nodes are at version 1.22 or 1.21 (this ensures
  they are not newer than the existing API server version, and are within 2
  minor versions of the new API server version)
- Registered admission webhooks are able to handle the data the new
  kube-apiserver instance will send them:
  - ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects
    are updated to include any new versions of REST resources added in 1.23
    (or use the matchPolicy: Equivalent option available in v1.15+)
  - The webhooks are able to handle any new versions of REST resources that
    will be sent to them, and any new fields added to existing versions in
    1.23

Upgrade kube-apiserver to 1.23
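As a sketch, the prerequisites above can be expressed as a pre-flight check over minor versions. apiserver_upgrade_ok is a hypothetical helper for illustration, not part of any real upgrade tooling, and it omits the admission-webhook checks, which are not a pure version comparison:

```python
# Sketch of the kube-apiserver upgrade pre-flight version checks, using
# bare minor-version integers (e.g. 23 means 1.23).

def apiserver_upgrade_ok(target, apiservers, controllers, kubelets):
    """Check the version prerequisites for upgrading kube-apiserver to
    `target`. Each argument is a collection of minor versions currently
    running in the cluster."""
    old = target - 1
    return all([
        # all apiservers at the old or target minor (max skew of 1)
        all(v in (old, target) for v in apiservers),
        # kcm / kube-scheduler / ccm exactly at the old minor
        all(v == old for v in controllers),
        # kubelets not newer than the old minor, within 2 of the target
        all(target - 2 <= v <= old for v in kubelets),
    ])

print(apiserver_upgrade_ok(23, apiservers=[22, 22], controllers=[22],
                           kubelets=[22, 21]))  # True
print(apiserver_upgrade_ok(23, apiservers=[22], controllers=[22],
                           kubelets=[20]))      # False: kubelet too old
```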
kube-controller-manager, kube-scheduler, and cloud-controller-manager
Pre-requisites:
- The kube-apiserver instances these components communicate with are at 1.23
  (in HA clusters in which these control plane components can communicate
  with any kube-apiserver instance in the cluster, all kube-apiserver
  instances must be upgraded before upgrading these components)

Upgrade kube-controller-manager, kube-scheduler, and cloud-controller-manager
to 1.23
kubelet
Pre-requisites:
- The kube-apiserver instances the kubelet communicates with are at 1.23

Optionally upgrade kubelet instances to 1.23 (or they can be left at 1.22 or
1.21)

Note: Before performing a minor version kubelet upgrade, drain pods from that
node. In-place minor version kubelet upgrades are not supported.

Warning: Running a cluster with kubelet instances that are persistently two
minor versions behind kube-apiserver is not recommended:
- they must be upgraded within one minor version of kube-apiserver before the
  control plane can be upgraded
- it increases the likelihood of running kubelet versions older than the three
  maintained minor releases
kube-proxy
- kube-proxy must be the same minor version as kubelet on the node.
- kube-proxy must not be newer than kube-apiserver.
- kube-proxy must be at most two minor versions older than kube-apiserver.

Example:
If the kube-proxy version is 1.21:
- kubelet version must be at the same minor version as 1.21.
- kube-apiserver version must be between 1.21 and 1.23, inclusive.
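The three kube-proxy constraints combine into a simple predicate. kube_proxy_ok is a hypothetical illustration over bare minor-version numbers, not an official checker:

```python
# Sketch of the kube-proxy skew constraints (minor versions as integers).

def kube_proxy_ok(kube_proxy, kubelet, kube_apiserver):
    """kube-proxy must match the node's kubelet minor version, must not be
    newer than kube-apiserver, and must be at most two minors older."""
    return (kube_proxy == kubelet
            and kube_proxy <= kube_apiserver
            and kube_apiserver - kube_proxy <= 2)

# The example above: kube-proxy at 1.21 needs kubelet at 1.21 and
# kube-apiserver between 1.21 and 1.23 inclusive.
print(kube_proxy_ok(21, 21, 23))  # True
print(kube_proxy_ok(21, 21, 24))  # False: more than two minors older
```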