
Releases: openyurtio/openyurt

v1.3.3

04 Jul 09:13
1896468

What's Changed

  • [Backport release-v1.3] change access permission to default in general. by @github-actions in #1584
  • [Backport release-v1.3] feat: add yurtadm binaries release workflow by @github-actions in #1586

Full Changelog: v1.3.2...v1.3.3

v1.3.2

21 Jun 01:54
0c6489f

What's Changed

  • [Backport release-v1.3] check nil in podbinding_controller.go and handle some errors by @github-actions in #1560
  • [Backport release-v1.3] fix yurtstaticset workerpod reset error by @github-actions in #1561
  • [Backport release-v1.3] fix yurthub memory leak by @github-actions in #1562

Full Changelog: v1.3.1...v1.3.2

v1.3.1

24 May 08:17
071d86d

What's Changed

  • [Backport release-v1.3] fix memory leak for yurt-tunnel-server by @github-actions in #1493
  • [Backport release-v1.3] revert yurt-tunnel components release by @github-actions in #1494

Full Changelog: v1.3.0...v1.3.1

v1.3.0

15 May 02:14
ac6ce54

What's New

Refactor OpenYurt control plane components

To improve the management of the repos in the OpenYurt community and reduce the complexity of installing OpenYurt,
the community agreed, after detailed discussion, to introduce a new component named yurt-manager that manages the controllers
and webhooks previously scattered across multiple components (like yurt-controller-manager, yurt-app-manager, raven-controller-manager, etc.).

After the refactoring, new controllers and webhooks based on the controller-runtime framework can easily be added to the
yurt-manager component in the future. Also note that the yurt-manager component is recommended to be installed on the same node as the K8s control-plane components (like kube-controller-manager). #1067
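As a sketch, the co-location recommendation could be expressed in the yurt-manager Deployment with standard Kubernetes scheduling fields (the Deployment excerpt below is hypothetical; only the well-known control-plane node label and taint are assumed):

```yaml
# Hypothetical excerpt of a yurt-manager Deployment spec: pin the
# component to control-plane nodes, next to kube-controller-manager.
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
```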

Support OTA or AdvancedRollingUpdate upgrade models for static pods

As you know, static pods are managed directly by the kubelet daemon on the node, and no APIServer watches them.
In general, to upgrade a static pod (like YurtHub), a user must manually modify or replace the manifest of the static pod.
This becomes a very tedious and error-prone task when the number of static pods grows large.

Users can now define Pod templates and upgrade models through the YurtStaticSet CRD. Both the OTA and AdvancedRollingUpdate upgrade models are supported,
easily meeting the upgrade needs of large-scale static pods. The Pod template in the yurthub YurtStaticSet is also used to
install the YurtHub component on a node when the node joins the cluster. #1261, #1168, #1172
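As an illustration, a YurtStaticSet might look like the sketch below (the spec field names here are assumptions inferred from the description above, not the authoritative schema; consult the CRD for the exact fields):

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtStaticSet
metadata:
  name: yurt-hub
  namespace: kube-system
spec:
  staticPodManifest: yurthub        # target manifest under the kubelet's static pod dir (assumed field)
  upgradeStrategy:
    type: AdvancedRollingUpdate     # or: OTA
    maxUnavailable: 10%
  template:
    metadata:
      name: yurt-hub
    spec:
      containers:
        - name: yurt-hub
          image: openyurt/yurthub:v1.3.0
```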

NodePort Service supports nodepool isolation

In edge scenarios, users of NodePort services expect the nodePort ports to be listened on only in specified nodepools,
in order to prevent port conflicts and save edge resources.

Users can specify the nodepools to listen in by adding the annotation nodeport.openyurt.io/listen to a NodePort or
LoadBalancer service, thus gaining the nodepool isolation capability for NodePort and LoadBalancer services. #1183, #1209
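For example, a NodePort service restricted to two nodepools could be annotated as in this sketch (the nodepool names are placeholders, and the comma-separated annotation value format is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo
  annotations:
    # listen on the nodePort only within these nodepools (value format assumed)
    nodeport.openyurt.io/listen: hangzhou,shanghai
spec:
  type: NodePort
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
```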

Other Notable changes

Fixes

Contributors

Thank you to everyone who contributed to this release!

And thank you very much to everyone else not listed here who contributed in other ways, like filing issues,
giving feedback, helping users in the community group, etc.

Full Changelog: v1.2.0...v1.3.0

v1.3.0-rc1

21 Apr 08:38
48f7c79
v1.3.0-rc1 Pre-release

The release candidate for OpenYurt v1.3.0

v1.2.1

17 Feb 02:35
8b41abc

What's Changed

  • [Backport release-v1.2] Bugfix: fix handle poolcoordinator certificates in case of restarting… by @github-actions in #1193
  • [Backport release-v1.2] fix: re-list when target change by @github-actions in #1238
  • [Backport release-v1.2] fix unavailable coordinator image by @github-actions in #1239
  • [Backport release-v1.2] fix: pool-coordinator cannot be rescheduled when its node fails (#1212) by @github-actions in #1240
  • [Backport release-v1.2] Fix: make rename replace old dir by @github-actions in #1241

Full Changelog: v1.2.0...v1.2.1

v1.2.0

30 Jan 12:05
33adccc

What's New

Improve edge autonomy capability when cloud-edge network off

The original edge autonomy feature keeps the pods on a node from being evicted even if the node crashes, by adding an annotation to the node;
it is recommended for scenarios where pods must be bound to a node without recreation.
With the improved edge autonomy capability, when a node is NotReady because the cloud-edge network is off, its pods will not be evicted,
because the leader yurthub proxies the heartbeats of these offline nodes to the cloud via the pool-coordinator component.
If the node itself crashes, its pods will still be evicted and recreated on other ready nodes.

Note that the original edge autonomy capability enabled by annotating a node (with node.beta.openyurt.io/autonomy) is kept as it is,
and it influences all pods on autonomy nodes. A new annotation (apps.openyurt.io/binding) can be added to a workload to
enable the original edge autonomy capability for the specified pods only.
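A minimal sketch of the two annotations described above (the node and workload names are placeholders, and the "true" values are an assumption):

```yaml
# Node-level autonomy: keeps all pods on the node from being evicted.
apiVersion: v1
kind: Node
metadata:
  name: edge-node-1
  annotations:
    node.beta.openyurt.io/autonomy: "true"
---
# Workload-level binding: enables the original autonomy behavior only
# for the pods of this workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-app
  annotations:
    apps.openyurt.io/binding: "true"
```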

Reduce the control-plane traffic between cloud and edge

Based on the pool-coordinator in a nodepool, a leader yurthub is elected in each nodepool. The leader yurthub
lists/watches pool-scope data (like endpoints/endpointslices) from the cloud and writes it into the pool-coordinator. All components
in the nodepool (like kube-proxy/coredns) then get pool-scope data from the pool-coordinator instead of the cloud kube-apiserver,
so a large volume of control-plane traffic is eliminated.

Use raven component to replace yurt-tunnel component

Raven has released version v0.3 and provides cross-regional network communication based on PodIP or NodeIP, while yurt-tunnel
can only forward cloud-edge requests for kubectl logs/exec commands. Because raven provides much more than the capabilities
of yurt-tunnel and has been proven in substantial production work, the raven component is now officially recommended as the replacement for yurt-tunnel.

Other Notable changes

Fixes

  • bugfix: StreamResponseFilter of data filter framework can't work if size of one object is over 32KB by @rambohe-ch in #1066
  • bugfix: add ignore preflight errors to adapt kubeadm before version 1.23.0 by @YTGhost in #1092
  • bugfix: dynamically switch apiVersion of JoinConfiguration to adapt to different versions of k8s by @YTGhost in #1112
  • bugfix: yurthub can not exit when SIGINT/SIGTERM happened by @rambohe-ch in #1143

Contributors

Thank you to everyone who contributed to this release!

And thank you very much to everyone else not listed here who contributed in other ways like filing issues,
giving feedback, helping users in community group, etc.

v1.1.0

08 Nov 02:31
23f45da

What's New

Support OTA/Auto upgrade model for DaemonSet workload

Extends the native DaemonSet OnDelete upgrade model by providing two new upgrade models: OTA and Auto.

  • OTA: the workload owner can control the upgrade of the workload through the REST API exposed on edge nodes.
  • Auto: solves the DaemonSet upgrade process blocking caused by node NotReady when the cloud-edge network is disconnected.
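As a sketch, opting a DaemonSet into one of the new models might look like this (the annotation key and value below are assumptions based on the description above, not a confirmed API):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-agent
  annotations:
    apps.openyurt.io/update-strategy: OTA   # assumed key; "Auto" would be the other model
spec:
  selector:
    matchLabels:
      app: edge-agent
  updateStrategy:
    type: OnDelete                          # OTA/Auto extend the native OnDelete model
  template:
    metadata:
      labels:
        app: edge-agent
    spec:
      containers:
        - name: agent
          image: example.com/edge-agent:v1.1.0
```

In the OTA model, the workload owner would then call the REST API exposed on the edge node to confirm the upgrade of a specific pod; the exact endpoint is version-specific and not shown here.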

Support autonomy feature validation in e2e tests

To test the autonomy feature, the network interface of the control-plane is disconnected to simulate a cloud-edge network outage; components (like kube-proxy, flannel, coredns, etc.) are then stopped, and the recovery of these components is checked.

Improve the Yurthub configuration for enabling the data filter function

Compared to the previous three configuration items (component name, resource, and request verb), only the component name now needs to be configured to enable the data filter function. The original configuration format is still supported in order to keep compatibility.
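A sketch of what the simplified configuration might look like in the yurthub ConfigMap (the ConfigMap name and the key/value formats are assumptions based on the description above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: yurt-hub-cfg
  namespace: kube-system
data:
  # after the improvement: only the component name is required
  filter_servicetopology: coredns
  # the original three-part format (component/resource#verbs) is still accepted:
  # filter_servicetopology: coredns/endpointslices#list;watch
```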

Other Notable changes

Fixes

  • even no endpoints left after filter, an empty object should be returned to clients by @rambohe-ch in #1028
  • non resource handle miss for coredns by @rambohe-ch in #1044

Contributors

Thank you to everyone who contributed to this release!

And thank you very much to everyone else not listed here who contributed in other ways, like filing issues, giving feedback, helping users in the community group, etc.

Full Changelog: v1.0.0...v1.1.0

v1.0.1

17 Oct 02:49
d98ecfb

What's Changed

  • [Backport release-v1.0] bugfix: even no endpoints left after filter, an empty object should be returned to clients by @github-actions in #1030

Full Changelog: v1.0.0...v1.0.1

v1.0.0

09 Sep 08:26
b899f84

We're excited to announce the release of OpenYurt 1.0.0!🎉🎉🎉

Thanks to all the new and existing contributors who helped make this release happen!

If you're new to OpenYurt, feel free to browse the OpenYurt website, then start with OpenYurt Installation and learn about its core concepts.

Acknowledgements ❤️

Nearly 20 people have contributed to this release, and 8 of them are new contributors. Thanks to everyone!

@huiwq1990 @Congrool @zhangzhenyuyu @rambohe-ch @gnunu @LinFCai @guoguodan @ankyit @luckymrwang @zzguang @HXCGIT @Sodawyx @luc99hen @River-sh @slm940208 @windydayc @lorrielau @fujitatomoya @donychen1134

What's New

API version

The version of the NodePool API has been upgraded to v1beta1; more details in openyurtio/yurt-app-manager#104.
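For illustration, a NodePool in the upgraded API version might be declared like this (the spec content is a minimal assumption; see openyurtio/yurt-app-manager#104 for the real schema):

```yaml
apiVersion: apps.openyurt.io/v1beta1
kind: NodePool
metadata:
  name: hangzhou
spec:
  type: Edge
```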

Meanwhile, all API definitions in OpenYurt will be migrated to the openyurtio/api repo, and we recommend importing that package to use the OpenYurt APIs.

Code Quality

We track unit test coverage with CodeCov.
Code coverage for some repos is as follows:

  • openyurtio/openyurt: 47%
  • openyurtio/yurt-app-manager: 37%
  • openyurtio/raven: 53%

More details on unit test coverage can be found at https://codecov.io/gh/openyurtio

In addition to unit tests, other levels of testing have also been added.

Performance Test

OpenYurt makes Kubernetes work in a cloud-edge collaborative environment with a non-intrusive design, so the performance of
some OpenYurt components has been considered carefully. Several test reports have been submitted so that end users can clearly
see the working status of OpenYurt components.

Installation Upgrade

The early installation way (converting a K8s cluster to OpenYurt) has been removed. OpenYurt cluster installation is divided into two parts:

and all OpenYurt control plane components are managed by Helm charts in the repo: https://github.com/openyurtio/openyurt-helm

Other Notable changes

Full Changelog: v0.7.0...v1.0.0-rc1

Thanks again to all the contributors!