feat(helm): update chart openebs (4.2.0 → 4.3.0) #627
This PR contains the following updates:
openebs: 4.2.0 -> 4.3.0
Warning: Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
Release Notes
openebs/openebs (openebs)
v4.3.0
Compare Source
OpenEBS 4.3.0 Release Notes
Release Summary
OpenEBS version 4.3 introduces several functional fixes and new features focused on improving data security, user experience, high availability (HA), replica rebuilds, and overall stability. The key highlights are Mayastor's support for at-rest data encryption and a new OpenEBS kubectl plugin that allows users to interact with all engines supplied by the OpenEBS project. In addition, the release includes various usability and functional fixes for the Mayastor, ZFS, LocalPV LVM, and LocalPV Hostpath provisioners, along with documentation enhancements to help users and new contributors get started quickly.
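As a quick, hedged sketch of the new plugin (assuming it is installed as a kubectl plugin named `openebs`; the subcommands below mirror the existing Mayastor plugin conventions and may differ, so consult `kubectl openebs --help` on your install):

```shell
# List volumes and pools via the unified OpenEBS plugin.
# Subcommand names are assumptions modelled on the Mayastor plugin;
# verify with `kubectl openebs --help`.
kubectl openebs get volumes
kubectl openebs get pools
```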
Change Summary
kubectl openebs plugin
Local Storage
ZFS Enhancements:
ZFS Fixes:
LVM Enhancements:
LocalPV Hostpath Improvements:
LocalPV Release Notes
LocalPV-ZFS:
Refer to the LocalPV-ZFS v2.8.0 release for detailed changes.
LocalPV-LVM:
Refer to the LocalPV-LVM v1.7.0 release for detailed changes.
LocalPV Hostpath:
Refer to the LocalPV Hostpath v4.3.0 release for detailed changes.
Known Issues for LocalPV
Controller Pod Restart on Single Node Setups:
After upgrading, single-node setups may hit an issue where the ZFS-localpv/LVM-localpv controller pod does not enter the Running state, due to changes in the controller manifest (it is now a Deployment) and missing affinity rules.
Workaround: Delete the old controller pod so the new pod can be scheduled correctly (see the sketch below). This does not happen when upgrading from the immediately previous release of ZFS-localpv/LVM-localpv.
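A minimal sketch of that workaround, assuming a default install in the openebs namespace (the label selector is an assumption; check the pod's actual labels first):

```shell
# Find and delete the stuck controller pod; the new Deployment reschedules it.
# Namespace and label selector are assumptions for a default install.
kubectl -n openebs get pods --show-labels
kubectl -n openebs delete pod -l app=openebs-zfs-controller
```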
Thin pool issue with LocalPV-LVM
Thin pool capacity is not unmapped/reclaimed, and it is also not tracked in the lvmnode CR, which can cause unexpected behaviour when working with thin volumes. Refer to "When using lvm thinpool type, csistoragecapacities calculation is incorrect" (openebs/lvm-localpv#382).
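Until this is addressed, actual thin pool utilisation can be inspected directly on the node with standard LVM tooling; a sketch, where the volume group name is a placeholder:

```shell
# Show data/metadata usage of thin pools in volume group "vg0" (placeholder name).
lvs -o lv_name,pool_lv,data_percent,metadata_percent vg0
```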
Limitations for LocalPV
LocalPV-LVM:
LVM-localpv supports volume snapshots, but it does not yet support restoring from a snapshot; that is on our roadmap. Snapshot creation itself goes through the standard Kubernetes snapshot API, as sketched below.
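A minimal snapshot example using the standard snapshot.storage.k8s.io/v1 API (the class, snapshot, and PVC names are placeholders; a VolumeSnapshotClass for the LVM CSI driver must already exist):

```shell
# Snapshot an LVM-backed PVC via the standard Kubernetes snapshot API.
# All names below are placeholders.
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: lvm-pvc-snap
spec:
  volumeSnapshotClassName: lvm-snapclass
  source:
    persistentVolumeClaimName: lvm-pvc
EOF
```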
Replicated Storage (Mayastor)
Support for at-rest data encryption:
OpenEBS offers support for data-at-rest encryption to help ensure the confidentiality of persistent data stored on disk.
With this capability, any disk pool configured with a user-defined encryption key can host encrypted volume replicas.
This feature is particularly beneficial in environments requiring compliance with regulatory or security standards.
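As a rough sketch of the setup, the user-defined key would typically be stored in a Kubernetes Secret that the encrypted DiskPool then references (the secret name, key field, and exact DiskPool wiring are assumptions; consult the OpenEBS documentation for the authoritative schema):

```shell
# Create a Secret holding key material for pool encryption.
# The name and key field are assumptions; the DiskPool CR is expected to
# reference this Secret per the OpenEBS at-rest encryption docs.
kubectl -n openebs create secret generic pool-encryption-key \
  --from-literal=key="$(openssl rand -hex 32)"
```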
Additional Enhancements:
Upgrade and Backward Incompatibilities
Kubernetes Requirement: Kubernetes 1.23 or higher is recommended.
Engine Compatibility: Upgrades to OpenEBS 4.3.0 are supported only for the following engines:
Replicated PV Mayastor Release Notes
Refer to the Mayastor v2.9.0 release for detailed changes.
Known Behavioural Limitations
Known Issues - Replicated PV (Mayastor)
DiskPool Capacity Expansion:
Mayastor does not support the capacity expansion of DiskPools as of v2.9.0.
fsfreeze Operation Failure:
If a pod-based workload is scheduled on a node that reboots and the pod lacks a controller (such as a Deployment or StatefulSet), the volume unpublish operation might not be triggered. This leads the control plane to assume the volume is still published, causing the fsfreeze operation to fail during snapshot creation.
Workaround: Recreate or reinstate the pod to ensure proper volume mounting.
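A minimal sketch of that workaround for a standalone pod, assuming its manifest is available to re-apply (pod and file names are placeholders):

```shell
# Recreate the standalone pod so the volume is unpublished and republished cleanly.
# "my-pod" and "my-pod.yaml" are placeholders for your workload.
kubectl delete pod my-pod
kubectl apply -f my-pod.yaml
```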
DiskPool's backing device failure
If the backend device that hosts a DiskPool runs into a fault or gets removed (e.g. cloud disk removal), the status of the DiskPool and its hosted replicas is not clearly updated to reflect the problem. As a result, the failures are not handled gracefully, and the volume might remain Degraded for an extended period of time.
Extremely large pool undergoing dirty shutdown
In case of a dirty shutdown of an io-engine node hosting an extremely large pool (e.g. 10 TiB or 20 TiB), pool recovery hangs after the node comes back online.
Extremely large filesystem volumes fail to provision
Filesystem volumes sized in terabytes (e.g. more than 15 TiB) fail to provision successfully because filesystem formatting hangs.
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.