The improvements to Aurora storage in this version limit the available upgrade paths from Aurora MySQL 1.23 to Aurora MySQL 2.*. When you upgrade an Aurora MySQL 1.23 cluster to 2.*, you must upgrade to Aurora MySQL 2.09.0 or later.
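As a sketch of what the upgrade call can look like with boto3 (the cluster identifier is a placeholder, and you should confirm the exact target engine version string for your region, for example via describe_db_engine_versions):

```python
import boto3

rds = boto3.client("rds")

# Placeholder cluster name; "5.7.mysql_aurora.2.09.0" follows the engine
# version string format Aurora MySQL 2.09.0 uses -- verify it before running.
rds.modify_db_cluster(
    DBClusterIdentifier="my-aurora-1-23-cluster",
    EngineVersion="5.7.mysql_aurora.2.09.0",
    AllowMajorVersionUpgrade=True,  # required for the 1.x -> 2.x jump
    ApplyImmediately=True,          # otherwise waits for the maintenance window
)
```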
In GKE version 1.24 and later, Docker-based node image types are not supported. In GKE version 1.23, you also cannot create new node pools that use Docker-based node image types. You must migrate to a containerd node image type. To learn more about this change, see About the Docker node image deprecation.
The Amazon Elastic Kubernetes Service (Amazon EKS) team is happy to announce support for Kubernetes 1.23. Amazon EKS and Amazon EKS Distro can now run Kubernetes version 1.23, with support in Amazon EKS Anywhere launching soon after.
This release has several important changes, including the Pod Security admission controller moving to beta, updates to the Container Storage Interface (CSI) migration for Amazon Elastic Block Store (Amazon EBS) volumes, and the deprecation and removal of certain beta application programming interfaces (APIs) and features, detailed below. Notably, the CSI driver updates require some customers to take action to ensure a smooth upgrade process. Kubernetes 1.23 also introduces Pod Security Standards (PSS) and Pod Security Admission (PSA), as well as the general availability (GA) of Horizontal Pod Autoscaler (HPA) v2. Thank you for all the work the upstream Kubernetes 1.23 release team did to bring this release to the greater cloud-native ecosystem.
In Kubernetes 1.23, dual-stack IPv4/IPv6 networking is now GA, allowing pods and services to be assigned addresses from both IPv4 and IPv6. However, Amazon EKS support for IPv6 (introduced in 1.21) focuses on resolving the address exhaustion caused by the limited IPv4 address space, a primary issue that Amazon EKS customers face today. While dual-stack may help with migration use cases, pods and services still consume an IPv4 address. With Amazon EKS in IPv6 mode, pods receive IPv6 addresses while maintaining the ability to route to IPv4 endpoints. To learn more about IPv6 mode and how it is distinct from the upstream dual-stack support, see the IPv6 section in the documentation.
CPU Manager Policy Options has graduated to beta in Kubernetes 1.23, meaning you can now configure additional options to fine-tune the behavior of CPU Manager policies. With the default CPU allocation strategy, containers can be allocated individual virtual cores, which can lead to containers sharing the same physical core and potentially causing noisy-neighbor issues. With 1.23, you can now configure the full-pcpus-only policy option, which guarantees that pods will only be started when their central processing unit (CPU) requests can be fulfilled with full physical cores. Learn more in the CPU Management Policies section of the Kubernetes docs.
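As a minimal sketch (the config file path is an assumption and the surrounding kubelet bootstrap is out of scope), a KubeletConfiguration enabling the option could be generated like this; the kubelet accepts JSON as well as YAML:

```python
import json

# Assumed path; the kubelet config location varies by distribution.
KUBELET_CONFIG_PATH = "/etc/kubernetes/kubelet-config.json"

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # Policy options require the (non-default) static policy.
    "cpuManagerPolicy": "static",
    # Beta in 1.23: only admit pods whose exclusive CPU requests
    # can be satisfied with whole physical cores.
    "cpuManagerPolicyOptions": {"full-pcpus-only": "true"},
}

with open(KUBELET_CONFIG_PATH, "w") as f:
    json.dump(kubelet_config, f, indent=2)
```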
Linux containers in Kubernetes have been able to run in privileged mode, which allows them to access the host operating system for administrative capabilities. This feature is now available in 1.23 for Windows containers by setting the windowsOptions.hostProcess flag in the Pod specification. Although this mode is not recommended for most workloads, it can be helpful for certain security or monitoring purposes. Note that HostProcess pods run directly on the host, and all containers in these pods must run as HostProcess containers. Learn more about privileged mode for containers on the Kubernetes docs site.
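A hedged sketch using the official Kubernetes Python client (the pod name, image, and command are placeholders); note that HostProcess pods must also use the host's network namespace, and the whole pod opts in via the pod-level securityContext:

```python
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="hostprocess-demo"),
    spec=client.V1PodSpec(
        host_network=True,  # required for HostProcess pods
        node_selector={"kubernetes.io/os": "windows"},
        security_context=client.V1PodSecurityContext(
            windows_options=client.V1WindowsSecurityContextOptions(
                host_process=True,
                run_as_user_name="NT AUTHORITY\\SYSTEM",
            )
        ),
        containers=[
            client.V1Container(
                name="monitor",
                image="mcr.microsoft.com/windows/nanoserver:ltsc2022",  # placeholder
                command=["powershell.exe", "-Command", "Get-Process"],
            )
        ],
        restart_policy="Never",
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```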
To simplify the code base, several logging flags were marked as deprecated in Kubernetes 1.23. The code that implements them will be removed in a future release, so users of these flags should start migrating to the available alternatives.
IPv4/IPv6 dual-stack networking graduates to GA. Since 1.21, Kubernetes clusters have been enabled to support dual-stack networking by default. In 1.23, the IPv6DualStack feature gate is removed. The use of dual-stack networking is not mandatory. Although clusters are enabled to support dual-stack networking, Pods and Services continue to default to single-stack. To use dual-stack networking: Kubernetes nodes must have routable IPv4/IPv6 network interfaces, a dual-stack capable CNI network plugin must be used, Pods must be configured to be dual-stack, and Services must have their .spec.ipFamilyPolicy field set to either PreferDualStack or RequireDualStack.
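For illustration, a Service requesting dual-stack via the Python client might look like this (the selector and port are placeholders; on a single-stack cluster, PreferDualStack simply falls back to one address family):

```python
from kubernetes import client, config

config.load_kube_config()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1ServiceSpec(
        selector={"app": "my-app"},
        ports=[client.V1ServicePort(port=80)],
        # PreferDualStack: use both families when the cluster supports them,
        # fall back to single-stack otherwise; RequireDualStack fails instead.
        ip_family_policy="PreferDualStack",
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)
```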
The feature to configure volume permission and ownership change policy for Pods moved to GA in 1.23. This allows users to skip the recursive permission change on mount, which speeds up pod startup.
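A fragment showing the relevant field, built with the Python client's models (the fsGroup value is arbitrary):

```python
from kubernetes import client

# With OnRootMismatch, the kubelet skips the recursive ownership/permission
# change when the volume root already matches the expected fsGroup, instead
# of walking the whole volume on every mount (the "Always" default).
security_context = client.V1PodSecurityContext(
    fs_group=2000,
    fs_group_change_policy="OnRootMismatch",
)
```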
PodSecurity moves to Beta. PodSecurity replaces the deprecated PodSecurityPolicy admission controller. PodSecurity is an admission controller that enforces Pod Security Standards on Pods in a Namespace based on specific namespace labels that set the enforcement level. In 1.23, the PodSecurity feature gate is enabled by default.
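For example, labeling a namespace (the name here is hypothetical) so that PodSecurity enforces the restricted standard might look like this with the Python client:

```python
from kubernetes import client, config

config.load_kube_config()

# The pod-security.kubernetes.io labels set the enforcement level;
# pinning enforce-version keeps behavior stable across cluster upgrades.
ns = client.V1Namespace(
    metadata=client.V1ObjectMeta(
        name="restricted-apps",
        labels={
            "pod-security.kubernetes.io/enforce": "restricted",
            "pod-security.kubernetes.io/enforce-version": "v1.23",
            # "warn" and "audit" modes can be set independently of "enforce":
            "pod-security.kubernetes.io/warn": "restricted",
        },
    )
)
client.CoreV1Api().create_namespace(body=ns)
```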
Expression language validation for CustomResourceDefinitions (CRDs) is alpha starting in 1.23. If the CustomResourceValidationExpressions feature gate is enabled, custom resources will be validated by rules written in the Common Expression Language (CEL).
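A sketch of what such a rule looks like, using a hypothetical Widget CRD (group, kind, and field names invented for illustration); the manifest is passed as a plain dict to the Python client:

```python
from kubernetes import client, config

config.load_kube_config()

crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "widgets.example.com"},
    "spec": {
        "group": "example.com",
        "names": {"plural": "widgets", "singular": "widget", "kind": "Widget"},
        "scope": "Namespaced",
        "versions": [{
            "name": "v1",
            "served": True,
            "storage": True,
            "schema": {"openAPIV3Schema": {
                "type": "object",
                "properties": {"spec": {
                    "type": "object",
                    "properties": {
                        "minReplicas": {"type": "integer"},
                        "maxReplicas": {"type": "integer"},
                    },
                    # CEL rule the API server evaluates on writes:
                    "x-kubernetes-validations": [{
                        "rule": "self.minReplicas <= self.maxReplicas",
                        "message": "minReplicas must not exceed maxReplicas",
                    }],
                }},
            }},
        }],
    },
}
client.ApiextensionsV1Api().create_custom_resource_definition(body=crd)
```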
If the ServerSideFieldValidation feature gate is enabled starting in 1.23, users will receive warnings from the server when they send Kubernetes objects in a request that contains unknown or duplicate fields. Previously, unknown fields and all but the last of any duplicate fields were silently dropped by the server.
If the OpenAPIV3 feature gate is enabled starting in 1.23, users will be able to request the OpenAPI v3.0 spec for all Kubernetes types. OpenAPI v3 aims to be fully transparent and includes support for a set of fields that are dropped when publishing OpenAPI v2: default, nullable, oneOf, and anyOf. A separate spec is published per Kubernetes group version (at the $cluster/openapi/v3/apis/<group>/<version> endpoint) for improved performance, and a discovery document listing all group versions can be found at the $cluster/openapi/v3 path.
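As a rough illustration using the Python client's raw API access (the call parameters here are my assumption, not a documented recipe), the discovery document can be fetched like this:

```python
from kubernetes import client, config

config.load_kube_config()
api = client.ApiClient()

# Raw GET against the discovery path described above; on a 1.23 cluster this
# requires the OpenAPIV3 feature gate to be enabled.
discovery = api.call_api(
    "/openapi/v3", "GET",
    auth_settings=["BearerToken"],
    response_type="object",
    _return_http_data_only=True,
)
print(discovery)  # maps each group/version to its per-group spec path
```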
A huge thank you to the release lead Rey Lejano for leading us through a successful release cycle, and to everyone else on the release team for supporting each other, and working so hard to deliver the 1.23 release for the community.
Join members of the Kubernetes 1.23 release team on January 4, 2022 to learn about the major features of this release, as well as deprecations and removals to help plan for upgrades. For more information and registration, visit the event page on the CNCF Online Programs site.
Container Engine for Kubernetes now supports Kubernetes version 1.23.4, in addition to versions 1.22.5 and 1.21.5. Oracle recommends you upgrade your Kubernetes environment to version 1.23.4. For more information about Kubernetes 1.23.4, see the Kubernetes Changelog.
NumPy 1.23.4 is a maintenance release that fixes bugs discovered after the 1.23.3 release and keeps the build infrastructure current. The main improvements are fixes for some annotation corner cases, a fix for a long-standing nested_iters memory leak, and a fix of complex vector dot for very large arrays. The Python versions supported for this release are 3.8-3.11.
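Purely illustrative (the size and values are arbitrary, and this does not reproduce the bug itself), the complex vector dot path mentioned above is exercised like so:

```python
import numpy as np

a = np.full(1_000_000, 1 + 1j)  # large complex vector
# (1+1j)**2 == 2j, so the dot product of a with itself is 2000000j.
print(np.__version__, np.dot(a, a))
```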
In Rust 1.23, these methods are now defined directly on those types, and so you no longer need to import the trait. Thanks to our stability guarantees, this trait still exists, so if you'd like to still support Rust versions before Rust 1.23, you can do this:
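```rust
// Assuming the trait in question is std::ascii::AsciiExt (the trait whose
// methods became inherent in 1.23): keep the import for older compilers,
// and silence the unused-import lint on 1.23 and later.
#[allow(unused)]
use std::ascii::AsciiExt;
```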
The first step in converting 1.23 to a fraction is to rewrite 1.23 in the form p/q, where p and q are both positive integers. To start with, 1.23 can be written as 1.23/1, which technically puts it in the form of a fraction.
Next, we count the number of fractional digits after the decimal point in 1.23, which in this case is 2. For however many digits there are after the decimal point, we multiply the numerator and denominator of 1.23/1 by 10 raised to that power. For instance, for 0.45 there are 2 fractional digits, so we would multiply by 100; for 0.324, since there are 3 fractional digits, we would multiply by 1000. So, in this case, we multiply the numerator and denominator of 1.23/1 each by 100:
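\[
\frac{1.23}{1} = \frac{1.23 \times 100}{1 \times 100} = \frac{123}{100}
\]

Since 123 = 3 × 41 and 100 = 2² × 5² share no common factor, 123/100 is already in lowest terms, so 1.23 as a fraction is 123/100.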
1.23 has several database changes since 1.22 and will not work without schema updates. Note that due to changes to some very large tables, like the revision table, the schema update may take quite a long time (minutes on a medium-sized site, many hours on a large site).
MediaWiki 1.23 is an obsolete long-term support release of MediaWiki. Consult the RELEASE NOTES file for the full list of changes. It was deployed on Wikimedia Foundation wikis through incremental "wmf" branches starting October 2013. The 1.23.0 stable release was released on June 5, 2014. Download the latest release or check out the REL1_23 branch in Git to follow this release. This is a long-term support (LTS) release, which was supported until the end of May 2017.
With 1.23, MediaWiki starts to behave more like a modern website with regard to notifications, keeping the editors of your wiki engaged and always up to date about what interests them. This used to require several custom settings.
The Dynatrace documentation states that new nginx versions are usually supported after 2 weeks. We have had success with nginx 1.23.1. However, 1.23.2 seems to break injection, with "Process is ready to be monitored, waiting for injection status..." shown on the OneAgent deployment status page.
Nginx version 1.23.2 was released on 2022-10-19, which is well over 1.5 months ago now, well past the announced 2-week window. And I can't find any reference to its addition in the latest release notes.
For more details about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.23. You can download the benchmark after logging in to CISecurity.org.
Remediation: When running RKE2 with the profile flag set to cis-1.23, RKE2 will refuse to start if the etcd user and group do not exist on the host. If they do exist, RKE2 will automatically set the ownership of the etcd data directory to etcd:etcd and ensure the etcd static pod is started with that user and group.