USB Loader GX 4.3 Updated Version.43

This release brings some important bug fixes, especially around table migrations. Table migrations are an important feature of RDB Loader: if you update an Iglu schema (e.g. from version 1-0-0 to 1-0-1) then the loader automatically alters the target table to accommodate the newer events. However, we discovered a number of edge cases where migrations did not work as expected.

If you update an Iglu schema by raising the maxLength setting of a string field, RDB Loader should respond by widening the corresponding column, e.g. from VARCHAR(10) to VARCHAR(20). Because of this bug, RDB Loader did not alter the column length; instead, it attempted to load the newer events into the table without running the migration. You might be affected if you have recently updated an Iglu schema by raising the max length of a field. If you think you have been affected, we suggest you check your entity tables and manually alter them if needed.
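
For example, assuming a hypothetical entity table atomic.com_acme_product_1 whose description column was created as VARCHAR(10) before the schema was updated to allow 20 characters, a manual fix in Redshift could look like the statement below (the table and column names are purely illustrative):

    -- widen the column to match the raised maxLength in the newer schema version
    ALTER TABLE atomic.com_acme_product_1
        ALTER COLUMN description TYPE VARCHAR(20);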

If a table migration is immediately followed by a batch that cannot be loaded for any reason, the table could be left in an inconsistent state where the migration was only partially applied. If this ever happened, RDB Loader could get stuck on successive loads, failing repeatedly with the same error message.

It is possible, and completely allowed, for a batch of events to contain multiple versions of the same schema, e.g. both 1-0-0 and 1-0-1. However, because of this bug, the loader was in danger of trying to perform the table migration twice, which could produce the same error as in the previous case.

This is a new feature you can benefit from if you load into Snowflake with RDB Loader. The Snowflake loader allows two alternative methods for authentication between the warehouse and the S3 bucket: either using Snowflake storage integration, or using temporary credentials generated with AWS STS. Previously, you were forced to pick the same method for loading events and for folder monitoring. With this change, it is possible to use the storage integration for loading events, but temporary credentials for folder monitoring. This is beneficial if you want the faster load times from using a storage integration, but do not want to go through the overhead of setting up a storage integration just for folder monitoring.
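
A rough sketch of the idea in the Snowflake loader's HOCON configuration is shown below. The key names and values are illustrative assumptions only; consult the loader's reference configuration for the exact fields before relying on them:

    "storage": {
      # events are loaded through a pre-configured Snowflake storage integration / stage
      "transformedStage": "snowplow_transformed_stage"

      # folder monitoring authenticates with temporary credentials from AWS STS
      # (key name illustrative)
      "folderMonitoringLoadAuthMethod": {
        "type": "TempCreds"
        "roleArn": "arn:aws:iam::123456789012:role/example-loader-role"
      }
    }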

This is a low-impact bug that is not expected to have any detrimental effect on loading. It could affect your pipeline if you load into Snowflake or Databricks, and if your warehouse is set to have a non-UTC timezone by default.

This bug affects the manifest table, which is the table the loader uses to track which batches have been loaded already. Because of this bug, timestamps in the manifest table were stored using the default timezone of the warehouse, not UTC. This bug could only affect you in the unlikely case you use the manifest table for some other purpose.

If you are already using a recent version of RDB Loader (3.0.0 or higher) then upgrading to 4.3.0 is as simple as pulling the newest docker images. There are no changes needed to your configuration files.
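
For example, if you run the Snowflake loader as a container, the upgrade is just a matter of pulling the new tag (the image name below is an assumption; substitute the image you already deploy):

    docker pull snowplow/rdb-loader-snowflake:4.3.0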

Red Hat OpenShift Container Platform provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management overhead. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP.

OpenShift Container Platform 4.11 (RHSA-2022:5069) is now available. This release uses Kubernetes 1.24 with the CRI-O runtime. New features, changes, and known issues that pertain to OpenShift Container Platform 4.11 are included in this topic.

OpenShift Container Platform 4.11 clusters are available through the Red Hat OpenShift Cluster Manager. With this application, you can deploy OpenShift clusters to either on-premises or cloud environments.

The scope of support for layered and dependent components of OpenShift Container Platform changes independently of the OpenShift Container Platform version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy.

RHCOS now uses Red Hat Enterprise Linux (RHEL) 8.6 packages in OpenShift Container Platform 4.11 and above. This enables you to have the latest fixes, features, and enhancements, as well as the latest hardware support and driver updates.

The redirector hostname for downloading RHCOS boot images is now rhcos.mirror.openshift.com. You must configure your firewall to grant access to the boot images. For more information, see Configuring your firewall for OpenShift Container Platform.

This release updates the minimum system requirements for installing OpenShift Container Platform on a single node. When installing OpenShift Container Platform on a single node, you should configure a minimum of 16 GB of RAM; specific workloads may require additional RAM. The complete list of supported platforms has been updated to include bare metal, vSphere, Red Hat OpenStack Platform (RHOSP), and Red Hat Virtualization. In all cases, you must specify the platform.none: parameter in the install-config.yaml configuration file when using the openshift-installer binary to install single-node OpenShift.
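
As a sketch, the relevant fragment of install-config.yaml for a single-node install could look like the following (other required fields are omitted; the replica counts are the conventional single-node values and are an assumption here):

    # fragment of install-config.yaml for single-node OpenShift (illustrative)
    controlPlane:
      name: master
      replicas: 1
    compute:
    - name: worker
      replicas: 0
    platform:
      none: {}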

OpenShift Container Platform 4.11 is now supported on ARM architecture based AWS user-provisioned infrastructure and bare-metal installer-provisioned infrastructure. For more information about instance availability and installation documentation, see Supported installation methods for different platforms.

By default, the installation program now deploys a Microsoft Azure cluster using Hyper-V generation version 2 virtual machines (VMs). If the installation program detects that the instance type selected for the VMs does not support version 2, it uses version 1 for the deployment.

OpenShift Container Platform 4.11 introduces support for the AWS Secret Commercial Cloud Services (SC2S) region. You can now install and update OpenShift Container Platform clusters in the us-isob-east-1 SC2S region.

OpenShift Container Platform 4.11 introduces support for installing a cluster on Nutanix using installer-provisioned infrastructure. This type of installation lets you use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains.

You can now enable Ultra SSD storage when installing OpenShift Container Platform on Azure. This feature requires that both the Azure region and zone where you install OpenShift Container Platform offer Ultra storage.

When deploying an installer-provisioned OpenShift Container Platform cluster on bare metal with static IP addresses and no DHCP server on the baremetal network, you must specify a static IP address for the bootstrap VM and the static IP address of the gateway for the bootstrap VM. OpenShift Container Platform 4.11 provides the bootstrapExternalStaticIP and the bootstrapExternalStaticGateway configuration settings, which you can set in the install-config.yaml file before deployment. The introduction of these settings replaces the workaround procedure Assigning a bootstrap VM an IP address on the baremetal network without a DHCP server from the OpenShift Container Platform 4.10 release.
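
As a rough sketch, the new settings sit alongside the other bare-metal options in install-config.yaml; the addresses below are placeholders:

    platform:
      baremetal:
        # static IP assigned to the bootstrap VM on the baremetal network
        bootstrapExternalStaticIP: 192.0.2.10
        # gateway the bootstrap VM should use
        bootstrapExternalStaticGateway: 192.0.2.1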

OpenShift Container Platform 4.11 introduces support for configuring the BIOS and RAID arrays of control plane nodes when installing OpenShift Container Platform on bare metal with Fujitsu hardware. In OpenShift Container Platform 4.10, configuring the BIOS and RAID arrays on Fujitsu hardware was limited to worker nodes.

You can use the oc-mirror OpenShift CLI (oc) plugin to mirror images in a disconnected environment. This feature was previously introduced as a Technology Preview in OpenShift Container Platform 4.10 and is now generally available in OpenShift Container Platform 4.11.

If you used the Technology Preview version of the oc-mirror plugin for OpenShift Container Platform 4.10, it is not possible to migrate your mirror registry to OpenShift Container Platform 4.11. You must download the new oc-mirror plugin, use a new storage back end, and use a new top-level namespace on the target mirror registry.
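
A typical invocation, assuming an image set configuration file has already been created and the images are mirrored straight to a registry (the file name and registry host are placeholders), looks like this:

    oc mirror --config=./imageset-config.yaml docker://registry.example.com:5000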

OpenShift Container Platform 4.11 on Azure provides accelerated networking for control plane and compute nodes. Accelerated networking is enabled by default for supported instance types in an installer-provisioned infrastructure installation.

You are no longer required to configure AWS VPC endpoints when installing a restricted OpenShift Container Platform cluster on AWS. While configuring VPC endpoints remains an option, you can also choose to configure a proxy without VPC endpoints or configure a proxy with VPC endpoints.

OpenShift Container Platform 4.11 allows you to disable the installation of the baremetal and marketplace Operators, and the openshift-samples content that is stored in the openshift namespace. You can disable these features by adding the baselineCapabilitySet and additionalEnabledCapabilities parameters to the install-config.yaml configuration file prior to installation. If you disable any of these capabilities during the installation, you can enable them after the cluster is installed. After a capability has been enabled, it cannot be disabled again.
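
For illustration, the following install-config.yaml fragment starts from an empty baseline and re-enables only the marketplace capability; the values shown are an example, not a recommendation:

    capabilities:
      baselineCapabilitySet: None
      additionalEnabledCapabilities:
      - marketplace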

Components with versions earlier than the minimums listed for this release are deprecated or removed. Deprecated versions are still fully supported, but Red Hat recommends that you use ESXi 7.0 Update 2 or later and vSphere 7.0 Update 2 or later, up to but not including version 8; vSphere 8 is not supported.

OpenShift Container Platform 4.11 introduces clusters with multi-architecture compute machines support using Azure installer-provisioned infrastructure in Technology Preview. As a day-two operation, this feature lets you add arm64 compute nodes to an existing x86_64 Azure cluster that was installer-provisioned with a multi-architecture installer binary. You can add arm64 compute nodes to your cluster by creating a custom Azure machine set that uses a manually generated arm64 boot image. Control planes on arm64 architectures are not currently supported. For more information, see Configuring a multi-architecture cluster.
