The upgrade includes hundreds of improvements and fixes, and features that include reconfigurable datanode parameters, a DFSAdmin option to initiate bulk reconfiguration operations on all live datanodes, and a vectored API that allows seek-heavy readers to specify multiple ranges to read. Hadoop 3.3.6 also adds support for using HDFS APIs and semantics for the HBase write-ahead log (WAL), so that HBase can run on other storage system implementations. For more information, see the changelogs for versions 3.3.4, 3.3.5, and 3.3.6 in the Apache Hadoop documentation.
This release improves the management of ZooKeeper transaction log files that are maintained on primary nodes to minimize scenarios where the log files grow out of bounds and interrupt cluster operations.
Tez in Amazon EMR 6.15.0 introduces configurations that you can specify to asynchronously open the input splits in a Tez grouped split. This results in faster performance of read queries when there are a large number of input splits in a single Tez grouped split. For more information, see Tez asynchronous split opening.
When you launch a cluster with the latest patch release of Amazon EMR 5.36 or higher, 6.6 or higher, or 7.0 or higher, Amazon EMR uses the latest Amazon Linux 2023 or Amazon Linux 2 release for the default Amazon EMR AMI. For more information, see Using the default Amazon Linux AMI for Amazon EMR.
Amazon EMR releases 6.12.0 and higher support all applications with Amazon Corretto 8 by default, except for Trino. For Trino, Amazon EMR supports Amazon Corretto 17 by default starting with Amazon EMR release 6.9.0. Amazon EMR also supports some applications with Amazon Corretto 11 and 17. Those applications are listed in the following table. If you want to change the default JVM on your cluster, follow the instructions in Configure applications to use a specific Java Virtual Machine for each application that runs on the cluster. You can only use one Java runtime version for a cluster. Amazon EMR doesn't support running different nodes or applications on different Java runtime versions on the same cluster.
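As a sketch of what such a change can look like, the following configuration classification sets JAVA_HOME for Hadoop to Corretto 11. The classification name and path follow the common EMR pattern, but verify both against the documentation for your specific release:

```json
[
  {
    "Classification": "hadoop-env",
    "Configurations": [
      {
        "Classification": "export",
        "Configurations": [],
        "Properties": {
          "JAVA_HOME": "/usr/lib/jvm/java-11-amazon-corretto.x86_64"
        }
      }
    ],
    "Properties": {}
  }
]
```

You supply this JSON as the cluster's configuration when you create it, or apply it to an instance group on a running cluster.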
While Amazon EMR supports both Amazon Corretto 11 and 17 on Apache Spark, Apache Hadoop, and Apache Hive, performance might regress for some workloads when you use these versions of Corretto. We recommend that you test your workloads before you change defaults.
The components that Amazon EMR installs with this release are listed below. Some are installed as part of big-data application packages. Others are unique to Amazon EMR and installed for system processes and features. These typically start with emr or aws. Big-data application packages in the most recent Amazon EMR release are usually the latest version found in the community. We make community releases available in Amazon EMR as quickly as possible.
Some components in Amazon EMR differ from community versions. These components have a version label in the form CommunityVersion-amzn-EmrVersion. The EmrVersion starts at 0. For example, if an open-source community component named myapp-component with version 2.2 has been modified three times for inclusion in different Amazon EMR releases, its release version is listed as 2.2-amzn-2.
Configuration classifications allow you to customize applications. These often correspond to a configuration XML file for the application, such as hive-site.xml. For more information, see Configure applications.
Reconfiguration actions occur when you specify a configuration for instance groups in a running cluster. Amazon EMR only initiates reconfiguration actions for the classifications that you modify. For more information, see Reconfigure an instance group in a running cluster.
Restarts the Hadoop HDFS services Namenode, SecondaryNamenode, Datanode, ZKFC, and Journalnode. Restarts the Hadoop YARN services ResourceManager, NodeManager, ProxyServer, and TimelineServer. Additionally restarts Hadoop KMS, Ranger KMS, HiveServer2, Hive MetaStore, Hadoop Httpfs, and MapReduce-HistoryServer.
Restarts the Hadoop HDFS services Namenode, SecondaryNamenode, Datanode, ZKFC, and Journalnode. Restarts the Hadoop YARN services ResourceManager, NodeManager, ProxyServer, and TimelineServer. Additionally restarts HBaseRegionserver, HBaseMaster, HBaseThrift, HBaseRest, HiveServer2, Hive MetaStore, Hadoop Httpfs, and MapReduce-HistoryServer.
Restarts the Hadoop HDFS services Namenode, SecondaryNamenode, Datanode, ZKFC, and Journalnode. Restarts the Hadoop YARN services ResourceManager, NodeManager, ProxyServer, and TimelineServer. Additionally restarts PhoenixQueryserver, HiveServer2, Hive MetaStore, and MapReduce-HistoryServer.
Restarts the Hadoop HDFS services SecondaryNamenode, Datanode, and Journalnode. Restarts the Hadoop YARN services ResourceManager, NodeManager, ProxyServer, and TimelineServer. Additionally restarts Hadoop KMS, Hadoop Httpfs, and MapReduce-HistoryServer.
As the title says, I wanted to know if it is possible to import a Confluence space from a newer version of Confluence Server, in this case 6.15.9, to an older version of Confluence Server, in this case 6.4.2. The Confluence instances exist on separate servers, both of which are outside my direct administrative control.
I looked into this documentation, and it warns about importing FROM older versions. I can't seem to find an answer as to whether it is possible to import TO older versions. Any help would be greatly appreciated.
The docs all say "no", but this, I think, is Atlassian pushing you to just use the latest version. That's not a bad thing to do; newer versions are almost always better than older ones. But in the real world, we can't just upgrade every time, and people do have older versions.
So, I can't recommend you do it, and I can't say that it will work at all (let alone be flawless), but I have never had a problem moving or copying a space from any major version to any other version of the same major version since Confluence 4.0.
So, with a minor edit to lie about the export version in the xml (the xml reports the export version - use a text editor to change it to the target version), I would be pretty sure that 6.15 to 6.4 should work mostly fine (note - I'm assuming both systems have the same apps added to them). I can't say the same for 6.anything down to 5.anything, but 6 to 6 has, in my experience, worked fine.
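If you want to script the edit described above, a sketch like the following works on an extracted export. Note that the file name (exportDescriptor.properties) and key (buildNumber) are assumptions to verify against what your export actually contains, since the answer only says the version is reported in the xml:

```shell
# Sketch: rewrite the build number a Confluence space export reports.
# ASSUMPTIONS: the export contains exportDescriptor.properties with a line
# like "buildNumber=<n>"; file name, key, and numbers here are illustrative.
mkdir -p space-export
printf 'buildNumber=7901\nspaceKey=DEMO\n' > space-export/exportDescriptor.properties

TARGET_BUILD=6452   # build number of the older, target instance (illustrative)
sed -i "s/^buildNumber=.*/buildNumber=${TARGET_BUILD}/" space-export/exportDescriptor.properties

grep '^buildNumber=' space-export/exportDescriptor.properties
```

Repack the export afterwards and keep a copy of the untouched original in case the target instance rejects it.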
When I google that error, I find some docs saying it might be a corrupt config file, but seeing as the file remains untouched in a persistent volume during the upgrade and still works if I revert back to 6.15.7, I expect that is not actually the issue here.
Pretty sure I figured it out; it ended up being a permissions error.
A change made earlier this month switched from running the confluence appdata dir under the built-in daemon user to creating a new confluence user with uid:gid of 2002:2002 during startup and running under that. Also, I'm pretty sure the confluence docker image used to always run a chown on the /var/atlassian/application-data/confluence/ dir during startup to prevent issues like this, but I guess that isn't working or has been removed.
I ended up just running chown on the host system to change the permissions of the persistent volume before starting up Confluence 6.15.8. On my host system my persistent volume is mounted under /home/ec2-user/confluence/ so I ran the following cmd for a fix:
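The exact command wasn't preserved in the post; based on the mount path and uid:gid mentioned above, it was presumably along these lines (a sketch, so verify the path and ids on your own host before running it):

```shell
# Hypothetical reconstruction of the fix; the exact command was not shown in
# the post. 2002:2002 is the uid:gid the newer image creates for its
# confluence user, and /home/ec2-user/confluence/ is the host mount path.
# Run as root on the host:
#
#   chown -R 2002:2002 /home/ec2-user/confluence/
#
# Safe demonstration of the same flags on a scratch directory:
DEMO_DIR=$(mktemp -d)
touch "${DEMO_DIR}/confluence.cfg.xml"
chown -R "$(id -u):$(id -g)" "${DEMO_DIR}"   # harmless target, same pattern
stat -c '%u:%g' "${DEMO_DIR}/confluence.cfg.xml"
```

The -R flag matters here: Confluence writes throughout its data directory, so every file and subdirectory in the volume needs to be owned by the uid the container process runs as.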
The vertical ellipsis menu on the host details page has been updated with new actions. These actions force a package profile upload on the host through remote execution, ensuring that the applicability calculation is up to date.
Previously, provisioning templates used the Katello CA Consumer to register hosts during provisioning, which is deprecated and not compatible with newer RHEL systems. With this release, provisioning templates register hosts by using the same method as the Global Registration template, because they include the shared subscription_manager_setup snippet.
If your Capsules have synchronized content enabled, you can refresh the content counts for the environments associated with the Capsule. This displays the Content Views inside those environments that are available to the Capsule. You can then expand a Content View to view the repositories associated with that Content View version.
You can now install and enable fapolicyd on Satellite Server and Capsule Server. The fapolicyd software framework is one of the most efficient ways to prevent running untrusted and possibly malicious applications on the system.
Admin users can now see the end of life (EOL) date in the Satellite web UI if the EOL date of the Satellite version is within the next 6 months. This information displays as a warning banner. The warning banner changes to an error banner if the Satellite version is past the EOL date. You can dismiss the banners, but they reappear after one month or on the EOL date.
When you register a host using Hosts > Register Host in the Satellite web UI and there is only one activation key available for the organization and location selected in the registration form, Satellite selects the activation key automatically.
Previously, when background actions such as repository synchronization failed, users had to log in to the Satellite web UI to learn about the failures. With this update, you can configure email notifications for the following events: failed content view promotion, failed content view publish, failed Capsule sync, and failed repository sync.
To start receiving the notifications, log in to the Satellite web UI and navigate to Administer > Users. Select the required user, switch to the Email Preferences tab, and specify the required notifications. Make sure that the Mail Enabled checkbox on the Email Preferences tab is selected. Note that users whose accounts are disabled do not receive any notification emails.