Cloudberry S3 Explorer

Jackie Bullinger
Aug 3, 2024, 11:19:04 AM

I've written a PowerShell script to upload files from a Windows system to an Amazon S3 bucket. The script successfully uploads all files except those over 5 GB. I have the CloudBerry Explorer Pro license, which allows multipart upload on files up to 5 TB. However, there is no flag for multipart in the PowerShell snap-in documentation. CloudBerry support directed me here, as they only support the GUI, not the PowerShell snap-in. When running my script I get the error

I believe the original chunking mechanism in the GUI has been deprecated. I have not tested this myself, but I assume the PowerShell option UseChunks=true still uses the old mechanism? If so, files may be split into multiple parts and not automatically recombined when they arrive on S3. The new GUI multipart upload facility sorts all of this out for you.

We did purchase the CloudBerry Explorer Pro license for the native multipart upload capability, but we wanted to automate it. Based on their documentation, I believe the old chunk method is deprecated in favor of the new multipart functionality. We wound up testing the options listed in the PowerShell documentation. Those options are as follows:

We verified that this successfully uploads files beyond the 5 GB restriction to our S3 bucket. I attempted to get a response from CloudBerry as to whether this uses the old chunking method or the new multipart method, but I was unable to get a straight answer. They confirmed that because we were using Pro, this PowerShell option was supported, but they would not confirm which mechanism the PowerShell command actually uses.

From what I can tell, CloudBerry's legacy chunking mechanism would simply break the file into individual files, which would therefore appear in S3 as multiple objects. The chunk-transparency mechanism would make those multiple chunks appear as a single file in the CloudBerry Explorer GUI only. Since I can see the file as a single object on the S3 side, I'm assuming the PowerShell option uses the new multipart functionality and not the legacy chunking functionality. Again, I was not able to confirm this with CloudBerry, so it's speculation on my part.
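The observable difference described above can be sketched in a few lines. This is an illustrative model only, not CloudBerry's actual implementation: the chunk-naming scheme and 5 MB part size below are assumptions for the sake of the example. The point is what you would see when listing the bucket afterwards.

```python
# Illustrative sketch (assumed naming, not CloudBerry's real scheme):
# legacy "chunking" leaves one independent object per chunk in the bucket,
# while S3 multipart upload recombines the numbered parts server-side,
# so only the original key remains visible.

CHUNK_SIZE = 5 * 1024 * 1024  # hypothetical 5 MB chunk size

def legacy_chunk_keys(key: str, size: int) -> list[str]:
    """Keys a legacy chunked upload would leave in the bucket: one per chunk."""
    parts = max(1, -(-size // CHUNK_SIZE))  # ceiling division
    return [f"{key}.chunk{n:03d}" for n in range(1, parts + 1)]

def multipart_keys(key: str, size: int) -> list[str]:
    """Keys visible after a completed S3 multipart upload: just the original."""
    return [key]

# A 12 MB file uploaded both ways:
print(legacy_chunk_keys("backup.zip", 12 * 1024 * 1024))
# ['backup.zip.chunk001', 'backup.zip.chunk002', 'backup.zip.chunk003']
print(multipart_keys("backup.zip", 12 * 1024 * 1024))
# ['backup.zip']
```

Seeing a single key on the S3 side, as described above, is consistent with the multipart behaviour rather than the legacy chunking behaviour.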

I've followed Tom Kuhlman's instructions on setting up an account, but I want to access the account from a different computer. If I set up CloudBerry Explorer on a different computer, do I get a new set of access keys from my Amazon account, or do I use the original access keys?

This is the third of our sponsored series on CloudBerry's cloud backup, storage and synchronization services. In this episode, we'll dive into CloudBerry Explorer, a Windows application that makes accessing and managing your data in the cloud much easier. If you'd like, you can read part one, Backup to Amazon S3, Azure and Google With CloudBerry Backup, and part two, CloudBerry Box: Securely Synchronize Data From Windows in the Amazon, Azure and HP Clouds.

CloudBerry Explorer provides a fully featured file explorer user interface to your cloud storage accounts at Amazon S3 and Glacier, allowing you to access, move, compare, manage and script files across your local storage and remote cloud repositories. It also works well with other cloud storage providers such as Microsoft Azure, Google Cloud and OpenStack.

The exact feature set varies a bit depending on your cloud storage provider. Here's a feature summary for the free Amazon S3 Explorer application. This gives you an idea of the depth of features Explorer can add to your cloud storage usage scenarios.

If you'd like to read more about the evolution of Explorer, check out the CloudBerry Explorer Blog. The most recent release is summarized in Introducing CloudBerry Explorer 4.0.6, which added support for Amazon Glacier Range Retrieval, Data Retrieval Policies and GovCloud.

I hope you've enjoyed our series on CloudBerry Labs. If you missed our earlier episodes, Backup to Amazon S3, Azure and Google With CloudBerry Backup and CloudBerry Box: Securely Synchronize Data From Windows in the Amazon, Azure and HP Clouds, check them out.

Please feel free to add your questions and comments below; I generally participate in the discussions. If you'd like to know when my next tutorial arrives, check my instructor page. You can also reach me on Twitter @reifman or email me directly.

I have spent all day today trying to find some tutorial or documentation to get me started. I have found lots of links for Amazon S3 SDK, for NetApp management, etc. but nothing for a developer with an on-prem S3 NetApp system. Any good information would be much appreciated.

Hi Alex, first, thanks for your reply. I can't really answer your question, maybe due to my ignorance of the subject; perhaps it's something I should ask the admins who set up our beta S3 system. Let me try to make it clearer by asking you this: is either of the two options you mention something I can use in my .NET programming to access and work with our on-prem S3 storage? To be clear, I am not looking to manage the S3 system; I am not the admin of the on-prem S3 storage (which, to my limited knowledge, is what ONTAP is for: management/administration). I am looking to work with the on-prem S3 system as a user, meaning basic CRUD operations on S3-stored objects. I need an SDK I can use in Visual Studio. I found some Amazon tooling (the AWS Toolkit for Visual Studio), but my big problem is that it, too, seems to be made to work with their cloud storage, not with my on-prem system. Since our admin mentioned that our on-prem S3 is implemented with NetApp, I wish I could find some "NetApp Toolkit for Visual Studio" to help with the needs described above.

Our API toolkits are very much for managing the systems, not putting data on them; that should be done with the standards-compliant interfaces they provide. So we don't provide APIs for putting data on with CIFS, NFS or iSCSI: our systems speak those protocols, and the operating system provides the file-management facilities that talk over them.

S3 is similar: you should be able to talk to it with CloudBerry Explorer, or with the AWS APIs. Here is an example hands-on lab guide that walks through using CloudBerry with a NetApp StorageGRID system -
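The key idea in the answer above is that an on-prem S3 system speaks the same S3 HTTP API as AWS; client code only needs to point at a different endpoint (in the AWS SDK for .NET this is the ServiceURL setting on the client config). The sketch below illustrates the addressing difference that often matters for on-prem gateways: path-style URLs (bucket in the path) versus virtual-hosted style (bucket in the hostname). The hostname and port are hypothetical placeholders, not real StorageGRID defaults.

```python
# Sketch: the same bucket/key maps to different request URLs depending on
# addressing style. On-prem S3 endpoints frequently require path-style
# addressing, since the bucket name is not part of their DNS namespace.

def s3_object_url(endpoint: str, bucket: str, key: str,
                  path_style: bool = True) -> str:
    """Build the request URL for an S3 object, path-style or virtual-hosted."""
    if path_style:
        # https://host:port/bucket/key
        return f"{endpoint.rstrip('/')}/{bucket}/{key}"
    # https://bucket.host:port/key
    scheme, host = endpoint.split("://", 1)
    return f"{scheme}://{bucket}.{host.rstrip('/')}/{key}"

# Against AWS (virtual-hosted style is the AWS default):
print(s3_object_url("https://s3.amazonaws.com", "mybucket",
                    "data/file.bin", path_style=False))
# Against a hypothetical on-prem gateway:
print(s3_object_url("https://storagegrid.example.local:8082", "mybucket",
                    "data/file.bin"))
```

In practice you would not build these URLs by hand; you would hand the endpoint to whatever S3 SDK you use (boto3's endpoint_url parameter, or ServiceURL plus ForcePathStyle in the AWS SDK for .NET) and let it do the rest.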


What with unpredictable I/O, "noisy neighbors" and cutthroat competition, nobody was positioned to deliver a deluxe offering. Not any more. Oracle have taken their biggest database servers and provided them exclusively to customers. Yes, the top-of-the-line Exadata Cloud offering (not to be confused with the rinky-dink Exadata Express offering) is a full-blown quarter, half or full rack with all the equivalent performance, memory and storage bandwidth of the on-premise setup.

Because it is exclusively for your use, the initial rack provisioning is not automated; you have to get it set up. But once the rack is in place, it takes just minutes to spin up a new instance in its own Oracle home, with full RAC capability and backup configuration, right from the cloud front end. I will quickly talk you through this procedure, and you can see that it is very similar to the other DBCS offerings from Oracle, down to the menu options.

The first choice you get when you log into your identity domain is whether you want a normal DBCS database or an Exadata database. Naturally we want the Rolls-Royce treatment. Select the Oracle Database Exadata Cloud Service, and this is the screen we see.

No surprise here: Exadata supports both 11g R2 and 12c R1. This being a virtualized Exadata machine, we get exactly the same options. Since I am thinking of moving one of my legacy databases, I go for an 11g database. Note that the Grid on the Exadata is 12c, but that's fine, as instances can be either of the two versions, since the Grid version must be equal to or greater than the database version.
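The version rule above is simple but worth making explicit. A minimal sketch, comparing (major, release) version tuples; the tuple encoding is my own shorthand for the Oracle version numbers, not an Oracle API:

```python
# The Grid Infrastructure version must be equal to or greater than the
# database version, so a 12c R1 Grid can host both 11g R2 and 12c R1
# instances, but an 11g Grid could not host a 12c database.

def grid_supports_db(grid: tuple[int, int], db: tuple[int, int]) -> bool:
    """True if a Grid at version `grid` can host a database at version `db`."""
    return grid >= db  # tuples compare element-wise, major then release

print(grid_supports_db((12, 1), (11, 2)))  # 12c R1 Grid, 11g R2 db -> True
print(grid_supports_db((11, 2), (12, 1)))  # older Grid, newer db -> False
```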

11.2.0.4 is the terminal release of 11g and is already in Extended Support, which continues until 2020. If this were an on-premise install I would be wondering about support uplift fees due in May 2017, but on the cloud it's all taken care of for me; so, in the immortal words of Bobby McFerrin, "Don't Worry, Be Happy".

This is a very important screen. Not only do we choose which Exadata rack to use, we also choose the service name, SID, admin password, character set and national character set, as well as the backup configuration. We were told by Oracle Support to create our backup containers beforehand, and we used the excellent CloudBerry Explorer to do so.

After entering the backup container information, and choosing to keep backups either only in Oracle Cloud Storage or in both the virtual rack's RECO disk group and cloud storage, we move on to the summary screen.

Click on Create, then wait a bit as the magic happens in the background. Pretty soon you will find your new Oracle home and instances are created, and you can connect to the rack and access the instance. Please note that the only material difference between this and an on-premise offering is that the number of storage cells has been rationalized: a quarter rack has 3, a half rack doubles that at 6, and a full rack has 12 cells.

Also, since this is a dedicated machine, you can set up a site-to-site VPN if you like. However, because it is a PaaS offering, in order to open additional ports you need to raise an SR; you can't directly open any ports you require. I would assume that if you select a quarter rack you are probably getting one quarter of a full rack, but through the wonders of InfiniBand partitioning there is no way to determine this, because all traffic is isolated at the hardware level.

As a longtime user of Exadata on-premise, I find this is a great offering, and coupled with the Express offering (for, say, dev or test) it easily allows any company to shift their high-performance databases to the cloud. I also see a good use case for the Exadata Express offering even when production stays on-premise, because as a DBA I have often heard developers bellyaching that they can't figure out why things work differently on Exadata and that they want a test instance on it. Well, with the Express option I can easily give them one.
