I am looking to expand a virtual volume from about 350 GB to 2.1 TB.
Although it looks quite straightforward, something about the available free space is not quite clear to me.
Here is an overview of the current situation (currently I cannot add images yet):
Pool B states 51 GB free, with 7106 GB unallocated, divided over 2 disk groups.
Each disk group has 3580 GB free.
I have a virtual volume of 350 GB which I would like to increase to 2.1 TB.
I was thinking: there is 7 TB free (read: unallocated), so that should be OK.
Optional: In the Expand By field, enter the size by which to expand the volume. If overcommitting the physical capacity of the system is not allowed, the value cannot exceed the amount of free space in the storage pool.
Does this mean that I cannot extend the virtual volume because there is only 51 GB left in the pool?
Or, if I do so, does it mean that I am overcommitting space?
Meaning, I would be taking space that is initially allocated to other volumes (but not yet used, i.e. free space within those volumes).
That would also mean that if my extended volume fills up, some other volumes will run out of space at some point?
Thanks in advance for clarifying and helping me out.
Kind regards.
First you need to understand what "overcommit" means on the MSA. If you enable this option on a pool, you are saying that, irrespective of the physical capacity, a volume can keep accepting writes up to the size you defined for it. This is nothing but a thin volume, which allocates space only when new writes arrive. If overcommit is not enabled at the pool level, your volumes cannot grow beyond the physical capacity available in that pool. With thin provisioning, the administrator can create a very large volume, up to the maximum size allowed by the host operating system to which the volume is presented. From the storage perspective, the physical capacity limit for a virtual pool is 512 TiB; when overcommit is enabled, the logical capacity limit is 1 PiB.
Hopefully it is now clear why the documentation says: "Optional: In the Expand By field, enter the size by which to expand the volume. If overcommitting the physical capacity of the system is not allowed, the value cannot exceed the amount of free space in the storage pool."
When you consider physical space, you only need to check the virtual disk groups. If the 2 VDGs show 7106 GB unallocated, your volumes can grow up to that much in the current configuration, provided you enable overcommit. You can keep increasing volume sizes to that level or beyond, and keep adding more VDGs as well.
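That threshold check can be sketched with the figures from the question above (the variable names are mine; on a real MSA the figures would come from the CLI or the SMU rather than being hard-coded):

```shell
# Worked capacity check, sketch only, using the numbers reported above.
pool_free_gb=51        # free space reported at the pool level
unallocated_gb=7106    # unallocated space across the two disk groups
current_vol_gb=350
target_vol_gb=2100
expand_by_gb=$(( target_vol_gb - current_vol_gb ))

if [ "$expand_by_gb" -le "$pool_free_gb" ]; then
    overcommit_needed=no     # fits within pool free space
else
    overcommit_needed=yes    # needs overcommit (thin provisioning) enabled
fi
echo "Expand by ${expand_by_gb} GB; overcommit required: ${overcommit_needed}"
```

With these numbers the requested 1750 GB expansion exceeds the 51 GB of pool free space, so the expand only succeeds with overcommit enabled; it stays well inside the 7106 GB of unallocated disk-group capacity, which is what actually backs the writes.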
Thanks Al. Support told me that the ViVol NDrive cannot be deleted until it is emptied out first using "virtual-volume remove" command. So now trying to delete everything via a CIFS mount. Very time consuming and I run into permission issues I can't get past. Has to be another way!
I set up a virtual volume device to control my racked audio equipment. I want to be able to trigger things from the virtual volume device, but I cannot find any way to expose volume up and down in Rule Machine?
Is there any way I could control the volume with the remote? Normally tvOS should be able to control the volume of the operating system as well to cover this case. Do you know if there is such an option in the settings?
Does anyone know of a CLI command or script that will list all the virtual volumes (with their UUIDs) on the VPLEX, regardless of whether they belong to a storage view or not? I have been doing ls -f on the storage view to get the UUID details of the virtual volumes that belong to that storage view, but how do I find the UUIDs of virtual volumes that do not belong to any storage view? All in all, how do I get the UUIDs of ALL the virtual volumes on the VPLEX using a single command? Thank you!
The "-t" tells ls to list the specified attribute (virtual-volumes) of the object view_name. Notice the double colons separating the object from its attribute.
Remember that a storage view won't export a virtual volume until the storage view contains a minimum of a VPLEX FE port, a host initiator port and a virtual volume. So you could see the UUID without actually exporting the virtual volume to a host.
Thanks Andrew. In that case, is there a VPLEX CLI command with a wildcard I could use to produce a list of ALL virtual volumes, with their VPD IDs, that belong to any storage view? Basically, I have the VPD ID of a virtual volume and I have to hunt down the storage view to which it belongs. Obviously I don't want to manually check every storage view for it. Please assist. Thanks.
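One offline approach is to diff the full volume listing against the union of the per-view listings. The sketch below assumes you have already captured that VPlexcli output to plain text files, one volume name per line; the volume names here are made up:

```shell
# Stand-ins for captured VPlexcli output: all_vols.txt from listing the
# virtual-volumes context, exported_vols.txt from the storage views.
cat > all_vols.txt <<'EOF'
vol_A
vol_B
vol_C
EOF
cat > exported_vols.txt <<'EOF'
vol_A
vol_C
EOF
sort all_vols.txt > all.sorted
sort exported_vols.txt > exported.sorted
# comm -23 keeps lines unique to the first file: volumes in no storage view.
unexported=$(comm -23 all.sorted exported.sorted)
echo "Not in any storage view: $unexported"
```

The same diff, run the other way around with your VPD ID grepped out of the per-view output, tells you which view a given volume sits in.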
I have played around with macOS Big Sur for a few weeks but decided to downgrade to Catalina. Big Sur handles volumes a bit differently and creates a virtual volume, the Data volume. The interesting part is that this multiple-virtual-volumes situation persists after the downgrade too, and now I have 2 similarly named volumes (MacBookPro) and an Update volume (seemingly another virtual one).
I have one DELL PowerVault ME4012 connected to two host servers (DELL PowerEdge R740 on Ubuntu 18.04) using 12 Gbps SAS HBAs. The ME4012 storage is configured as one virtual volume with RAID type ADAPT. The volume is mounted on both host servers (ext4). When I write a file on the mounted volume from Host A, the file does not appear on the volume mounted on Host B. The file shows up on Host B only after I unmount and then remount the volume on Host B.
I have tried changing the virtual volume cache setting in the ME4012 management interface from the default "write-back" to "write-through", to no effect. So what else can I try to make files written from Host A visible on Host B immediately?
It's like opening the same directory in two separate windows: create a new folder in window A and it appears instantly in window B as well, because it is the same directory open twice. I expect the same to work across two servers because the underlying directory/volume is still the same.
You must not mount an ext4 volume concurrently on two machines. Not seeing data written on one host on the other host is one of the more harmless results. You will also see data corruption, one host overwriting data the other host has written, and worse.
You should never mount a simple EXT volume on two different hosts. You are going to cause all kinds of issues. If you need access to those files from multiple machines, connect them to a host and share them using SMB/NFS.
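A minimal sketch of the NFS route suggested above, assuming Host A keeps the ext4 mount and exports it to Host B (the path and subnet are placeholders, not from the thread):

```
# /etc/exports on Host A, which keeps the ext4 volume mounted:
/srv/me4012  192.168.10.0/24(rw,sync,no_subtree_check)
```

After running `exportfs -ra` on Host A, Host B mounts the share with `mount -t nfs hostA:/srv/me4012 /mnt/me4012`. Both hosts then see writes through a single ext4 mount, which is what keeps the filesystem consistent.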
Generally, this volume will be called pure-protocol-endpoint, which is the automatically created one. Though some users might create their own (or rename this). You can tell if you have one or more PEs presented to a host if the volume name is black and not clickable.
I have a question regarding MariaDB and Docker. Is it wise to use the volume that is already provided with the official MariaDB Docker image? Or is it better to create a folder that is shared with the host, for better performance? One of my colleagues was afraid that read/write operations could be too slow in the virtual volume.
Use volumes for write-heavy workloads: Volumes provide the best and most predictable performance for write-heavy workloads. This is because they bypass the storage driver and do not incur any of the potential overheads introduced by thin provisioning and copy-on-write...
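For a named volume, that advice translates into something like the following Compose sketch (the image tag, password, and volume name are examples, not from the thread):

```yaml
services:
  db:
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: example
    volumes:
      # Named volume: managed by Docker and, per the docs quoted above,
      # not subject to the storage driver's copy-on-write overhead.
      - mariadb_data:/var/lib/mysql
volumes:
  mariadb_data:
```

A bind mount would instead use a host path such as `./data:/var/lib/mysql`; on Linux the performance difference between the two is usually small, since both sit outside the container's writable layer.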
From the VVols page, available from the side panel for a selected cluster, you can view information about virtual volumes and their associated storage containers, protocol endpoints, bindings, and hosts.
I have closed every application, and even LOGGED off and back on, but the virtual disk is still present. I can only unmount by restarting windows or by physically disconnecting the drive holding the .pgd image. (Note: Disabling the AV on-demand process does not resolve this issue).
When moving the PGP-Disk Container File to another NTFS volume, the security settings of this particular file will be replaced by the security settings of the new NTFS drive that the PGP-Disk is located on.
However, PGP didn't seem to recognize properly that the file was read-only (probably because the PGP version was too old to work well with Windows 7); instead it treated it like a writable PGP volume. When you try to dismount it, the NTFS driver wants to write something to the PGP disk, and thus the error message appears. With FAT32 PGP disks, the dismount doesn't require writing data to the PGP disk, so the dismount will work.
Meeting strict business SLAs for performance, managing rapidly growing production databases, and simultaneously reducing backup windows and their impact on system performance often force DBAs to delay virtualization of business-critical databases and workloads. Frequent demands for database cloning and refreshing further complicate matters.
An RDM is a mapping file in a separate VMFS volume that acts as a proxy for a raw physical storage device. With the RDM, a virtual machine can access and use the storage device directly. The RDM contains metadata for managing and redirecting disk access to the physical device.
You can use RDMs in virtual compatibility or physical compatibility modes. Virtual mode specifies full virtualization of the mapped device. Physical mode specifies minimal SCSI virtualization of the mapped device, allowing the greatest flexibility for SAN management software.
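As a dry-run sketch, the two compatibility modes map to two vmkfstools flags when creating the RDM pointer file from the ESXi shell (the device ID and datastore paths below are placeholders, not real identifiers):

```shell
# Placeholders only: substitute your own device ID and datastore path.
device=/vmfs/devices/disks/naa.XXXXXXXXXXXX
ptr=/vmfs/volumes/datastore1/vm1/vm1_rdm.vmdk

# "-r" creates a virtual compatibility mode RDM (full virtualization);
# "-z" creates a physical compatibility (pass-through) RDM.
cmd_virtual="vmkfstools -r $device $ptr"
cmd_physical="vmkfstools -z $device $ptr"
echo "$cmd_virtual"
echo "$cmd_physical"
```

The pointer .vmdk lands on a VMFS datastore, which is the "mapping file in a separate VMFS volume" described above; the guest's I/O is redirected through it to the raw device.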
To store virtual disks, ESXi uses datastores. The datastores are logical containers that hide specifics of physical storage from virtual machines and provide a uniform model for storing the virtual machine files.
The datastores that you deploy on block storage devices use the native vSphere Virtual Machine File System (VMFS) format. It is a special high-performance file system format that is optimized for storing virtual machines.
VMware vSphere Virtual Volumes, also known as vVols, virtualizes storage devices by abstracting physical hardware resources into logical pools of capacity. The Virtual Volumes functionality changes the storage management paradigm from managing space inside datastores to managing abstract storage objects handled by storage arrays. With Virtual Volumes, an individual virtual machine, not the datastore, becomes a unit of storage management, while storage hardware gains complete control over virtual disk content, layout, and management.