I also used the iSCSI initiator in Windows for a Hyper-V hosted environment. The client bought a big Dell tower server (16 processors, 256 GB of RAM), and we loaded 8 virtual machines on it, all on an iSCSI SAN I built out from the Hyper-V host. Best I can tell they are still running it, and that was about 6 years ago. I saw one of the owners a couple of months ago; they had blown up one of their rack servers, and he mentioned that the server I built was still chugging along, which I assume includes the SAN. Of course, I booted the host from internal RAID 1 drives, and the VMs were all on the iSCSI SAN.
I've been using iSCSI on Windows since the String Bean Software days. iSCSI targets have included EqualLogic, Nimble, and Tegile. The initiator works fine both in guests and on Hyper-V hosts (the Hyper-V role and Hyper-V Server), on both 1G and 10G networks, with and without MPIO. It just works. What are you expecting?
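When bringing up a new target on a 1G or 10G network, the first thing worth confirming is plain reachability of the portal. Here is a minimal sketch, assuming the default iSCSI port 3260 and a made-up portal address; it only proves the TCP path is open, not that a login will succeed:

```python
import socket

def portal_reachable(portal_ip: str, port: int = 3260, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the iSCSI portal succeeds.

    iSCSI targets listen on TCP 3260 by default; this checks basic
    network reachability only, not whether the target accepts a login.
    """
    try:
        with socket.create_connection((portal_ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical portal address -- replace with your SAN's iSCSI IP.
print(portal_reachable("192.168.1.50"))
```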
Nimble Connection Manager just helps configure the connections. If you look in the iSCSI control panel, you will see all the same connections that you see in NCM, which leads me to believe that Nimble Connection Manager is leveraging the native Microsoft iSCSI initiator.
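One quick way to check this yourself is to dump the sessions the native Microsoft initiator is tracking and compare them against what NCM shows. A minimal sketch using the built-in `iscsicli` command (the wrapper function and the comparison workflow are illustrative, not anything NCM ships):

```python
import subprocess

def native_initiator_sessions() -> str:
    """Dump the sessions the built-in Microsoft iSCSI initiator is tracking.

    If NCM is just driving the native initiator, every connection NCM
    shows should also appear in this output.
    """
    # 'iscsicli sessionlist' is a standard Windows command; it prints one
    # block per active session, including the target IQN and connections.
    result = subprocess.run(
        ["iscsicli", "sessionlist"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(native_initiator_sessions())
```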
When using a server to present storage via iSCSI, making sure it won't end up being a single point of failure (SPOF) is golden. If fault tolerance of your data, not to mention the connectivity between partners, is the main priority, data replication can be configured rather quickly.
I have been testing this as a cheap SAN solution for our development environment. We have 14 x 300 GB 15k drives in an MD1000 that came from our Exchange server, configured in RAID 10, that I want to share between two QA/UAT ESX hosts. I am using version 4 of the software, but I am running into an issue: if the SAN is restarted for whatever reason, the ESX servers will not pick it up again once it is back online. I tried refreshing the storage pools as well as the storage adapter. Even though the target is listed in the storage adapter, I have to manually remove the iSCSI target's IP, remove the static mappings, re-add the target to the storage adapter, and re-create the datastore. This is troublesome because, although the iSCSI host should never need to go down, you never know when you may need to restart it for something.
Has anyone experienced this, or does anyone have suggestions? We are using Openfiler for other projects, but there are two issues there: a) a lack of good performance-monitoring tools, and b) no real way to back up and restore in case of disaster (aside from just copying the config). My novice Linux skills keep me from wanting to move something like this into a rock-solid environment. Our Openfilers have done well since I set them up: one serving 1.2 TB has been up for 217 days and one serving 800 GB has been up for about 180 days, so they are solid and stable, but I still have reservations about putting one into an environment that requires higher uptime. Our QA and UAT environments are available to external clients, so I need something I am confident I can restore easily in case of troubleshooting or sudden unavailability.
Could you explain a little more about "if the SAN for whatever reason is restarted"? If it is just a restart, and the target is reachable after the restart, it should be picked up by ESX automatically. I suspect the target may be sending some status that is either confusing the initiator or making the initiator think the target doesn't exist anymore. Even in this case, a manual rescan from the VI client should bring the target and datastore back online.
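That manual rescan can also be scripted. Here is a minimal pyVmomi sketch that triggers the same HBA and VMFS rescan the VI client does; the vCenter hostname, credentials, and the rescan-every-host loop are assumptions to adapt for your environment (requires `pip install pyvmomi`):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; skips cert checks
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        storage = host.configManager.storageSystem
        storage.RescanAllHba()   # re-probe iSCSI targets on all adapters
        storage.RescanVmfs()     # pick the VMFS datastore back up
    view.Destroy()
finally:
    Disconnect(si)
```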
First of all, I appreciate this isn't a 100% real-life comparison, given that all the components of my vSphere and iSCSI environment get shut down when I'm not using them. However, it does seem odd that the connection to the iSCSI volumes is lost after a reboot.
A workaround! It's clunky, but it works:
1. When you lose the connection to the device, remove the device from StarWind (right-click, Remove).
2. Add a new device to the connection, choosing the option to remount an existing image.
3. Remount the image and name it exactly as it was named previously (before you removed it).
4. Rescan the adapter again in the vSphere client, and the iSCSI volume will show up again and the datastore will once again be accessible.
In testing, I find StarWind actually works VERY well. I have used a virtual StarWind server on an older ESX server, and performance was OK. Meaning, it rocked. It doesn't have all the bells and whistles, yet for a virtual iSCSI SAN, performance was very good.
As a standalone server, I use a white-box Q9550 with 3 GB of RAM and 14 x 147 GB 15k drives, and it rocks. The UI is very basic and doesn't have a lot of frills, but then again, its main job is the deployment of storage. I am so looking forward to 5.0. It can easily hit 100% NIC utilization, and I/O is very decent. In testing I can reboot 23+ virtual machines at a whim, and performance is as expected. My next goal will be to try 3 x 128 GB SSD drives via StarWind.