I want to upgrade my old xw4600 workstation with a 550W power supply, but it never boots. I've heard that the 24-pin connector on the motherboard is wired differently from normal PCs. Can anyone provide the pinout for this 24-pin connector, or tell me where I can find a suitable power supply?
Hi, thanks a lot! Is there an ATX-size power supply that supports this workstation? I'm moving its motherboard to a new case, and the original supply is too large. The ones on eBay seem to be the same size as mine, and I cannot find the pin specs on
Some people are having trouble with an MTU of 9000. I suggest leaving the MTU at 1500 and making sure everything works there before testing an MTU of 9000. Also, if you run into networking issues, look at disabling TSO offloading (see comments).
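If you want to test that quickly on the FreeNAS side, here is a minimal sketch, assuming the VMXNET3 interface shows up as vmx0 (check ifconfig first):

# Disable TCP segmentation offload (and large receive offload) on the interface for testing
ifconfig vmx0 -tso -lro
# To turn TSO off globally and persistently, add a FreeNAS tunable for:
sysctl net.inet.tcp.tso=0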
Create a new VM, choose Custom, put it on one of the drives on the SATA ports, Virtual Machine version 11, Guest OS type FreeBSD 64-bit, 1 socket and 2 cores. Try to give it at least 8GB of memory. For networking, give it two adapters: assign the 1st NIC to the VM Network and the 2nd NIC to the Storage network. Set both to VMXNET3.
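If you would rather script the VM creation than click through the wizard, something like this govc sketch should produce an equivalent VM. The datastore and network names are assumptions for illustration, and the flags are from memory, so check govc vm.create -h first:

# FreeBSD 64-bit guest, 2 vCPUs, 8GB RAM, 1st NIC on the VM Network as VMXNET3
govc vm.create -ds datastore1 -g freebsd64Guest -c 2 -m 8192 -net "VM Network" -net.adapter vmxnet3 -on=false freenas
# Add the 2nd VMXNET3 NIC on the Storage network
govc vm.network.add -vm freenas -net "Storage" -net.adapter vmxnet3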
If all looks well, shut down FreeNAS (you can now choose Shut Down Guest from VMware to safely power it off), remove the E1000 NIC, and boot it back up (note that the IP address for the web GUI will be different).
System, Certificates, Create Internal Certificate. Once again bump the key length to 4096. The important part here is that the Common Name must match your DNS entry. If you are going to access FreeNAS via IP, then you should put the IP address in the Common Name field.
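Once the certificate is in place, one quick sanity check from any machine with OpenSSL is to confirm the subject the GUI actually presents matches the name you connect with (stor1.b3n.org here is just an example hostname):

# Print the subject (Common Name) of the certificate served by the FreeNAS web GUI
openssl s_client -connect stor1.b3n.org:443 -servername stor1.b3n.org </dev/null 2>/dev/null | openssl x509 -noout -subject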
The Description will show up in FreeNAS and it will survive reboots. It will also follow the drive even if you move it to a different slot. So it may be more appropriate to make your description match a label on the removable trays rather than the bay number.
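If you want the descriptions to line up with the tray labels, the disk serial number is the easiest thing to match on; from the FreeNAS shell (ada0 is just an example device):

# List the detected disks and their device names
camcontrol devlist
# Show the serial number of a specific disk so it can be matched to its tray label
smartctl -i /dev/ada0 | grep -i serial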
I have on numerous occasions had the Log get changed to Stripe after I set it to Log, so just double-check by clicking on the top-level tank, then the Volume Status icon, and make sure it looks like this:
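You can also verify from the shell; the SLOG must show up under a separate logs section, not as another top-level (striped) vdev:

# The ZIL device should be listed under "logs" in the pool layout
zpool status tank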
If you have a second FreeNAS server (say stor2.b3n.org) you can replicate the snapshots over to it. On stor1.b3n.org, go to Replication Tasks, View Public Key, and copy the key to the clipboard.
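Under the hood a replication task is essentially a zfs send piped over SSH; a hand-run equivalent looks roughly like this (the dataset, snapshot, and hostname are illustrative):

# Push a recursive snapshot of tank/vms from stor1 to stor2
zfs send -R tank/vms@auto-20150101 | ssh stor2.b3n.org zfs receive -F tank/vms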
The goal is, on power loss, to shut down all the VMware guests, including FreeNAS, before the battery gives out. So far all I have gotten working is the APC with VMware. Edit the VM settings and add a USB controller, then add a USB device and select the UPS, in my case an APC Back-UPS ES 550G. Power FreeNAS back on.
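FreeNAS talks to the UPS through NUT, so once the USB device is passed through, the UPS service only needs a driver entry. What the GUI writes out is roughly equivalent to this (the section name is arbitrary, and the config path is the usual one for the FreeBSD NUT port):

# ups.conf entry for a USB HID UPS such as the Back-UPS ES 550G
[ups]
    driver = usbhid-ups
    port = auto

# After enabling the UPS service, confirm FreeNAS can read the UPS status
upsc ups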
This is a really great post that gave me some good tips. There is great overlap between the hardware we use.
In an all-in-one like this, do you see any benefits to using vSphere 6 over 5.5? I use a combination of the desktop client and the command line to manage the VMs and thus run vmx-09 instead of 11.
At first I thought iSCSI was going to be the ticket and it certainly was blazingly fast running the ATTO benchmark on a single Windows VM. But it turns out that it breaks down badly under any kind of load. When running the ATTO benchmark simultaneously on both Windows VMs I get this sequence of error messages:
Hi, Keith. Thanks for the update. I did not run into the iSCSI issue, but I mainly run NFS in my environment. From that bug it looks like there is an issue with iSCSI, but here are a few ideas to check for NFS performance:
RE: 2> I tried increasing the NFS servers from 4 to 6, 16, and 32, again with no real difference in outcome. Adding servers only seems to delay the point at which NFS breaks down and the ATTO benchmarks stall.
In addition, I created my FreeNAS VM using hardware version 8 instead of 11. Not sure that the version would matter. I did run into issues with pkg_add for compat6x-amd64 and perl, but worked around it by changing PACKAGESITE to -archive.freebsd.org/pub/FreeBSD-Archive/ports/amd64/packages-9.2-release/Latest/
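In case it helps anyone else, the fix amounts to pointing pkg_add at an archive mirror before installing; the mirror hostname below is a placeholder since the link above got truncated:

# Point pkg_add at the 9.2-RELEASE package archive (substitute the real archive mirror)
export PACKAGESITE="http://<archive-mirror>/pub/FreeBSD-Archive/ports/amd64/packages-9.2-release/Latest/"
pkg_add -r compat6x-amd64 perl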
One thing you might try is going to an older version of VMware or an older version of FreeNAS (not to run in production, obviously, just to troubleshoot). Also, see if the vmxnet3 drivers work with a normal FreeBSD 9.3 install.
On a side note, have you tried adding an additional physical NIC to your system and running a lagg for storage in FreeNAS? In my old FreeNAS box I had 1 NIC for management & 2 in a lagg for storage, and I was hoping to do the same with this setup.
Thanks for the confirmation of the issue, Richard. I remembered that at work we had to disable segmentation offloading because of a bug in the 10Gb Intel drivers (which had not been fixed by Intel as of February, at least), and it may be the same issue with the VMXNET3 driver. See my comment above responding to Keith and let me know if that helps your situation at all.
Just for everyone's benefit and clarity, I am running FreeNAS on an HP xw4600 workstation with a quad-port Intel card using the igb driver. I also have link aggregation configured; however, I am only using fault-tolerant teams, so this should be safe.
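For reference, a fault-tolerant (failover) team maps to something like this in FreeBSD terms, with igb0/igb1 as example ports; the FreeNAS GUI does the same thing under Network, Link Aggregations:

# Failover lagg: traffic uses igb0 and fails over to igb1 if the link drops
ifconfig lagg0 create
ifconfig lagg0 laggproto failover laggport igb0 laggport igb1 up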
Horrible performance :) So I started reading your comments, and all the discussion on the FreeNAS forums you linked to. I got a bit confused as to what your actual conclusions were. Am I right in assuming that the following is what you came up with?
The problem with ESXi on NFS is that it forces an O_SYNC (cache flush) after every write. ESXi does not do this with iSCSI by default (but that means you will lose the last few seconds of writes if power is lost, unless you set sync=always, which then gives it the same disadvantage as NFS).
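The sync behavior is just a per-dataset property, so it is easy to flip while testing (tank/vmstore is an example dataset name):

# Force every write to be flushed to stable storage (safe for iSCSI, but NFS-like speed)
zfs set sync=always tank/vmstore
# Default behavior: only honor sync when the client requests it (what iSCSI relies on)
zfs set sync=standard tank/vmstore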
Hi, I found your blog from a Google search and am reading a few posts now. It turns out you have quite a similar all-in-one system to mine, except I use an all-SSD environment (I also set up VMXNET3 using the binary driver, so we are really on the very same road of finding an optimized way of setting up the system). I don't want to be locked into any kind of HW RAID, so ZFS is my choice.
Basically I think FreeNAS 9.3 is not good for production, as it has a lot of trouble with the kernel iSCSI target. We might need to wait for FreeNAS 10. In its first few updates, FreeNAS 9.3 was really buggy. I still don't know why my FreeNAS has quite low write speed (if I fire up another SAN VM and hand the SAS controller + my pool over to it, it gets better writes) (all use the same settings: 2 vCPUs, 8GB RAM, VT-d LSI SAS controller, ZFS mirror with 4 SSDs).
Hi, Dave. Yeah, for that $100 price difference the S3710 is the way to go. Sometimes the supply/demand gets kind of odd after merchants have lowered their inventories of old hardware which is what appears to be happening here.
This brings things back to life for me. I never experience what you are describing on boot, but perhaps something is going on in the background where your mount comes online, disconnects, and then comes back online again, causing ESXi to put the share into an inaccessible state.
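When a share does get stuck in the inaccessible state, removing and re-adding the NFS mount from the ESXi shell usually clears it (the host IP, export path, and datastore name below are examples):

# Check the current state of the NFS datastores
esxcli storage nfs list
# Remove the stale mount and add it back
esxcli storage nfs remove -v nfs_tank
esxcli storage nfs add -H 10.55.1.2 -s /mnt/tank/vmware -v nfs_tank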
Hi folks,
I have a MicroServer Gen8 and I would like to experiment with sharing storage back to the hypervisor. As this is a home lab, I do not have 2x good SSDs, nor a rack machine I could put a lot of other hard drives in.
The MicroServer has four SATA ports connected to the B120i and a single SATA port for the ODD.
The whole point of running this hybrid ESXi/FreeNAS setup is to run the most VMs you can on ZFS storage. Why ZFS? You have checksums on every data block; you can set up weekly scrubs to make sure your data is not corrupt; you can enable the VMware snapshot feature in FreeNAS and then run snapshots on the ZFS dataset you provide to ESXi, so you have a filesystem-consistent, and sometimes even application-consistent, snapshot you can restore to. You have LZ4 compression, which will save you on storage capacity costs and can even improve performance in some scenarios. There are MANY benefits to running on ZFS rather than hardware RAID 1. You just have to make sure that you set a delay in ESXi after FreeNAS boots so that the NFS storage is available by the time your other VMs boot from that NFS storage.
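For what it's worth, both the compression and the scrub are one-liners on the pool (tank is an example pool name; FreeNAS also schedules scrubs from the GUI):

# Enable LZ4 compression for everything under the pool
zfs set compression=lz4 tank
# Kick off a scrub to verify every block against its checksum
zpool scrub tank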
When I lost the DOM I was able to restore ESXi and boot OmniOS off the mirrored rpool. Worked like a charm, but I had some downtime until I was able to restore ESXi. I noticed during this that my motherboard supports software RAID. In your opinion, would it be a bad idea to just software-RAID the drive that holds OmniOS and everything else?
If you decide to go Supermicro, you have quite a few options. You can buy a pre-built server like the -5028D-TN4T.cfm or you can build it yourself like I did. See:
-x10sdv-f-build-datacenter-in-a-box/
Hi Ben,
I ordered the MS8 with the G2020T Pentium. This configuration was the cheapest on Black Friday :) So I'm starting with 10GB RAM and 2x 2TB WD Reds, no VT-d.
Do you think that 10GB RAM is sufficient for 2 VMs with FreeNAS?
Price and availability in your country is certainly a consideration. If something were to break it would be easier to source a part from the vendor in your country instead of having it shipped internationally.
I did record a sequential write benchmark in my -vs-omnios-napp-it/ post. Using two 100GB DC S3700s striped for the ZIL, it looks like I got around 125MB/s with NFS on FreeNAS; OmniOS almost doubled that. OmniOS seems to get much better striping performance. With a single DC S3700 for the ZIL, the write speed was just under 100MB/s for both FreeNAS and OmniOS. -vs-bhyve-performance-comparison/
Performance aside, it turns out Intel and LSI go hand in hand. Someone on the web mentioned that the Intel SSD controller was designed by LSI. LSI HBA/RAID controllers work better with Intel drives than with Samsung or any other drives.
Yes. It seems as if the FreeNAS community is not interested in supporting VMDKs (physical RDM) on ESXi at acceptable performance levels, and continues to uphold strict requirements for greater control and visibility over raw physical disks via an HBA.
What may work even better with VMDKs is a Linux distro that supports ZFS, like Debian or the upcoming Ubuntu 16.04 LTS, because they can use the Paravirtual (PVSCSI) disk driver, which is even more efficient than the emulated LSI Logic SCSI driver.
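On Ubuntu 16.04 that should be about as simple as pulling in the ZFS userland package; a sketch, assuming the release ships ZFS as announced (the device name is an example):

# Install ZFS support on Ubuntu 16.04
sudo apt install zfsutils-linux
# Create a pool on the VMDK-backed disk
sudo zpool create tank /dev/sdb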