I'm trying to set up a RAID 5 array. I've got my motherboard configured to use Intel RST. All 3 of my new drives show up in the Storage System View of the Intel Optane Memory and Storage Management app.
However, the option to "Create RAID Volume" is grayed out.
I've attached the system diagnostics file and a screenshot.
We are going to work on this issue, in the meantime, could you please confirm if you have enabled RAID in BIOS as well? In case you have not, you may be able to do that by following either of these methods:
The Intel Driver & Support Assistant tells me that an update is available for the Intel RST driver. The new version is 16.8.3.1003, dated 3/19/2019. The update fails to install: when it attempts to install, I get a warning that a newer version is already installed. Logs of the install are attached in case they might help.
I did check the previous post regarding selecting drives in the BIOS via the spacebar in order to add them to a RAID array. In my BIOS, nothing happens when I press the spacebar with a drive highlighted. Maybe my BIOS works differently; I have no idea.
Ah, this makes more sense. That is just a BIOS display screen, not the actual RAID console. Once you have the SATA Mode parameter set to RAID, you should be able to press the RST Option ROM hotkey (typically Ctrl+I) during BIOS POST and enter the actual Intel RAID BIOS extension.
I have 3 new 2 TB SATA hard drives in a fresh high-end system build. Before creating the array, the 3 drives were benchmarked individually, each averaging approx. 220 MB/s read and 200 MB/s write (sequential, CrystalDiskMark). This was to verify the drives performed to spec.
However, when I created the RAID 5 array in the Optane software, initialization is EXTREMELY slow. I know this is a process that takes a while normally. However, after 16 hours it is only at 7%. At this rate I'm looking at nearly 10 days for initialization.
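As a sanity check on the "nearly 10 days" figure, a simple linear extrapolation from 7% complete in 16 hours (a simplifying assumption; the initialization rate may not stay constant) can be sketched as:

```python
def eta_days(hours_elapsed: float, fraction_done: float) -> float:
    """Extrapolate total duration, assuming a constant initialization rate."""
    total_hours = hours_elapsed / fraction_done
    return total_hours / 24

# 7% done after 16 hours:
print(round(eta_days(16, 0.07), 1))  # -> 9.5 (days), consistent with "nearly 10 days"
```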
In order to furnish you with precise information in response to your queries, I will undertake additional research. As soon as I have obtained the requested information, I will promptly communicate it to you.
We greatly appreciate your patience and sincerely apologize for the delay in our response. Your inquiries have been diligently forwarded to our specialized RAID department, ensuring that we provide you with the most precise and comprehensive information. Below, you will find their responses:
We sincerely apologize for the inconvenience caused by our previous response. Unfortunately, providing a specific timeframe is challenging due to the multitude of factors involved, as previously mentioned.
The time required to create a RAID setup is influenced by a wide array of variables, including the number and capacity of drives, the type of RAID controller or software being used, drive speeds, RAID level and configuration, background processes, RAID controller cache, file system format, and more.
Even when estimating in an ideal scenario where both read and write processes occur simultaneously and efficiently, the total time for RAID 5 creation is approximately 39,098.18 seconds, roughly equivalent to 10.86 hours. However, please note that this is a rough estimate, and the actual duration can fluctuate due to factors such as the efficiency of your RAID controller, system workload, and potential bottlenecks.
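One way such an ideal-case estimate can be sketched, assuming initialization amounts to one full sequential read pass plus one full write pass over each member disk, with all members working in parallel (the throughput figures below are the benchmark numbers quoted earlier in the thread; a different model of the initialization process will yield a different total, which is why such estimates vary):

```python
def raid5_init_estimate_hours(capacity_tb: float, read_mb_s: float, write_mb_s: float) -> float:
    """Ideal-case RAID 5 initialization time: one full sequential read pass
    plus one full write pass over a member disk, all members in parallel."""
    capacity_mb = capacity_tb * 1_000_000  # decimal TB -> MB
    seconds = capacity_mb / read_mb_s + capacity_mb / write_mb_s
    return seconds / 3600

# 2 TB drives benchmarked at ~220 MB/s read and ~200 MB/s write:
print(round(raid5_init_estimate_hours(2, 220, 200), 2))  # -> 5.3 (hours) under these assumptions
```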
However, there is a significant difference between 10.86 hours and 10 DAYS. No degree of variation in those variables justifies that, particularly considering most of the items listed are not variables at all. Here we know the controller, drive speeds, raid level, cache, etc. That only leaves background processes and file system format.
Background processes shouldn't really matter, since you already said the process is almost entirely handled by the controller. Regardless, the computer was effectively idle for the vast majority of those 10 days. Furthermore, the drives were (and still are) completely empty with zero utilization. As for the file system, all settings were Windows 11 defaults (NTFS). It's also worth reiterating that this is a brand new high-end system using Intel's supposedly top-of-the-line PC chipset.
I'm going to stop following this thread now. I did not expect there to be any immediate solution, but I also did not expect a flat out denial that there was even anything abnormal about the situation.
I apologize for any frustration or confusion caused. Since you've expressed your decision to discontinue the discussion on this topic, I will respect your request and close the thread. If you ever have further questions or need assistance in the future, please don't hesitate to reach out. Have a great day!
Another potential reason for slow RAID 5 initialization is if the system is performing background tasks or disk checks during the initialization. This could impact the speed.
We've been running an install of Splunk for approx 3.5 years now (originally starting with a Splunk 2.0 install and continuously migrating forward), and we're finally hitting a point where we'll be able to reconfigure our storage setup for Splunk in the next few weeks. The hardware that we have/will be working with is as follows:
RAID 10 is going to give superior performance to RAID 5 and RAID 6 in almost every workload. You don't have reads to recompute parity for every write, and you have more potential spindles from which to complete a read. See -well-does-a-indexer-configured-w-raid-5-or-6-perform for additional info.
Another general rule is that more memory in your indexer node is going to make for better performance. If you can put 64GB (or more!) of memory in your indexer and let most of it be used for the OS filesystem cache, that will help.
How you configure your hot/warm/cold bucketing in Splunk can also affect search performance. One big /opt/splunk filesystem is probably not ideal. It might be worth your effort to take a small set of your drives and put them aside into a smaller RAID group and keep your hot buckets there while putting your warm/cold buckets in the larger RAID group - that way you don't have disk contention between your indexing of new data and your searching of older data.
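A minimal sketch of what that separation might look like in `indexes.conf`, assuming hypothetical mount points `/mnt/fast_raid` (the small, dedicated RAID group) and `/mnt/big_raid` (the larger RAID group) and a hypothetical index name. Note that Splunk keeps hot and warm buckets together under `homePath`, so in practice the split is hot/warm vs. cold:

```ini
[my_index]
# Hot and warm buckets on the small, dedicated RAID group
homePath = /mnt/fast_raid/splunk/my_index/db
# Cold buckets on the larger RAID group
coldPath = /mnt/big_raid/splunk/my_index/colddb
thawedPath = /mnt/big_raid/splunk/my_index/thaweddb
```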
Also, consider distributed search. 20GB / day is on the smaller side for distributed search, but the partitioning of your dataset across multiple machines could make search substantially faster. See -a-server-or-two/
Finally, I would expect searches across a narrow range of time to still happen relatively quickly. If you have searches over a narrow time range that still run for a long time, you may not have your bucketing configured optimally (e.g., each bucket contains a large time range of data). You might try the dbinspect search command to check the min and max timestamp in each bucket -- and it might not be a bad idea to contact support and discuss this with them.
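For example, a search along these lines (field names per the dbinspect documentation; adjust the index name to yours) lists per-bucket time ranges so you can spot buckets spanning too wide a window:

```text
| dbinspect index=main
| eval span_days = (endEpoch - startEpoch) / 86400
| table bucketId, startEpoch, endEpoch, span_days
```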
Next, check to make sure your search is optimal. Use the Search Profiler to identify the bottlenecks. If you are using flashtimeline for reporting, stop! Use charting, and if speed is paramount uncheck the "enable preview" checkbox.
To answer your question, here is my opinion based on several benchmarks that I performed last year. RAID 5 and RAID 10 offer "more or less" the same read performance. Sorkin correctly points out that there is a reduction in aggregate throughput which can affect RAID 5 reads, thus the "more or less". RAID 10 offers better write performance and thus better concurrency in read/write operations. RAID 5 provides more usable space than RAID 10.
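The space trade-off mentioned above is easy to quantify. A sketch using the conventional rules of thumb (RAID 5 sacrifices one drive's capacity to parity; RAID 10 mirrors half the set):

```python
def usable_tb(level: str, n_drives: int, drive_tb: float) -> float:
    """Usable capacity under conventional rules of thumb:
    RAID 5 sacrifices one drive to parity; RAID 10 mirrors half the set."""
    if level == "raid5":
        return (n_drives - 1) * drive_tb
    if level == "raid10":
        return (n_drives // 2) * drive_tb
    raise ValueError(f"unsupported level: {level}")

# Six 2 TB drives:
print(usable_tb("raid5", 6, 2))   # -> 10 TB usable
print(usable_tb("raid10", 6, 2))  # -> 6 TB usable
```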
It shouldn't matter too much unless you have very precise search speed requirements and/or are indexing excessively (>=100GB/day/indexer). Faster disks are another performance consideration for all kinds of searches, from sparse to dense. Faster cores will make search faster, while more cores will provide greater search concurrency. Distributing across more Splunk servers is the way to go, as dwaddle mentioned.