Blackmagic Disk Speed Test Mac Direct Download

Cheyenne Reinecke
Jan 20, 2024, 7:53:38 PM
to mispramawan

So I recently bought a Mid-2012 MacBook Pro (2.6 GHz quad-core) that was maxed out with an SSD and 16GB of RAM. It feels blazing fast, and my Blackmagic Disk Speed Test results measure around 400-450 MB/s for both read and write. This seems on par with most YouTube videos from people with a similar setup after upgrading their Mid-2012 MacBook Pro. (Some RAID 0 setups reach around 900 MB/s, from what I've seen on YouTube as well.)

Hi Adrian,

> First thing to note is that 4x 10Gbps does not make 40Gbps... all it does is make NIC teaming into 4x 10Gbps channels (if your OS has the respective NIC teaming software).

I wanted to determine the unit's overall performance under SMB, not just that of a single 10G link.

> Then what are the NIC cards on your NAS? I would think that most NAS have NIC teaming solutions out of the box.

The NAS has 2x 1GbE (RJ45) and 2x 10G (SFP+) onboard, and in addition I have installed two expansion cards with 2x 10G RJ45 each. In the test above, no LAG is configured, neither on the NAS nor on the workstation.



> Then is the NAS using 8x 16TB SATA HDDs?

The upper test runs on the unit with 8x SSDs; there is no way to hit > 4,000 MB/s with 8x HDDs.
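
(Rough back-of-envelope in Python, using assumed per-drive figures rather than anything measured on this NAS, just to show why 8x SATA SSDs can plausibly exceed 4,000 MB/s while 8x HDDs cannot:)

# Assumed typical per-drive sequential rates, not measurements from this unit.
SATA_SSD_MBPS = 550   # practical ceiling of a SATA 6Gb/s SSD (assumption)
SATA_HDD_MBPS = 200   # optimistic sequential rate for a 16TB HDD (assumption)
DRIVES = 8

print(f"8x SSD aggregate: ~{DRIVES * SATA_SSD_MBPS} MB/s")  # ~4400 MB/s
print(f"8x HDD aggregate: ~{DRIVES * SATA_HDD_MBPS} MB/s")  # ~1600 MB/s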

> What it means for NIC teaming or NLB (ignoring the SATA write speeds)...
> - if you transfer a huge file like a 10TB file, only one 10Gbps NIC will be used... for example it may take 10 minutes to transfer the file.
> - if you transfer many smaller files (100 files, totalling around 10TB), different files will use different 10Gbps NICs... for example it may take 3 minutes to transfer the files. But this only works if the application can use multiple threads (like Robocopy) or multipath (like the iSCSI protocols).

Adrian, I know the rules of multipath and LAG quite well, and under normal conditions I would agree with you 1:1 on this point.
But my workstation is currently doing "multipath" when transferring a single file, although that shouldn't work at all. That's why I opened this post.
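
To illustrate the rule Adrian describes, here is a minimal Python sketch of per-flow hashing in a typical layer-3+4 LAG policy; the link names, addresses, ports and the hash itself are made up for the example and are not what any particular switch or NAS uses:

import hashlib

LINKS = ["10G-A", "10G-B", "10G-C", "10G-D"]   # four hypothetical LAG members

def pick_link(src_ip, dst_ip, src_port, dst_port):
    """Hash the flow tuple and map it onto one LAG member (layer-3+4 style)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return LINKS[digest % len(LINKS)]

# One TCP connection (one classic SMB file copy) always lands on the same link...
print(pick_link("192.168.1.10", "192.168.1.20", 50123, 445))
# ...while several parallel connections may, or may not, spread across the links.
for port in (50124, 50125, 50126):
    print(pick_link("192.168.1.10", "192.168.1.20", port, 445))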

Regards from Germany

Alex

Hi Alex,

> You need to set up iSCSI connections and MPIO to be able to write at 40G.

In this test I want to determine the file-level performance of the NAS unit, not the block-level performance.
The block-level test comes later.

> Otherwise your max write speed will be 10G per file transfer.

In my opinion it would be a waste to break the case; just buy an external USB hard drive for that. For the My Book performance, make sure the latest firmware is installed and check the link below for some recommendations on how to improve the transfer speed.

It's incredibly easy to use. Disk Speed Test writes large chunks of data to your chosen disk and then reads that data, giving you a real-world read/write speed in MB/s. The program then tells you what kind of uncompressed video that drive will be able to handle and allows you to save the results as a screenshot.

As you can see from the image above, my magnetic hard drive-equipped MacBook Pro isn't going to win any speed awards. It also couldn't handle anything above uncompressed SD video according to the app -- but then again that's not the sort of thing I would even dream of trying. If you're looking to capture uncompressed video direct to a disk, Disk Speed Test will give you an indication of whether it's going to be up to the job.

So, if you're curious about your hard disk speed, regardless of whether it's just a simple magnetic hard drive, an internal SSD, a network mounted disk array, or even a beast of a Thunderbolt SSD drive -- Disk Speed Test will quickly and easily answer that for you with just one click.

So I have my internal HDD, an HGST HTS721010A9E630, and my Samsung T3 500GB SSD. I ran a speed test on both drives: the internal HDD came in at 121MB/s write and 129MB/s read. On my external SSD connected via my monitor's USB hub I get 362MB/s write and 404MB/s read. Is this normal? I know SATA3 bottlenecks. I also tested my external 3TB HDD connected via USB 3.0 and got 144MB/s write and 151MB/s read, and for lolz I tested my NVMe SSD... 948MB/s read and 1410MB/s write. It did spike to 1200MB/s on both, but it slowed due to the way Blackmagic's Disk Speed Test works.

Your SSD probably uses 4 PCIe lanes to communicate with the computer, giving it a maximum speed of 2 GB/s (PCIe 2.0) or about 3.9 GB/s (PCIe 3.0) in each direction. The benchmark shows roughly 1400 MB/s when writing to the SSD, which indicates the use of at least 2 PCIe 3.0 lanes, or 4 PCIe 2.0 lanes.
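
As a rough sanity check of that reasoning in Python (the per-lane rates below are nominal figures after encoding overhead, assumed rather than taken from this SSD's spec sheet):

# Nominal per-lane throughput after encoding overhead (assumptions, not specs).
PCIE2_PER_LANE_MBPS = 500   # PCIe 2.0, 8b/10b encoding
PCIE3_PER_LANE_MBPS = 985   # PCIe 3.0, 128b/130b encoding

observed = 1410  # MB/s, the NVMe write result quoted above

for gen, per_lane in (("2.0", PCIE2_PER_LANE_MBPS), ("3.0", PCIE3_PER_LANE_MBPS)):
    for lanes in (1, 2, 4):
        enough = "yes" if lanes * per_lane >= observed else "no"
        print(f"PCIe {gen} x{lanes}: {lanes * per_lane} MB/s -> sustains {observed} MB/s? {enough}")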

However, note that for some storage media the size of the file is not as important as the total bytes written during a short time period. For example, some SSDs have significantly faster performance with pre-erased blocks, or they may have a small SLC flash area that's used as a write cache, and the performance changes once the SLC cache is full (e.g. Samsung EVO series drives, which have a 20-50 GB SLC cache). As another example, Seagate SMR HDDs have about 20 GB of PMR cache area with pretty high performance, but once it gets full, writing directly to the SMR area may cut the performance to 10% of the original. And the only way to see this performance degradation is to first write 20+ GB as fast as possible and continue with the real test immediately afterwards.

Of course, this all depends on your workload: if your write access is bursty with longish delays that allow the device to clean its internal cache, shorter test sequences will reflect your real-world performance better. If you need to do lots of IO, you need to increase both the --io_size and --runtime parameters. Note that some media (e.g. most cheap flash devices) will suffer from such testing because the flash chips are poor enough to wear down very quickly. In my opinion, if a device is poor enough not to handle this kind of testing, it should not be used to hold any valuable data in any case. That said, do not repeat big write tests thousands of times, because all flash cells suffer some level of wear from writing.

Note that fio will create the required temporary file on first run. It will be filled with pseudorandom data to avoid getting too-good numbers from devices that try to cheat in benchmarks by compressing the data before writing it to permanent storage. The temporary file will be called fio-tempfile.dat in the above examples and stored in the current working directory, so you should first change to a directory that is mounted on the device you want to test. fio also supports using the raw device as the test target, but I definitely suggest reading the manual page before trying that, because a typo can overwrite your whole operating system when using direct storage media access (e.g. accidentally writing to the OS device instead of the test device).
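
For reference, here is one illustrative way to drive such a sustained-write test from Python; it assumes fio is installed on a Linux box (libaio engine), and the parameter values are examples rather than recommendations:

import subprocess

cmd = [
    "fio",
    "--name=seq-write",
    "--filename=fio-tempfile.dat",  # created in the current working directory
    "--rw=write",
    "--bs=1M",
    "--size=500M",        # size of the temp file
    "--io_size=10G",      # total bytes written; large enough to blow past SLC/PMR caches
    "--runtime=60",       # seconds; raise together with --io_size for longer sustained tests
    "--direct=1",         # bypass the page cache
    "--ioengine=libaio",
    "--iodepth=32",
    "--end_fsync=1",      # flush at the end so the result reflects data on disk
    "--group_reporting",
]
subprocess.run(cmd, check=True)

With --io_size well above any SLC or PMR cache size and --direct=1, the sustained figure should sit much closer to the drive's real write speed than a short burst test would.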

I would not recommend using /dev/urandom because it's software-based and slow as a pig. Better to take a chunk of random data from a ramdisk. For hard disk testing randomness doesn't matter, because every byte is written as-is (also on an SSD with dd). But if we test a deduplicated ZFS pool with pure zeros versus random data, there is a huge performance difference.
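
A minimal Python sketch of that "chunk of random data kept in RAM" idea, with made-up file name and sizes; it reuses one pre-generated buffer so the random number generator never becomes the bottleneck:

import os, time

CHUNK = os.urandom(16 * 1024 * 1024)      # 16 MiB of random data, generated once
TOTAL = 1 * 1024 * 1024 * 1024            # write 1 GiB in total

start = time.monotonic()
with open("write-test.bin", "wb", buffering=0) as f:
    written = 0
    while written < TOTAL:
        f.write(CHUNK)                    # reuse the same incompressible buffer
        written += len(CHUNK)
    os.fsync(f.fileno())                  # make sure the data actually reaches the disk
elapsed = time.monotonic() - start
print(f"~{TOTAL / elapsed / 1e6:.0f} MB/s")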

This is useful to get information about how a disk performs for a particular application or workload. The output will show you read/write speed per process, and total read/write speed for the server, similar to top.

AJA System Test by AJA Video Systems measures system disk performance using video test files of different resolutions, sizes and codecs. We selected the standard 4K UltraHD resolution and the ProRes 4444 codec to see how this drive does with regard to video rendering performance; ProRes 4444 is used for heavy effects work and deep color projects. We changed the test file size from the default 1GB to 16GB to better represent real-world usage. The results from this benchmark help content creators with rendering expectations, throughput, file read/write expectations for transfers and that kind of thing.

Benchmark Results: The AJA System Test had the Seagate One Touch 1TB drive topping out at 344 MB/s read and 239 MB/s write. The write performance pattern showed throughput dipping below 70 MB/s during the long sustained write test, so the direct-to-NAND write speed is fairly low on this drive.

Taking into consideration that this NAS has no 10G option, but rather dual 1G connectivity, a standard speed test should also include network transfers. In the following example, a single 72GB file is copied from one NAS to the DS224+ over a single 1G lane.
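
A quick sanity check in Python of what a single 1G lane can deliver for that 72GB copy, assuming roughly 113 MB/s of practical 1GbE throughput after protocol overhead:

FILE_GB = 72
LINK_MBPS = 113  # MB/s, assumed real-world 1GbE throughput after overhead

seconds = FILE_GB * 1000 / LINK_MBPS
print(f"~{seconds / 60:.1f} minutes for a {FILE_GB} GB file")  # roughly 10-11 minutes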

I checked several options for a replacement (e.g. the Transcend JetDrive 820 or the more recent 850), but I am worried that disk write and read speeds will be negatively affected. With the Apple SSD my MacBook scores 1033 MB/s and 1950 MB/s in the Blackmagic Disk Speed Test.

All of these tests leverage the common vdBench workload generator, with a scripting engine to automate and capture results over a large compute testing cluster. This allows us to repeat the same workloads across a wide range of storage devices, including flash arrays and individual storage devices. Our testing process for these benchmarks fills the entire drive surface with data, then partitions a drive section equal to 1% of the drive capacity to simulate how the drive might respond to application workloads. This is different from full-entropy tests, which use 100% of the drive and take it into a steady state. As a result, these figures will reflect higher sustained write speeds.

But as resolution and speed increase, and as Sony will probably adopt direct-stream tech a la the Z9 for a true live EVF, they will have no choice but to change their architecture and move towards the fastest possible cards, meaning Type B.
