Using system memory as storage has its pros and cons. One obvious benefit is performance: storage-bound applications benefit greatly from the extreme read and write speeds, and storage latency is virtually eliminated.
There are plenty of free and paid RAM disk programs on the web. One free, open-source option is the popular ImDisk: there is no limit on how large the RAM disk can be (besides the actual amount of physical RAM) and it is easy to use. Another free program, the one we used in our testing, is OSFMount by PassMark. OSFMount also places no limit on the size of the RAM disk; the drive can be formatted as NTFS, exFAT, or FAT32, and can be presented as a physical or logical drive. A third free option, AMD's Radeon RAMDisk, only allows up to 4 GB in its free version, enough for storing temporary files, though paid versions go up to 64 GB. Finally, the paid Primo Ramdisk has many tiers suiting various needs: there is a free trial, the cheapest option allows up to a 4 GB RAM disk, and higher tiers offer up to 1 TB.
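To see the performance benefit for yourself, a quick-and-dirty throughput check like the sketch below can compare a RAM disk mount against a physical drive. This is a minimal Python sketch; the `R:\` and `C:\Temp` paths in the example are placeholders for wherever your RAM disk and a comparison folder actually live.

```python
import os
import time

def write_throughput(path, total_mb=64, chunk_mb=4):
    """Write roughly total_mb of data to a file under `path`; return MB/s."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    chunks = total_mb // chunk_mb
    target = os.path.join(path, "ramdisk_bench.tmp")
    start = time.perf_counter()
    with open(target, "wb") as f:
        for _ in range(chunks):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hit the device
    elapsed = time.perf_counter() - start
    os.remove(target)
    return (chunks * chunk_mb) / elapsed

# Example (paths are placeholders -- adjust for your own system):
# print(write_throughput("R:\\"))       # RAM disk mounted by OSFMount/ImDisk
# print(write_throughput("C:\\Temp"))   # physical drive for comparison
```

On a RAM disk the number should dwarf the physical drive's, though note that a cached-write benchmark like this is only a rough gauge compared with a proper benchmarking tool.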
We are going to start managing the IT for a local environmental consultant. One of the projects they want us to work on is a PC upgrade for their users. They use ArcGIS Pro, which is new to us (the MSP). From my research, and from talking to our new client, they appear to be heavy users of ArcGIS Pro. We know they run spatial analysis often, which it appears can be accelerated with a decent GPU. If we knew more about ArcGIS Pro, we would include more examples. We think it's safe to assume they need hardware that can handle all of the different types of workloads ArcGIS Pro can produce.
For question 1, we believe the answer is 1A (a faster clock speed, as ArcGIS Pro is mostly single-threaded). However, we don't know whether there are use cases where 1B or 1C would be the better answer. If so, which ones?
For the NVMe drives, we were planning on a 1 TB Samsung 980 Pro for the OS and a 1 TB Sabrent Rocket 4 Plus dedicated to the ArcGIS Pro files. We plan to put the Sabrent drive (ArcGIS files) in the M.2 slot connected directly to the CPU, and the Samsung drive (OS) in the M.2 slot connected to the PCH. We figure this is the best way to get the most out of local disk performance.
We're not asking for advice on computer hardware so much as how ArcGIS Pro utilizes resources and how best to optimize the hardware for this GIS software... and at what point diminishing returns set in.
And I know that will depend on usage. However, we have never used ArcGIS Pro, so we don't know what to look for specifically. Any guidance in that realm would be helpful.
If ArcGIS Pro isn't resource-hungry under any circumstances, then I guess this post would seem like a funny question. However, it does look like there are circumstances where having the right hardware for ArcGIS Pro matters.
For most basic daily operations this works fine. But when I try calculating thousands of entries at a time it bogs down, and the further I zoom out and the more that is on my screen, the more performance goes downhill, especially if I have large rasters visible.
Can you re-run that test and check whether Task Manager reports the CPU, SSD/HDD, RAM, or GPU at close to 100% utilization? From what you are telling me, I'm guessing nothing goes beyond 20%. If nothing is maxed out, please let me know which component was utilized the most.
GIS will take you on a journey that can begin as a solitary desktop endeavor and expand through a universe (multiverse) of possibilities: from single layers of data on a local machine, to terabytes or petabytes of layers and 3D imagery, to cloud resources and enterprise geodatabases, and back.
Thanks for the wiki link. I actually have skimmed through it already. It has some good pointers, especially in a VDI environment. However, I don't see the answers to questions 1, 2, or 3 in my original post.
When I import an audio file into Cubase 12 and play it on a loop cycle, then go to the top to change Transpose, Fine Tune, or Musical Mode on/off, it triggers a disk-cache overload. I have tried this in an empty project and it produces the same problem.
I have tried testing in my own projects and also in an empty project, and the problem happens regardless. Importing an audio file and playing it on a very short loop cycle while changing Transpose, Fine Tune, or Musical Mode triggers the disk-cache overload straight away.
I expected to write this blog AFTER all my virtual machines were migrated to the new hardware, but I am impatient, and I am recording enough interesting data that one big blog post would likely be really, really long.
This server runs an Octoprint server for my Sovol SV06, a Home Assistant server, a NAS that syncs an extra copy of my data from my off-site Seafile server, the staging and publishing server for our blogs, and a handful of less critical virtual machines.
If your mini server needs to store a lot of data, that is a compelling feature. You could install a cheap pair of 12-terabyte hard disks and put them in a mirror, or you could skip the redundancy and install a pair of 22-terabyte hard disks to have yourself a dense little 44-terabyte monster.
This particular Beelink mini PC has been at the top of my list the entire time I have been contemplating this. It would be a fantastic choice, and at this point in time, it would have been a better choice for my own homelab.
These go on sale for not much more than $200. They have double the single-core performance and nearly twice as much multi-core performance as my FX-8350, and best of all, you can upgrade them to 64 GB of RAM for around $100.
The only bummer is that they are a little older now, so they ship with only gigabit Ethernet, while many newer mini PCs ship with 2.5-gigabit ports. That is also one of their advantages, though: DDR4 SO-DIMMs are still a good bit cheaper than newer RAM.
This tiny CPU with a 10-watt TDP still packs a pretty good punch. It has the same single-core performance and two-thirds the multi-core performance of my old AMD FX-8350, but being a much more modern processor, the N5095 manages to push AES encryption almost twice as fast. For reference, my 12-year-old FX-8350 build pulls over 200 watts from the wall when it maxes out Tailscale speeds.
The Intel N100 is roughly 50% faster than the N5095. When the DDR4 N100 Beelink goes on sale, it is priced competitively relative to the performance of the N5095 Beelink, and if you have a use for 2.5-gigabit Ethernet, paying a bit more for that is a good value.
This is neat because you can decide just how simple or complicated you want your homelab to be. You could load all your virtual machines up on a single Ryzen 5500U box with 64 GB of RAM. You could build a little cluster of three Celeron N5095 boxes each with 32 GB of RAM.
There are a huge number of companies selling mini PCs. Many seem to be rebranded versions of the same hardware coming out of the same factory. Some have been found to ship sketchy malware along with their Windows installation.
This has been an option for a little while already, but my friend Brian Moses added the Topton N100 mini-ITX motherboard to his eBay shop this week, so I figured I ought to throw a mention of it in here.
The N100 motherboard is very similar to the N5105/N6005 motherboard Brian used in his 2023 DIY NAS build. The N100 has a lower TDP and somewhere around 40% more performance, which are both nice features. The N100 uses DDR5, which is faster but costs more.
There is already a bottom extender for a different CWWK server with the same case up on Printables. It is different from what I would have designed, but it sure looks like it would make room to mount a fan to the bottom plate!
My spare Tasmota smart plug tells me that my CWWK box sips between 8 and 9 watts at idle, and it maxes out between 25 and 27 watts with the CPU and NVMe running hard. If I could fill it with NVMe drives, it would definitely go a little higher. This works out to roughly 0.2 kWh per day at idle and up to about 0.65 kWh per day flat out.
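For reference, those daily figures are just the steady draw converted to kilowatt-hours: watts × 24 hours ÷ 1000. A one-liner makes the arithmetic easy to repeat for your own box:

```python
def kwh_per_day(watts):
    """Convert a steady power draw in watts to kilowatt-hours per day."""
    return watts * 24 / 1000

print(kwh_per_day(8), kwh_per_day(9))    # idle: 0.192 0.216
print(kwh_per_day(25), kwh_per_day(27))  # flat out: 0.6 0.648
```

Multiply by your local electricity rate per kWh to put a price tag on leaving the box running all year.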
I set up a loop to keep four openssl speed benchmarks running. On a desk in my office, the CPU stays at 2.9 GHz nearly the entire time, with a few dips to 2.7 GHz for a few seconds out of every minute. My infrared thermometer saw the top of the heatsink reach 138°F. The case is aluminum, so you can still comfortably grab it and pick it up at that temperature.
I think my basic Proxmox install is burned in and working. Now I need to tear down some logical volumes and resize the physical volume so I can encrypt my virtual-machine storage. I also want to mirror that storage volume to my external USB drive with the write-mostly flag. I have never done that, so I want to see how it works out!
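On a stock Proxmox layout, that teardown-and-mirror plan might look roughly like the sketch below. Every name and size here (pve/data, vmstore, 200G, /dev/md0, the USB disk at /dev/sdb1) is an assumption for illustration, not my real layout, and these commands are destructive, so treat this as an outline of the approach rather than something to paste in:

```shell
# Sketch only: volume/device names and sizes are placeholders.
lvremove pve/data                      # tear down the stock thin pool
lvcreate -L 200G -n vmstore pve        # recreate a smaller LV for VM storage

# Mirror the LV to a USB partition; --write-mostly tells md to prefer
# reading from the fast internal device and mostly just write to the USB disk.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/pve/vmstore --write-mostly /dev/sdb1

# Encrypt on top of the mirror, then put a filesystem inside.
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 vmstore-crypt
mkfs.ext4 /dev/mapper/vmstore-crypt
```

Putting LUKS above the md mirror means both halves of the mirror hold only ciphertext, which is handy when one half is a USB disk that could wander off.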
What do you think? Am I on the right track? Am I using the correct mini PC for my homelab server? Should I use a different box here at home and send this one away as my off-site Proxmox homelab server? Are you embracing mini PCs for your own home-server needs? Tell me about it in the comments, or stop by the Butter, What?! Discord server to chat with me about it!
CrystalDiskMark is a freeware disk benchmark utility that assesses the performance of sequential and random read/write operations of varying sizes on any storage medium, which makes it handy for evaluating the speed of different storage devices, both portable and local. It can measure sequential reads and writes as well as random reads and writes in 512 KB, 4 KB, and 4 KB (queue depth = 32) block sizes. It also supports different types of test data (random, 0 fill, 1 fill), has basic theme support, and is available in multiple languages. You can give it a shot because it is a free download. The SSD returned some pretty good performance results for us; simply compare its read and write numbers against the other drives.
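To make the sequential-versus-random distinction concrete, here is a rough Python sketch of what such a benchmark does under the hood. It is illustrative only: the OS page cache will inflate these numbers badly compared with CrystalDiskMark's measurements, and the file path and sizes are whatever you choose.

```python
import os
import random
import time

def bench_reads(path, block_size, random_access, duration=1.0):
    """Read from `path` for about `duration` seconds; return MB/s."""
    size = os.path.getsize(path)
    read_bytes = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while time.perf_counter() - start < duration:
            if random_access:
                # Random pattern: jump to an arbitrary offset before each read
                f.seek(random.randrange(0, max(1, size - block_size)))
            elif f.tell() + block_size > size:
                f.seek(0)  # sequential pattern: wrap around at end of file
            read_bytes += len(f.read(block_size))
    elapsed = time.perf_counter() - start
    return read_bytes / elapsed / (1024 * 1024)
```

For example, create a test file a few hundred megabytes in size and compare `bench_reads(path, 1024 * 1024, False)` against `bench_reads(path, 4096, True)`; on a hard disk the 4 KB random figure will be dramatically lower, while a good SSD narrows the gap.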