Monday, November 26, 2018

Synology DS1817+ Benchmarks of Spinning SATA, Spinning SATA with PCIe M.2 SSD Cache, and 2.5-Inch SATA SSDs

I am running a Synology DS1817+ NAS and was curious what impact the different drive types have in my homelab scenario.  I want to know where the bottleneck is, and whether I am really just a damn idiot for not running multi-NIC ESXi hosts and a smart switch with multiple LAG ports, etc. as people love to recommend based on no real-world data from a small homelab.

I fully admit that my homelab is not ideal, not fully optimized for speed, and this is not a scientific test of multiple VMs hammering the storage in some worst-case scenario.  This is just a baseline.  If you want to see more data, run the benchmark on your own gear under your own scenario, or publish your own data based on your own methods.

Here are the specifics of my homelab:

1. I am not using Jumbo frames
2. I am virtualized using 6th-generation Core i3-based Intel NUC ESXi hosts with single gigabit NICs.  No additional NICs are being used.  Are they my bottleneck?? (A rough ceiling for a single gigabit link is sketched right after this list.)
3. I am running a dumb, unmanaged 8-port D-Link gigabit switch.  Is this a problem?
4. I am running from a single network interface on my Synology device.
5. I do not care to run faster infrastructure or ESXi host gear, as my stuff is plenty fast, and I would like to expose the goons who think they have to have gear that is more complicated and power hungry than mine just so that their VMs can run slower than mine.
6. I am running ESXi 6.5 and connecting to the Synology via iSCSI.
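Before looking at any benchmark numbers, it helps to know roughly what a single gigabit link can deliver in the first place. The Python sketch below is back-of-the-napkin math, not a measurement; the protocol overhead figure is my own assumption for standard 1500-byte frames.

# Rough practical ceiling for iSCSI traffic over a single gigabit NIC.
# The overhead figure is an assumption, not a measurement.

LINK_BITS_PER_SEC = 1_000_000_000   # 1 GbE line rate

# Ethernet, IP, TCP, and iSCSI headers eat a slice of every frame.
# With standard 1500-byte frames (no jumbo frames, as in this lab),
# assume roughly 7% protocol overhead.
PROTOCOL_OVERHEAD = 0.07

usable_bits = LINK_BITS_PER_SEC * (1 - PROTOCOL_OVERHEAD)
usable_mb_per_s = usable_bits / 8 / 1_000_000

print(f"Practical ceiling: ~{usable_mb_per_s:.0f} MB/s")   # ~116 MB/s

Any benchmark line that flattens out around that number is hitting the NIC, not the disks.  Keep that figure in mind for the charts below.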

I want to dispel the assumption, based on theory rather than fact, that a proper home lab 'needs' enterprise-grade gear.  Enterprise-grade gear is great to learn on, but most management interfaces are available for learning in virtual environments via virtual appliances if you google around to find them, so the 'need' for true enterprise gear in your home is typically minimal.

I want to see the value in running either a PCIe-based SSD cache or SATA SSD drives.  Which is best?

This is an incredibly simple and repeatable benchmark done in Windows with the free HDTune utility.  No settings were changed within the program.  I hit the start button and ran it three times: once with the VM on spinning SATA disks, once with the VM on spinning SATA disks backed by the M.2 SSD cache on the PCIe adapter card, and once with the VM on SATA SSD drives.
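If you want to sanity-check your own datastore without Windows or HDTune, the Python sketch below is a crude analogue: a sequential read of one large file, reported in MB/s.  It is not HDTune and will not match its numbers; the file path is a placeholder, and rerunning it immediately will just hit the guest's file cache, so use a file larger than the VM's RAM.

# Crude sequential-read benchmark: a rough stand-in for an HDTune pass.
# TEST_FILE is a placeholder -- point it at a large file (bigger than
# the VM's RAM) sitting on the virtual disk you want to exercise.
import time

TEST_FILE = "bigfile.bin"       # placeholder path
BLOCK_SIZE = 1024 * 1024        # read in 1 MiB chunks

def sequential_read_mb_per_s(path: str, block_size: int = BLOCK_SIZE) -> float:
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / 1_000_000) / elapsed

if __name__ == "__main__":
    print(f"Sequential read: {sequential_read_mb_per_s(TEST_FILE):.1f} MB/s")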

In the above image, we see the utility run against a Windows VM with updated VMware Tools installed, running on 5x 7200 RPM Seagate SATA drives in a RAID 6 array with two-drive redundancy.


In the above image, we see the utility run against a Windows VM with updated VMware Tools installed, running on the same 5x 7200 RPM Seagate SATA drives in RAID 6 with two-drive redundancy, this time backed by a Synology M2D17 M.2 adapter card holding 2x Crucial MX300 275GB M.2 (2280) internal SSDs (CT275MX300SSD4) configured as a mirror with both read and write caching enabled.


In the above image, we see the utility run against a Windows VM with updated VMware Tools installed, running on 2x Mushkin REACTOR 1TB internal 2.5-inch SATA III 6Gb/s MLC 7mm SSDs (MKNSSDRE1TB).

What can we conclude from the above data?
1. You can see that the PCIe SSD cache offers faster burst rates and faster average transfer rates as well.  That is probably because the benchmark data is being served entirely from the SSD cache rather than from the spinning disks.  However, the access times are only slightly better than spinning disk, and the pure SATA SSD array wipes the floor with both of them in access time (a rough way to measure access time yourself is sketched after this list).
2. We can also conclude that the SATA SSD configuration hits a bottleneck that the PCIe cache does not.  I don't know whether that bottleneck is at the Synology or at the Mushkin drives, but it is definitely not the network, since the PCIe-cached benchmark pushes more data over the same link.
3. It is possible that the PCIe cache run is bottlenecking at the network.  I don't know, since I cannot run it on faster network infrastructure to find out.
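For anyone curious what 'access time' actually measures, it is essentially the average latency of small reads at random offsets.  The Python sketch below measures the same idea; it is not HDTune's methodology, and the file path is again a placeholder for a file much larger than the VM's RAM so the cache doesn't hide the latency.

# Rough access-time measurement: average latency of 4 KiB reads at
# random offsets.  Same idea as HDTune's access-time figure, not the
# same methodology.  TEST_FILE is a placeholder.
import os
import random
import time

TEST_FILE = "bigfile.bin"   # placeholder; use a file much larger than RAM
READS = 200
READ_SIZE = 4096            # 4 KiB, a typical small-I/O size

def average_access_ms(path: str) -> float:
    size = os.path.getsize(path)
    total = 0.0
    with open(path, "rb", buffering=0) as f:
        for _ in range(READS):
            f.seek(random.randrange(0, size - READ_SIZE))
            start = time.perf_counter()
            f.read(READ_SIZE)
            total += time.perf_counter() - start
    return total / READS * 1000

if __name__ == "__main__":
    print(f"Average access time: {average_access_ms(TEST_FILE):.2f} ms")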

Now having run this, we know that network speed matters, and it probably matters more as many VMs slam the disk subsystem of any configuration at the same time.  This is an example of a single VM hitting the storage at a time; many VMs hitting it simultaneously would expose more bottlenecks related to network speed.
However, this benchmark does tell us that in smaller home labs, getting away with a single gigabit network interface is acceptable even if your disk subsystem is really fast.
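To put a rough number on the multi-VM point, the sketch below divides the single-link ceiling from the earlier sketch by an assumed per-VM demand.  Both figures are illustrative assumptions, not measurements from my lab.

# How many busy VMs can share one gigabit link before it becomes the
# bottleneck?  Both numbers below are illustrative assumptions.

LINK_CEILING_MB_S = 116     # practical 1 GbE ceiling from the earlier sketch
PER_VM_DEMAND_MB_S = 25     # assumed sustained demand per busy VM

vms_before_saturation = LINK_CEILING_MB_S / PER_VM_DEMAND_MB_S
print(f"~{vms_before_saturation:.1f} busy VMs saturate the link")   # ~4.6

In other words, under those assumptions a handful of VMs pushing sustained I/O at the same time would flatten against the NIC long before the SSDs run out of steam, which is exactly why a single-VM benchmark like this one barely notices the network.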

