Really, Really Fast Cloud Storage

For a long time, CPUs and memory got faster, but we were stuck with spinning disks that topped out at 15,000 RPM. Then we got SSDs, which were orders of magnitude faster, but still ultimately limited by the throughput of the controller and interface in front of them. NVMe changes all of that. Do you know what NVMe storage is? If you don't, you should read my column this month at Redmond Mag.

Did you know Azure now has VMs available with direct-attached NVMe storage? I was looking at a client's servers this week, and I was going to write a post about how storage latency in the cloud (specifically Azure with Premium Storage) is within 90-95% of most on-premises environments, based on what I was seeing on my customer's system. Then I met the Lv2 series of VMs.

Just to give you a point of reference, this customer system is running on one of the largest VM types in Azure, the GS5. We are running with 16 cores (of 32) for licensing reasons, and we have 18 TB of premium storage presented to each VM in a single Storage Spaces pool, which gives us plenty of IOPS (~80,000). Remember, though: premium storage is SSD, but it is network-attached, so every I/O has to cross the network to get back to the machine. Even so, I've been seeing single-digit millisecond latency, as viewed from SQL Server's sys.dm_io_virtual_file_stats DMV. I know there are better ways of measuring I/O performance, using Performance Monitor or looking at the storage tier itself, but when my SQL Server numbers are that good, I generally don't care.
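For reference, this is more or less how I check those numbers. A minimal sketch from PowerShell, assuming the SqlServer module is installed; the instance name is a placeholder, and pasting the embedded query straight into SSMS works just as well.

# Pull average read/write latency per database file from the DMV.
# "localhost" is a placeholder for your instance name.
Import-Module SqlServer

$query = @"
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id
ORDER BY avg_read_latency_ms DESC;
"@

Invoke-Sqlcmd -ServerInstance "localhost" -Query $query | Format-Table -AutoSize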

I wrote my column with a focus on some of the newer persistent memory technologies, but some folks were kind enough to tell me that NVMe drives are already available in Azure. Microsoft is kind enough to give MVPs a nice Azure credit allowance, so I spun up an L64s_v2 VM. The documentation mentioned that the NVMe storage was available, but when I logged into the VM, I didn't see a volume mounted. I looked in Storage Spaces and found this magic.

[Screenshot: the NVMe disks listed in Storage Spaces]

You'll need to use Storage Spaces within Windows to create a storage pool, and then create a virtual disk on top of it. To drive the benchmark, I followed this post from Glenn Berry on how to use the DiskSpd tool from Microsoft; if you ever used SQLIO to validate storage in the past, DiskSpd is the more modern version of that tool. A sketch of the commands is below.
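In case it's useful, here's roughly what that looks like end to end. This is a minimal sketch, not the exact commands from my run: the pool, disk, and volume names, the drive letter, and the DiskSpd parameters (adapted from Glenn's post) are all illustrative.

# Pool the NVMe devices, carve a striped virtual disk out of them,
# and format a volume. All friendly names and the drive letter are
# illustrative.
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName "NVMePool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

New-VirtualDisk -StoragePoolFriendlyName "NVMePool" -FriendlyName "NVMeStripe" `
    -ResiliencySettingName Simple -NumberOfColumns $disks.Count -UseMaximumSize

Get-VirtualDisk -FriendlyName "NVMeStripe" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter F -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "NVMeData"

# Then point DiskSpd at the new volume. These flags follow Glenn's
# examples (8 KB random I/O, 25% writes, 8 threads, latency stats
# via -L); my actual run may have used slightly different parameters.
.\diskspd.exe -b8K -d60 -o8 -t8 -h -r -w25 -L -c20G F:\test.dat

Anyway, onto the results.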

thread         bytes      I/Os     MiB/s       IOPS  AvgLat(ms)  LatStdDev
     0    9654157312   1178486    306.90   39283.16       0.101      0.088
     1    6744514560    823305    214.40   27443.70       0.145      0.088
     2    9005244416   1099273    286.27   36642.71       0.108      0.134
     3    9004244992   1099151    286.24   36638.64       0.108      0.147
     4    9141108736   1115858    290.59   37195.54       0.107      0.088
     5    9164423168   1118704    291.33   37290.41       0.107      0.087
     6    9122758656   1113618    290.01   37120.88       0.107      0.086
     7    9144197120   1116235    290.69   37208.11       0.107      0.086
 total   70980648960   8664630   2256.43  288823.14       0.110      0.10


Two key things to look at in the table above. First, 288,000 IOPS: holy crap, Batman, that's a lot, and I think I could get even more by running additional threads. Second, the latency: that's 0.11 ms average latency, and that decimal point isn't a typo. Really good traditional SANs deliver 1-3 ms latencies, and the crappy array your boss scored a deal on from Ron's House of Unsupported Hardware is probably pushing 20-30 ms if you are lucky. This storage has nearly 300x lower latency than that off-brand array your boss got from Ron.

[Image: sleazy used-car salesman meme]

Don’t buy your storage from Ron—go to Azure, AWS, or one of the many vendors offering this amazing technology.
