Why Will NVMe and NVMeOF Dominate the Land? 

Written by Erik Weaver

Published on April 15, 2020

Did you know the new Avid Media Composer will support a 32-bit full-float color pipeline for finishing and delivery?

Current HDR10 requires a minimum of 10-bit color, which, as mentioned, requires 1.2 GB/s of throughput. Now consider 32-bit! What will that require of your pipeline in throughput at that velocity and volume?

Side Note: There are four V’s in data: volume, variety, veracity, and velocity.  Well, really five if you add value, but that is another conversation.  For this conversation we will focus on just one: velocity.  Velocity is the frequency of incoming data that needs to be processed.
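To make the velocity concern concrete, here is a rough back-of-the-envelope sketch of uncompressed video throughput. The resolution, frame rate, and channel count below are illustrative assumptions; real pipelines vary with chroma subsampling, padding, and codec:

```python
def throughput_gbps(width, height, bits_per_channel, channels=3, fps=24):
    """Rough uncompressed video throughput in GB/s (no subsampling or padding)."""
    bits_per_frame = width * height * channels * bits_per_channel
    return bits_per_frame * fps / 8 / 1e9

# UHD (3840x2160) at 24 fps, illustrative only:
ten_bit = throughput_gbps(3840, 2160, 10)
float32 = throughput_gbps(3840, 2160, 32)
print(f"10-bit: {ten_bit:.2f} GB/s, 32-bit float: {float32:.2f} GB/s")
print(f"32-bit needs {float32 / ten_bit:.1f}x the throughput")
```

Whatever the exact figures for your format, the jump from 10-bit to 32-bit multiplies the required throughput by 3.2x, and that multiplier lands squarely on your storage pipeline.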

NVMe Offers 6x the Speed!

Let’s get into the weeds about why traditional hardware is bottlenecked on the velocity side.

A typical CPU with 64 cores uses one of three ways to get data stored on SSDs: SATA, SAS, or NVMe. SATA and SAS are aging protocols designed to work with hard drives, whereas NVMe is designed for SSD bandwidth.

  • SATA: 600 MB/s per lane
  • SAS: 1,000 MB/s per lane
  • NVMe: 6 GB/s per lane
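Taking those per-lane figures at face value, here is a quick sketch of what they mean for moving a single large media file. The 60 GB file size is a hypothetical example; real-world throughput also depends on lane counts, controllers, and the media itself:

```python
# Per-lane bandwidth figures quoted above, in GB/s
LANE_BW_GBPS = {"SATA": 0.6, "SAS": 1.0, "NVMe": 6.0}

def transfer_seconds(file_gb, interface):
    """Seconds to move a file at the quoted per-lane bandwidth."""
    return file_gb / LANE_BW_GBPS[interface]

# A hypothetical 60 GB raw camera file:
for iface in LANE_BW_GBPS:
    print(f"{iface}: {transfer_seconds(60, iface):.0f} s")
# SATA: 100 s, SAS: 60 s, NVMe: 10 s
```

Even on a single lane, the NVMe transfer finishes in a tenth of the SATA time, which is where the headline speedup claims come from.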

Getting Geeky

Let’s get really geeky. Beyond all of the above, you have several bottlenecks, starting with the HBA, and I don’t mean the Hollywood Beer Alliance. I mean host bus adapters: controller chips. The HBA is a bottleneck because it must translate SATA and SAS to PCI Express, which your CPU natively speaks. NVMe was defined to speak PCI Express natively, going straight to the CPU. No translation bottleneck!

The HBA also limits Interrupt Request (IRQ) handling. As applications request data, the HBA can interrupt only one of the 64 cores, so every request funnels through a single core, causing delays. NVMe can handle 256.

In addition, SATA and SAS have something called command queues, which let the device work on data in parallel with the CPU. SATA is limited to a single queue of 32 commands and SAS to 256, while NVMe supports up to 64K queues of up to 64K commands each. Without deep queues, the CPU cores stand idle waiting on storage.
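The queue-depth numbers matter because storage latency hides behind parallelism: the more commands you can keep in flight, the higher your IOPS ceiling. A toy model using Little's law (the 0.1 ms device latency is a hypothetical figure, and this ignores controller and interrupt overhead entirely):

```python
def max_iops(queue_depth, latency_ms):
    """Little's law: concurrency / latency bounds achievable throughput."""
    return queue_depth / (latency_ms / 1000)

# Assume 0.1 ms per-command device latency (illustrative):
print(f"SATA (QD 32):  {max_iops(32, 0.1):,.0f} IOPS ceiling")
print(f"SAS  (QD 256): {max_iops(256, 0.1):,.0f} IOPS ceiling")
```

With NVMe's thousands of deep queues, the queue depth stops being the binding constraint; the device's own media speed becomes the limit instead.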

Needless to say, all of this adds up to bottlenecks for SATA and SAS, and removing them is what lets us see roughly 10x the performance from NVMe.

What’s crazy is this technology can run on commodity hardware or cost-effective technology solutions.  In 2018/2019 a glut of SSDs came on the market, dropping prices by as much as 70% and making this technology much more affordable.

Ok, but what is NVMeOF? It’s NVMe over Fabrics. It allows the NVMe protocol to run over a network link instead of a server’s internal bus, giving networked access to the storage. The fabric can be Ethernet, Fibre Channel, or InfiniBand.

If you weigh 10x performance and declining cost against the growing velocity of data, you will see why NVMe and NVMeOF will dominate the community.

NVMeOF Meets Key-Value Store 

Stellus Technologies built their software on NVMeOF technology and combined it with a Key-Value Store to create Key-Value Over Fabrics technology for even added benefits to the customer. Check it out, along with information about the Stellus Data Platform.

 
