
From Cars to Data Centers: Driving the Internet of Things with NVMe

The high bandwidth and deep command queues of Non-Volatile Memory Express (NVMe) SSDs enable faster and more efficient big data analysis.
By Ulrich Hansen

Built for Fast Data, Edge Gateway Processing
IoT devices at the edge are not always directly connected to one another or to the core, and may require gateways for connectivity. A gateway serves as an aggregation point at which captured data can also be analyzed. Processing data from a myriad of local sensors at the edge, these gateways provide two main functions for the IoT applications in question: autonomous response and aggregation. By responding quickly to easy-to-handle events and filtering a deluge of data down to only the interesting bits, they reduce traffic pressure on both the backhaul communications infrastructure and core data processing.
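The gateway's two roles can be sketched in a few lines of Python. This is a minimal illustration, not a real gateway implementation; the sensor names and alarm threshold are illustrative assumptions.

```python
# Sketch of an edge gateway's two roles: autonomous response (flag alarming
# readings immediately) and aggregation (summarize the rest for the backhaul).
# Sensor names and the alarm threshold are illustrative assumptions.

def process_readings(readings, alarm_threshold=90.0):
    """Filter a batch of sensor readings into 'interesting' events
    and a compact summary sent upstream in place of the raw data."""
    alarms = [r for r in readings if r["value"] >= alarm_threshold]
    values = [r["value"] for r in readings]
    summary = {
        "count": len(readings),
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
    }
    return alarms, summary

readings = [
    {"sensor": "temp-01", "value": 21.5},
    {"sensor": "temp-02", "value": 95.2},  # triggers an autonomous response
    {"sensor": "temp-03", "value": 22.0},
]
alarms, summary = process_readings(readings)
print(len(alarms), summary["count"])  # 1 3
```

Only the one alarming reading and the three-field summary leave the gateway, rather than every raw sample, which is what relieves the backhaul.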

Similar to the sensors with which they communicate, edge gateways are available in a variety of classes to match specific user needs. The smallest edge gateways contain an embedded-class system-on-a-chip (SoC) processor that relies on flash interfaces such as UFS or e.MMC for storage. At the higher end are edge gateways with server-class CPUs, gigabytes of capacity and full NVMe compliance. Regardless of the class of gateway deployed, those units supported by NVMe accelerate data processing in two ways: by reducing I/O latency and by increasing I/O parallelism.

NVMe reduces I/O latency through its direct CPU connection, so that compliant SSDs communicate directly with the processor, without a storage controller chip or protocol conversion slowing overall performance. As a result, I/O latencies can potentially be reduced to the low tens of microseconds, versus more than a hundred microseconds for many SATA or SAS SSD deployments.
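The impact of that latency gap is easy to quantify with back-of-the-envelope arithmetic: at a queue depth of one, throughput is bounded by the reciprocal of latency. The figures below are the illustrative values from the paragraph above, not measurements of any particular drive.

```python
# Back-of-the-envelope: serial (queue depth 1) I/O rate is bounded by
# 1 / latency. Latency figures are the illustrative values from the text.

def qd1_iops(latency_us):
    """I/O operations per second achievable one-at-a-time at a given latency."""
    return 1_000_000 / latency_us

nvme_iops = qd1_iops(20)   # low tens of microseconds
sata_iops = qd1_iops(100)  # a hundred microseconds or more
print(round(nvme_iops), round(sata_iops))  # 50000 10000
```

Even before any parallelism, a 5x latency reduction yields a 5x gain in serial I/O rate.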

NVMe can also increase I/O parallelism, which matters when millions of sensor inputs are aggregated. In these scenarios, data aggregation driven by an edge gateway can involve many small parallel I/O operations. Though individual sensor reports are typically small, there can be hundreds to thousands of these reports delivered every second, requiring a very high I/O queue depth to achieve good performance. While SATA-based SSDs are limited to a maximum of 32 outstanding I/O operations at any time, and SAS-based SSDs to 256, NVMe-based SSDs can support up to 64,000 queues, each of which may contain hundreds of I/O operations. Though individual NVMe SSDs will vary in the number of queues they actually support, in general they provide far higher parallelism than SATA or SAS SSDs can deliver.
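The scale of that difference follows directly from multiplying queue count by queue depth, using the figures cited above (real devices implement far fewer queues than the protocol ceiling, so treat the NVMe number as an upper bound):

```python
# Maximum outstanding I/O per device = number of queues x depth per queue.
# SATA NCQ exposes a single queue of 32; SAS a single queue of 256.
# For NVMe we use the article's figures (64,000 queues, hundreds of
# commands each) as an upper bound; shipping SSDs implement far fewer.

def max_outstanding(queues, depth_per_queue):
    return queues * depth_per_queue

sata = max_outstanding(1, 32)
sas = max_outstanding(1, 256)
nvme = max_outstanding(64_000, 256)  # illustrative depth of "hundreds"
print(sata, sas, nvme)  # 32 256 16384000
```

Even with conservative per-queue depths, the multi-queue model puts orders of magnitude more I/O in flight, which is what lets a gateway absorb thousands of small sensor writes per second.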

Built for Big Data, Core Processing
Data transmitted from hundreds or thousands of edge gateways is converted into knowledge and actions by performing core processing in the cloud. Whereas an edge gateway may handle a single factory floor or city block, a centralized processing core is built to handle an entire company's set of factories or an entire civil infrastructure. Aggregating and processing this type of big data involves tools such as Apache Spark and Apache Hadoop, which meld clusters of servers into a single unified application stack.
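At the core of both Spark and Hadoop is a map-shuffle-reduce pattern that the framework distributes across the cluster. The single-process sketch below illustrates that pattern only; the gateway IDs and event counts are made up, and a real deployment would use the frameworks' own APIs.

```python
from collections import defaultdict

# Single-process sketch of the map -> shuffle -> reduce pattern that
# frameworks like Apache Spark and Hadoop distribute across a cluster.
# Gateway IDs and event counts are illustrative.

events = [("gateway-a", 3), ("gateway-b", 5), ("gateway-a", 2)]

# Map/shuffle: group each (key, value) pair by key.
grouped = defaultdict(list)
for key, value in events:
    grouped[key].append(value)

# Reduce: combine each key's values into a single aggregate.
totals = {key: sum(values) for key, values in grouped.items()}
print(totals)  # {'gateway-a': 5, 'gateway-b': 5}
```

The frameworks' value is running exactly this logic in parallel over data too large for any single server, with the shuffle step moving records between machines.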
