July 22, 2020

Persistent Memory Can Change the Way Enterprises Navigate Advanced Analytics

Alper Ilkbahar

175 zettabytes. That is IDC’s prediction for how much data will exist in the world by 2025. This data is generated by millions of devices, everything from the cell phone in my pocket and the PC in my home office to the factory down the road leveraging IoT and automation. While many enterprises are unlocking the value of their data through advanced analytics, others struggle to create value in a cost-effective way.

Imagine the difference between going to your desk for a piece of information, versus going to the library, versus traveling from California to Intel’s campus in Oregon, or even all the way to Mars or Pluto. These distances illustrate the huge chasm in latency between memory and storage in many of today’s software and hardware architectures, and as the datasets used for analytics continue to grow, the limits of DRAM capacity become more apparent.

Keeping hot data close to the CPU has become increasingly difficult in these capacity-limited situations. For the past 15 years, software and hardware architects have faced a painful tradeoff: put all their data in storage (SSDs), which is slow, or pay high prices for memory (DRAM). Over the years, this decision has come to be taken as a given. So how can enterprises bridge the gap between SSDs and DRAM, shortening the distance between where data is stored and where it is processed so that it is readily available for analytics?


Persistent memory solves these problems by providing a new tier for hot data between DRAM and SSDs. This new tier allows an enterprise to architect either two-tier memory applications or two-tier storage applications. While tiering is not a new concept, a tier that combines memory and storage capabilities lets architects match the right tool to the workload. The result is reduced wait times and more efficient use of compute resources, allowing enterprises to drive cost savings and significant performance gains while keeping more tools in the toolboxes that support their digital transformations. Enterprises will also benefit from innovations and discoveries as the software ecosystem evolves to support persistent memory.

Applications for Advanced Analytics and Persistent Memory

Persistent memory is particularly useful for enterprises looking to do more with their data and affordably extract actionable insights to make quick decisions. The benefits of persistent memory are especially valuable for industries in the midst of digital transformation, like financial services and retail, where real-time analytics provide tremendous value.

For financial services organizations, real-time analytics workloads could include real-time credit card fraud detection or low-latency financial trading. For online retail, real-time data analytics can speed decisions to adjust supply chain strategies when there is a run on certain products, while at the same time immediately generating new recommendations to customers to shape and guide their shopping experience.

Persistent memory can also expedite recommendations for the next three videos to watch on TikTok or YouTube, keeping consumers engaged longer. In these scenarios, real-time analytics lets these organizations respond to their end users almost instantaneously, improving the end-user experience and enabling the business to achieve a better return on investment. While these real-time analytics applications would be possible without persistent memory, maintaining the same level of performance and latency would be costly.

Architecting for Persistent Memory

For those looking for off-the-shelf solutions without application changes, the easiest way to adopt persistent memory is to use it in Memory Mode, which delivers large memory capacity more affordably, with performance close to that of DRAM depending on the workload. In Memory Mode, the CPU memory controller sees all of the persistent memory capacity as volatile system memory (without persistence), while using DRAM as a cache.
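Because Memory Mode is configured at the platform level, applications require no changes: an ordinary allocation transparently draws on the full persistent memory capacity, with the memory controller managing the DRAM cache. A minimal sketch in C (the 200 GB buffer and 512 GB capacity are hypothetical figures, not from this article):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* 200 GB: larger than the DRAM in this hypothetical machine,
         * but within a Memory Mode capacity of, say, 512 GB. */
        size_t len = 200ULL * 1024 * 1024 * 1024;

        char *buf = malloc(len);   /* plain malloc, no special API */
        if (buf == NULL) {
            perror("malloc");
            return 1;
        }

        memset(buf, 0, len);   /* touch the pages; hot pages stay cached in DRAM */
        printf("allocated and touched %zu bytes\n", len);

        free(buf);
        return 0;
    }

The point of the sketch is what is absent: no persistent memory API appears anywhere, because in Memory Mode the tiering is handled entirely by the hardware.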

Many database and analytics products, such as SAP HANA, Oracle Exadata, Aerospike, Kx, Redis, and Apache Spark, have now released versions that use the full capabilities persistent memory offers: application-aware placement of data and persistence in memory. A variety of persistent-memory-aware applications, operating systems, and hypervisors are available in the ecosystem and can be deployed through a customer’s preferred server vendor.

A new class of software products is also emerging that removes the need to modify individual applications to gain the full capabilities of persistent memory. Products such as Formulus Black FORSA, MemVerge Memory Machine, and NetApp MAX Data take a genuinely groundbreaking approach to the new tiered-data paradigm, delivering the value of persistent memory while minimizing application-specific enabling.

For those who want full customization, software developers also have the option to use the industry-standard non-volatile memory programming model, with the help of open-source libraries such as the Persistent Memory Development Kit (PMDK).
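As a concrete illustration, here is a minimal sketch using PMDK’s libpmem. It memory-maps a file backed by persistent memory, writes to it with ordinary CPU stores, and then flushes those stores so they are durable. The path /mnt/pmem/example assumes a DAX-mounted filesystem, and the program must be linked with -lpmem:

    #include <libpmem.h>
    #include <stdio.h>
    #include <string.h>

    #define PMEM_LEN 4096

    int main(void)
    {
        char *pmemaddr;
        size_t mapped_len;
        int is_pmem;

        /* Create (or open) a file on persistent memory and map it. */
        pmemaddr = pmem_map_file("/mnt/pmem/example", PMEM_LEN,
                                 PMEM_FILE_CREATE, 0666,
                                 &mapped_len, &is_pmem);
        if (pmemaddr == NULL) {
            perror("pmem_map_file");
            return 1;
        }

        /* Store data with ordinary writes... */
        strcpy(pmemaddr, "hello, persistent memory");

        /* ...then make the stores durable. */
        if (is_pmem)
            pmem_persist(pmemaddr, mapped_len);  /* flush CPU caches */
        else
            pmem_msync(pmemaddr, mapped_len);    /* fall back to msync */

        pmem_unmap(pmemaddr, mapped_len);
        return 0;
    }

Higher-level PMDK libraries such as libpmemobj add transactions and memory allocation on top of this raw load/store model.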

Persistent Memory: The Art of the Possible

We are living in a time unlike any other. Never has it been more important to analyze data in real time. With the help of persistent memory, businesses can now make more strategic decisions, better support remote workforces, and improve end-user experiences.

In the last six months alone, I’ve been impressed by the use cases and innovation I’ve seen our customers implement with persistent memory. Looking to the future, I am excited to watch persistent memory, and the ecosystem that has evolved to support it, continue to bring ideas, dreams, and concepts to life, making the impossible possible.

About the author: Alper Ilkbahar is vice president in the Data Platforms Group and general manager of the Memory and Storage Products Group at Intel Corporation. He oversees the development and enablement of memory and storage solutions, integrating innovative hardware and software for next-generation data centers. A veteran of the semiconductor and storage industries, Ilkbahar rejoined Intel in 2016 from SanDisk Corporation (acquired by Western Digital Technologies Inc.). Most recently at SanDisk, he was vice president and general manager of storage-class memory solutions, a business unit dedicated to non-volatile, high-performance memory technology. He led the group’s commercialization efforts, overseeing engineering, marketing, and ecosystem development.

Related Items:

Intel Turbocharges Spark Workloads with Optane DCPMM

Intel Builds Analytics, Database Use Cases for Optane

Intel Updates Optane, Expands NAND SSD Offerings
