
The Future of Data Center Storage
Is Today: Local and Shared Storage
Benefits Finally Converge
Storage architects can now combine the benefits of local storage (affordable, scalable
and fast) and shared storage (easy management, feature-rich and highly available) with
the performance of NVMe™ SSDs to create architectures whose performance, scale
and cost benefits far outweigh current storage options. The advent of low-latency
SSDs and the means to share them via standard infrastructure, such as NVMe over
Fabrics™ (NVMe-oF™), has created a new standard for data center environments. This
paper gives a brief history of data center storage and shows how NVMe storage is the
next great data center disruptor, bringing benefits that truly converge local and shared
storage.
Server-Local Storage for Local Applications
The idea of server-local storage is simple: A server runs one or more applications that need to
store or retrieve persistent data. Initially, we used HDDs installed directly into the application
servers. How we interfaced with these HDDs varied, but we typically used one of two
common interfaces: ATA or SCSI (both of which could be implemented as either a parallel
or serial physical connection). While these two interfaces were broadly adopted, each was
designed for slow spinning media, which was the pervasive storage technology at the time.
The scalability of these devices was severely limited.
With the HDDs residing inside the application server, only the application(s) running on that
server had access to the data. To manage these HDDs, we had to run an application locally
on each application server for just that purpose, resulting in a complex set of processes and
procedures to ensure our data was healthy. While the concept of sharing storage devices had
been considered, early technology was not up to the task. It was only in the late 1990s that
the ability to centralize and share storage resources became technologically and economically
feasible, letting us aggregate, manage and share storage more easily.
Shared NVMe storage is the next great
data center disruptor.
Centralizing Storage: Storage Area Networks
Brought Easier Management, More Control
The need for more control over data center storage resulted in the emergence of the
storage area network (SAN). The idea was to move HDDs out of individual servers
into a centralized set of storage devices in a single location within the data center. Using
specialized management software, we could create and assign one or more HDDs or logical
unit numbers (LUNs) to individual servers through a network infrastructure similar to the
local area networks (LANs) already connecting servers and client computers together.
Operational Benefits of Centralizing Storage
• Simplified Management: We could more easily create logical volumes for application
server use, assign each volume to one or more servers, configure centralized
high availability mechanisms, improve efficiency through deduplication and thin provisioning, apply advanced security measures to ensure that servers only accessed
the volumes (LUNs) that they had rights to, and expand the SAN by adding additional
storage resources such as disks, disk enclosures, storage controllers or network
interfaces to support more application servers.
• Simplified Monitoring: We could also see and use individual disk information (disk
utilization, power use, error and retry counts, and a litany of other statistics that
improve management and efficiency) and array-level information (parity levels, LUN
health and status, security, and overall SAN performance throughput, IOPS, average
latency and many other measures).
• Better Cost Efficiency: We could amortize storage and management costs over a
large number of application servers.
• Improved Scalability: We could scale the number of storage devices beyond what was
possible within individual servers.
As SAN adoption grew, we looked to new advanced features to help improve data availability
and better protect against data loss, as shown in Table 1.
Replication
Creating duplicate data and moving the duplicate from one location to another to provide an accessible
backup in case of a localized data center or SAN failure

Deduplication/Compression
Reducing the size of data saved to the SAN using advanced mathematical algorithms. This can be done in real
time or as a post-save background process.

Thin Provisioning
Creating logical volumes that are not initially assigned all of the physical space on the disks required by
the volume. This allows administrators to buy less storage up-front, reducing the initial cost of the SAN. As
volumes start to use the space actually allocated, the SAN administrator can add additional storage to the SAN
on an as-needed basis.

Snapshots
Generating point-in-time “images” or “copies” of the volume. This allows the volume to be managed without
bringing it offline. Tasks such as backup and replication can then be performed in the background.

LUN Masking
Managing which servers get to see which storage resources based on each application server providing
validated identification or certificates.

Table 1: Typical SAN Advanced Storage Features
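The economics behind thin provisioning can be sketched with a small calculation. All figures below are hypothetical, chosen only to illustrate how overcommitting logical capacity defers up-front spend:

```python
# Thin-provisioning sketch: logical capacity promised to servers vs. physical
# capacity actually purchased up front (all figures hypothetical).
logical_tb_promised = 100      # sum of all volumes presented to servers
physical_tb_bought  = 20       # disks actually installed on day one
usd_per_tb          = 80

deferred_spend = (logical_tb_promised - physical_tb_bought) * usd_per_tb
overcommit_ratio = logical_tb_promised / physical_tb_bought

print(f"overcommit {overcommit_ratio:.0f}:1, ${deferred_spend:,} deferred")
# → overcommit 5:1, $6,400 deferred
```

As volumes fill, the administrator adds physical disks to keep actual usage below the installed capacity, as the table describes.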
When SANs were first implemented, these additional storage features were not readily
available. Early HDDs were very slow and offered only limited capacity and drive
controller capability. Available processors and SAN network connection bandwidth were
also limited. As these resources improved, vendors added more valuable services. As
HDD capacities increased from megabyte to gigabyte — and now into multiple terabyte
sizes — many of these services (even when running on advanced controllers and network
infrastructures) simply could not keep up. RAID array rebuilds (after a drive failure and
replacement) were taking longer, which resulted in a greater risk of losing another HDD
in the array before the first failure was fully recovered. We needed something other
than spinning disks to help bring SAN services back into a lower-risk environment. That
something was SSDs.
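The rebuild-risk argument above can be made concrete with a back-of-envelope model. The capacities, rebuild rate and MTBF below are assumptions for illustration, and the exponential failure model is a deliberate simplification:

```python
# Hypothetical illustration: rebuild time grows linearly with drive capacity,
# widening the window in which a second drive failure can occur.
import math

def rebuild_hours(capacity_tb: float, rebuild_mb_per_s: float) -> float:
    """Hours needed to rewrite a full drive at a sustained rebuild rate."""
    return capacity_tb * 1e12 / (rebuild_mb_per_s * 1e6) / 3600

def second_failure_probability(remaining_drives: int, window_hours: float,
                               mtbf_hours: float = 1.2e6) -> float:
    """Probability that any remaining drive fails during the rebuild window,
    assuming independent exponential failure times (a simplification)."""
    return 1 - math.exp(-remaining_drives * window_hours / mtbf_hours)

# A 1990s-era 9 GB HDD vs. a modern 8 TB HDD at the same 100 MB/s rebuild rate:
print(rebuild_hours(0.009, 100))   # ~0.025 hours (about 90 seconds)
print(rebuild_hours(8, 100))       # ~22.2 hours
print(second_failure_probability(11, rebuild_hours(8, 100)))
```

The point is the trend, not the exact numbers: a roughly 900x increase in capacity at a fixed rebuild rate means a 900x longer exposure window for a second failure.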
SSDs Brought New Capabilities for
Server-Local Storage
With the introduction of SSDs, we now had a faster storage solution for enterprise data
centers. Initially, SSDs offered low capacity and were very expensive, which resulted in
their limited use in the data center. Over time, we saw SSD capacities increase and prices
decrease. SSDs became more prevalent in solutions that required high performance.
Ultimately, we saw SSDs that were optimized for specific use cases based on an
application’s read/write profile needs. These SSDs were targeted at read-intensive, mixed-use and write-intensive workloads.
Because of the small size and price premium when compared to HDDs, SSDs were initially
deployed as server-local storage (which brings us back to the pre-SAN era of storage
deployment). Because SSDs are small in size, there was little risk of having locked-up
excess capacity, but there was the possibility of locking up performance (IOPS). Having no
moving parts, SSDs have access times that are much lower than HDDs. Soon after their
introduction, we saw a new class of modern applications emerge that used highly
distributed, scalable, shared-nothing architectures. High-IOPS, server-local storage fit
their requirements very well, and these applications consume SSD IOPS voraciously.
As these applications became more critical for business success, we started thinking of
storage value in terms other than dollars per gigabyte, which was the dominant metric
at the time. As these applications became more ubiquitous, metrics like cost per IOPS,
watts per IOPS and IOPS per unit of rack space became more widely adopted.
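These IOPS-centric metrics are simple ratios. The sketch below uses invented prices, capacities and IOPS figures (not vendor data) to show why an SSD that loses badly on dollars per gigabyte can still win decisively on dollars per IOPS:

```python
# Hypothetical illustration of IOPS-centric value metrics. All figures are
# assumptions for illustration, not taken from any data sheet.
def storage_value(price_usd, capacity_gb, iops, watts):
    return {
        "usd_per_gb":      price_usd / capacity_gb,
        "usd_per_iops":    price_usd / iops,
        "watts_per_kiops": watts / (iops / 1000),
    }

hdd = storage_value(price_usd=150, capacity_gb=4000, iops=200,     watts=8)
ssd = storage_value(price_usd=500, capacity_gb=1000, iops=100_000, watts=10)

print(hdd["usd_per_gb"], ssd["usd_per_gb"])       # HDD cheaper per gigabyte
print(hdd["usd_per_iops"], ssd["usd_per_iops"])   # SSD ~150x cheaper per IOPS
```

Which metric matters depends on the workload: capacity-bound archives still favor $/GB, while the shared-nothing applications described above buy IOPS.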
Server-local storage became popular again as the favored method of storage
deployment. Application performance was our ultimate goal, and maximum
performance had always required server-local devices. We now had high-performance
storage devices that were readily accessible and provided immense read/write IOPS
performance with low and consistent latency. However, we were still using legacy SAS
and SATA interfaces to access these devices. While these interfaces allowed for the rapid
introduction and growth of the SSD market, they were still designed for spinning media
and their associated protocols, SCSI and AHCI respectively. These legacy interfaces
and protocols addressed storage in terms of cylinders, sectors and heads, which were
components that SSDs did not have, requiring a complex translation for each and every
IO operation. This created the next roadblock to getting the maximum value from our
new SSDs.
It was only within the last five years or so that this bottleneck was addressed. Since we
were once again using server-local storage, we could use the PCI Express® (PCIe®) bus
interface. In its current implementation, PCIe can provide multiple gigabytes per second
of bandwidth to each device. This was much faster than what SAS (12 gigabits per
second) and SATA (6 gigabits per second) offered. Also, SAS and SATA required an
intervening controller to do all of the translations, as shown in Figure 1, which further
increased complexity and limited capability.
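The bandwidth gap can be estimated from the line rates and encoding overheads of each interface. This is a back-of-envelope sketch of usable (post-encoding) bandwidth, before protocol overhead:

```python
# Back-of-envelope usable bandwidth (bytes/sec) for the interfaces discussed.
# SAS/SATA use 8b/10b encoding (10 line bits per data byte);
# PCIe Gen 3 uses the more efficient 128b/130b encoding.
def serial_8b10b_bytes(line_rate_gbps):
    return line_rate_gbps * 1e9 / 10    # 10 bits on the wire per byte

pcie_gen3_x4 = 8e9 * 4 * (128 / 130) / 8   # 8 GT/s per lane, 4 lanes
sas_12g = serial_8b10b_bytes(12)
sata_6g = serial_8b10b_bytes(6)

print(round(pcie_gen3_x4 / 1e9, 2))   # ~3.94 GB/s for a x4 device
print(sas_12g / 1e9, sata_6g / 1e9)   # 1.2 GB/s and 0.6 GB/s
```

A typical x4 NVMe SSD therefore has roughly 3x the raw bandwidth of 12 Gb/s SAS and over 6x that of 6 Gb/s SATA, and it reaches the CPU without an intervening protocol-translation controller.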
Figure 1 shows some of the complexity of the protocol conversions and burden placed
on the host by SAS and SATA (shown in orange).
PCIe brought us a new interface to help remove the hardware bottleneck, but we still
needed to remove the software bottleneck created by the legacy protocols. What we
needed was a protocol that addressed storage as memory and not as a series of spinning
disks, like a nonvolatile memory access protocol.
Figure 1: Data Throughput Limits
NVMe: A New Horizon
To simplify the inherent complexity and limitations of legacy options like SATA and
SAS and bring the capabilities of flash “closer” to the CPU to improve application
performance, the industry needed a new protocol between nonvolatile media, servers
and applications accessing data. The new protocol had to satisfy several requirements:
• Be designed for nonvolatile media
• Be simpler than previous protocols
• Provide support for parallel access to perform more storage IOs with the same
resources and take advantage of the speed of SSDs
• Minimize additional demands on the server’s CPUs, allowing those processors to be
used by the application rather than managing storage
The key benefits of such a protocol had to include:
• Support greater queue depths for higher IOPS performance and higher throughput
per device.
• Provide lower latency by allowing more IOs to be completed in parallel, reducing wait
times for each IO.
• Improve performance-focused ROI due to increased IOPS per dollar of invested capital.
In March 2011, these needs were addressed by the NVM Express Working Group, which
released the NVM Express® (NVMe) protocol specification. The NVMe protocol dictated
the use of the PCIe interface to ensure that NVMe devices had the maximum bandwidth
available in a server. With the release of the NVMe protocol, SSDs could be simpler,
faster and unencumbered by the legacy burden of SATA or SAS. What made NVMe so
well suited for SSDs was the reduced IO complexity, an increased number of IO queues
available, and an increased number of commands each queue can host before having to
pause the application, as shown in Table 2.
Protocol   Designed For     Messages per I/O   Queues   Commands per Queue
NVMe       Solid state      1                  65,535   65,536
SAS        Spinning media   Multiple           1        254
SATA       Spinning media   Multiple           1        32

Table 2: Storage Protocol Comparison
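The queue figures in Table 2 compound: total outstanding-command capacity is queues multiplied by commands per queue. A quick sketch of that arithmetic (queue limits per the NVMe specification and the SATA NCQ / SAS queue limits):

```python
# Outstanding-command capacity per protocol: queues x commands per queue.
protocols = {
    "NVMe": {"queues": 65_535, "commands_per_queue": 65_536},
    "SAS":  {"queues": 1,      "commands_per_queue": 254},
    "SATA": {"queues": 1,      "commands_per_queue": 32},
}

for name, p in protocols.items():
    total = p["queues"] * p["commands_per_queue"]
    print(f"{name}: {total:,} commands in flight")
# NVMe supports over 4 billion outstanding commands; SATA supports 32.
```

No real SSD sustains billions of concurrent commands, of course; the point is that the protocol ceiling is effectively unreachable, so the application never pauses waiting for a queue slot.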
NVMe-based SSDs have a shorter path between the SSD and the application, which
results in much higher IOPS performance, as shown in Figure 2. With the benefits of
NVMe SSDs, the emerging “shared-nothing” applications got the performance they were
craving, and we saw significant gains in both emerging and distributed shared-nothing
applications. More applications started to rely on the additional capabilities and the very
low latency of NVMe.
Figure 3 compares the average latency of NVMe, SAS and SATA SSDs and a typical
performance-focused (15,000 RPM) HDD. The NVMe SSD shows an average latency of
just 0.03ms (30µs). The SAS SSD is about 3X higher, the SATA SSD is 16X higher and the
HDD is 66X higher.
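The multiples quoted for Figure 3 translate into absolute latencies as follows (the 3X/16X/66X factors are taken from the figure; the derived millisecond values follow from the 0.03 ms baseline):

```python
# Reproducing the average-latency comparison quoted for Figure 3,
# starting from the 0.03 ms (30 µs) NVMe SSD baseline.
nvme_ms = 0.03
latencies = {
    "NVMe SSD":    nvme_ms,        # 0.03 ms baseline
    "SAS SSD":     nvme_ms * 3,    # ~0.09 ms
    "SATA SSD":    nvme_ms * 16,   # ~0.48 ms
    "15K RPM HDD": nvme_ms * 66,   # ~1.98 ms
}
for device, ms in latencies.items():
    print(f"{device}: {ms:.2f} ms ({ms * 1000:.0f} µs)")
```

Even the fastest spinning disk sits two orders of magnitude above NVMe, which is why latency-sensitive applications migrated to NVMe so quickly.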
Figure 2: Shorter Data Path Using NVMe
Figure 3: Average Latency 1
Note: Latency values as per public data sheets for the Micron 9100, S600 and 5100 series SSDs. HDD data was
averaged across multiple data sheets from several vendors/models. Public data sheets may use I/O sizes and
queue depths for stated values; no standard exists.
Testing the NVMe Performance Advantage
The performance advantage of NVMe has been proven in our testing.
For example, we added a single NVMe SSD to each node in an existing HDD-based
Hadoop cluster and found that adding just one NVMe SSD improved the benchmark
results by 36% without adding additional nodes to the cluster. See Micron’s technical
brief, “36% Faster Hadoop – Without Adding Servers: Add One 9100PRO to
HDD-Based Nodes, See 36% Average Faster Run,” for more information.
In another study, we determined that server-local NVMe SSDs could improve
distributed databases like Apache Cassandra™ software (an application traditionally
run on HDD-based nodes). We found that NVMe inside the Cassandra (server) nodes
resulted in amazing performance gains. See Micron’s technical brief, “Micron 9100
MAX PCIe NVMe SSD: Energy Drink for Apache Cassandra,” for more information.
The NVMe Dilemma
While the IOPS performance and low average latency of NVMe are excellent, the fact that
NVMe has to be deployed within each individual server brought back our earlier
management challenges:
• The power of NVMe SSDs remains “bottled up” inside the server. All the benefits of
NVMe are server-local, resulting in potentially wasted available IOPS performance and
low latency.
• We can’t easily manage all NVMe storage from one place, requiring us to go to each
server to monitor and manage that server’s storage.
Plus, NVMe SSDs still carry a price premium (at least in the near-term).
Remember that we introduced the SAN to move server-local spinning media storage into
a centralized solution. We used remote data networks (adapted SCSI protocols and Fibre
Channel or Ethernet as the connecting technology) for management and sharing. We can
now do the same with NVMe.
The Best of Both Worlds: NVMe-oF
Extending the high performance and low latency of NVMe across the data center to
multiple servers and applications with the ability to actively and dynamically scale our
NVMe to meet demands brings us all the benefits of NVMe as well as the best features
of centralized, shared storage without the compromises of legacy SANs, namely lower
performance due to slow protocols and slow network infrastructures.
The NVM Express Working Group introduced its NVMe-oF specification in 2016. The
specification provides a common architecture for transmitting the NVMe protocol across
advanced storage networks for better delivery and performance. InfiniBand, RDMA over
Converged Ethernet (RoCE) and Fibre Channel networks are all good options.
Sharing NVMe offers tremendous benefits:
• Allowing more application servers access to NVMe
• Providing server-local class performance from remotely located and managed devices
• Enabling storage vendors to offer NVMe-native solutions with the benefits of
traditional SAN and without the limitations of legacy SANs
• Enabling NVMe resources to be added to the data center without affecting running
application servers
• Monitoring and managing individual NVMe devices from a central location
• Independently scaling compute and storage resources
Note that NVMe-oF simply provides a protocol for connecting storage initiators to NVMe
storage devices; it does not provide any specification for advanced storage services. By
defining the basic methods to connect initiators and targets, NVMe-oF addresses one of
the biggest problems in implementing centralized NVMe.
By defining only the storage protocol and not any additional layered services, innovators
are free to implement solutions that address their customers’ needs. In particular,
because NVMe is based on the PCIe interface, it provides great deployment flexibility.
Figure 4 shows examples of three of today's deployment options. The first is a storage-centric, SAN-like architecture where application servers host no local NVMe storage and
exclusively use centralized NVMe devices managed by one or more “storage servers.”
In the second compute-centric deployment, application servers share their server-local
storage with other servers in the data center through the high-performance, low-latency
network. In the third deployment, we can deploy both types at the same time, managed
by the same tools. Whether the NVMe devices are server-local NVMe or remote NVMe
devices, all application servers can access NVMe resources from all nodes connected to
the network.
Figure 4: Examples of NVMe-oF Deployment Models
Storage-Centric Deployment
With a storage-centric NVMe deployment, we can now build a cluster of nodes,
each with internal NVMe storage, and share that storage via the high-bandwidth,
low-latency fabric. From this storage pool, we can configure and provision NVMe
storage and enable application servers to use it. To the application servers, these
NVMe volumes appear as local NVMe storage devices. We can do all of the things
we can with server-local NVMe, such as create partitions and combine NVMe
devices as RAID volumes. While it is not obvious, these remote NVMe devices can be
physical or, if the vendor provides the additional functionality, logical NVMe devices.
Logical devices let the storage cluster define devices that span multiple physical NVMe
SSDs, or that use only part of one or more NVMe SSDs, and they allow centralized RAID
or other advanced storage services to be applied.
For example, Figure 5 shows a cluster of storage-centric platforms configured to
share their NVMe devices. Note that these are shared volumes that span NVMe
devices in a single node (shown in green), across multiple nodes (shown in blue), or
both — or they can be allocated using only a small portion of each NVMe device.
The application servers to the right then map the shared NVMe volumes and connect
to them via a high-bandwidth, low-latency fabric. These mapped NVMe volumes
appear as local NVMe SSDs to the application servers.
Compute-Centric Deployment
Figure 6 shows a compute-centric, shareable NVMe solution. This example is similar
to the previous in that the application servers use shared logical NVMe devices.
A compute-centric deployment lets us build our application servers with server-local NVMe storage and still run local applications. We can enable these application
servers to contribute their server-local NVMe devices to the shared pool of NVMe
and allocate logical volumes to other application servers in the cluster or outside the
cluster using the high-bandwidth, low-latency fabric — while still running their own
local application(s).
Figure 5: Storage-Centric Example

Figure 6: Compute-Centric Example
Figure 6 shows a group of application servers with local NVMe on the left with two
examples of shareable logical volumes (one shown in blue and one shown in green). As
with the storage-centric example, the shared logical volumes can span multiple NVMe
devices, multiple nodes or both — or can be allocated using only a small portion of each
NVMe device.
On the right, additional application servers without local NVMe devices can map a
shared volume from the cluster (shown in blue). These volumes would appear as local
NVMe storage devices, and we can configure the storage as we would with server-local
NVMe storage. By sharing their NVMe storage with the servers on the right, we can
independently balance demand and resources across application servers more easily.
Sharing NVMe over high-bandwidth, low-latency fabrics would offer multiple benefits
from both the management and application perspectives:
• Storage Efficiency: By leveraging shared NVMe, we can use an optimized, storage-centric pool, or the excess capacity within application servers (housing local, shareable
storage), to improve NVMe utilization for each server, tuning each pool to the
applications it serves.
• Simplified Administration: Shared NVMe can be easily managed from a single pane
of glass. With modern, high-bandwidth networks we can do this without concern over
performance penalties.
• Scaling: Shared NVMe-oF can enable a building block approach that we didn’t have
with server-local NVMe. We can scale storage independently from our compute
resources: we can simply add nodes to the NVMe-oF cluster when additional storage
resources are required, or add application servers as needed to meet the workload
demand.
• Performance: Traditional shared and local storage has met growing application
demands through a performance evolution culminating in NVMe. Now that
we can easily share NVMe, we can broaden its use and bring its benefit to more
applications, more workloads and more users.
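The independent-scaling benefit can be quantified with a simple sizing sketch. The per-node capacity and core counts below are assumptions for illustration only:

```python
# Hypothetical sizing sketch: coupled (shared-nothing) nodes vs. independently
# scaled compute and storage. Per-node figures are assumptions, not real specs.
import math

def coupled_nodes(storage_tb, cores, tb_per_node=8, cores_per_node=32):
    # Shared-nothing: every node carries both resources, so whichever
    # requirement is larger dictates the node count and the other resource
    # is overprovisioned.
    return max(math.ceil(storage_tb / tb_per_node),
               math.ceil(cores / cores_per_node))

def disaggregated_nodes(storage_tb, cores, tb_per_node=8, cores_per_node=32):
    return (math.ceil(storage_tb / tb_per_node),   # storage nodes
            math.ceil(cores / cores_per_node))     # compute nodes

# A storage-heavy workload: 400 TB of data but only 64 cores of compute.
print(coupled_nodes(400, 64))        # 50 nodes, compute massively overbought
print(disaggregated_nodes(400, 64))  # (50 storage nodes, 2 compute nodes)
```

In the coupled design, 50 full servers are purchased to satisfy a 2-server compute need; disaggregation buys each resource only where it is required.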
Ideal Applications for Shared NVMe
Sharing NVMe takes these benefits even further. The following are examples of
applications that can take full advantage of NVMe today, as well as shared NVMe as it
becomes more readily available.
Transaction Processing
One of the most demanding workloads in the enterprise is transaction processing. It thrives on
low-latency storage for fast, consistent responses. Shared NVMe will bring the performance of
NVMe to more servers more easily. By creating volumes that span NVMe nodes, each
server’s storage can be optimized with use-tuned data volumes.
Server Virtualization
A common concern with growing virtualization is the underlying storage system’s ability to manage
small, highly random IOs while maintaining very fast response times. SSDs in general — and NVMe
SSDs in particular — manage highly random IO traffic very easily. As NVMe brings storage closer
to the host processor, latency can improve as well.
Real-Time Analytics
Real-time analytic solutions typically rely on server-local storage in a shared-nothing architecture.
This requires additional compute servers to add storage, and vice versa. Using a shared NVMe-oF
platform lets us manage and process even larger data sets because we can easily scale our NVMe
storage independently of our compute resources.
Virtualized Desktop
Infrastructure (VDI)
VDI deployments rely on flexibility and the ability to grow IOPS performance in near real-time to help
manage boot storms and unforecasted demand. Shareable NVMe lets VDI administrators do exactly
that, easily and quickly increasing storage capacity through centralized management to meet these
demands without affecting the VDI host infrastructure.
Table 3: Ideal Applications for Shared NVMe
Data Center Evolution With Shared NVMe
As applications and expectations evolve, the data centers supporting them must also
evolve. We tend to design, deploy and operate differently depending on the application
and its role.
Emerging systems focus on different metrics for success — emphasizing speed,
agility and adaptability. Legacy systems tend to focus more on stability, reliability and
availability (a more conservative approach for these “line of business” applications).
Creating a data center that can support both emerging and legacy solutions (what
Gartner calls the “bi-modal data center”) has been difficult. Transitioning an existing
legacy-focused data center to one that can support both solution types has been even
more challenging due to entrenched architectures — many of which do not support
emerging applications. When we look at the requirements of each type of system, it
becomes clear that they are divergent.
Emerging Systems
Emerging systems such as big data and real-time analytics, as well as NoSQL and object
database solutions, rely on easy, scale-out, data-aware infrastructures. These systems
and applications demand performance, and they become more robust through replicated
data and a shared-nothing infrastructure using local storage. These systems support
the ability to add application server nodes to scale performance and/or capacity,
effectively building a modular, plug-n-play component strategy at the server level.
Shared-nothing, building block designs can create challenges for data center budgets
and ROI:
• As we scale our infrastructure, we also scale the number of systems we have to
manage.
• Increasing system count brings additional support costs, such as having spares
stocked and managing patches, as well as update validation complexity.
• We must add both compute and storage resources to the solution regardless of
whether we need both.
Shared NVMe alleviates many of these challenges. NVMe itself provides the fastest
storage solution available for these emerging applications. And shared NVMe brings that
advantage to more systems and more applications. It also lets us scale storage and compute
infrastructures independently, tuning each to match current and future demands. And we can
better control support costs with independent scaling.
Legacy Systems
Legacy systems are more focused on stability, reliability and availability. They can rely
on storage that may or may not be local to the server. Many legacy systems now rely
on remote storage resources provided by SANs. For these systems, the infrastructure
must provide most — if not all — of the data protection services. These systems can also
benefit from shared NVMe because it can provide the advanced data protection services
and scale expected by this class of applications. This provides an outstanding legacy
application storage platform as well.
What is even more advantageous is that shared NVMe has the potential to provide the
infrastructure for both emerging and legacy applications in a single storage solution.
Shared NVMe lets us deploy either storage-centric clusters or a compute-centric
deployment or both simultaneously as a compute and storage-capable infrastructure
within the same data center. Even when deployed at scale and in mixed environments,
we can centrally manage all our shared NVMe giving us complete visibility, control
and management regardless of where that shared NVMe is actually located within the
data center. We get the best of both: the management, uptime and services our legacy
applications require with the easy scale and performance that our emerging applications
demand.
Micron's Answer to Shared NVMe
Micron is innovating ahead of the NVMe-oF specification and providing backward
compatibility with our SolidScale platform.
Because we need the benefits of shared NVMe to gain better insight into our data now,
we are innovating ahead of the NVMe-oF specification with backward compatibility.
Micron has developed a next-generation intelligent infrastructure platform designed to
provide applications with the performance they demand from local storage and all of the
flexibility, manageability and scalability typically experienced with traditional SAN-based
solutions. The Micron® SolidScale™ platform uses a low-latency, high-bandwidth RDMA
over Converged Ethernet (RoCE) fabric infrastructure to connect compute and storage
together in a flexible way that will fit almost any application architecture requirements.
Learn more at
We’ve seen real change in how we store our data. More than 20 years ago, we started
storing server data on spinning-disk HDDs inside individual application servers. We
added some advanced data services such as RAID, but additional data protection
services (like replication) were difficult to implement.
User demand for easier management and better data protection gave rise to the SAN.
With SANs, we centralized our storage so we could manage it more easily, allowing us to see
disk, LUN and array level use as well as storage health information. We allocated this
centralized storage into logical volumes assigned to servers. We then shared the HDD
capacity with more servers, allowing us to amortize the cost of the storage. As SAN
adoption grew, and processing power increased, users demanded (and vendors added)
new advanced data management features such as data replication, deduplication and
thin provisioning.
SSDs brought unparalleled IOPS performance and low, consistent latency to the data
center. But these SSDs still used readily available legacy protocols. That availability
drove SSD adoption, but performance bottlenecks prevented us from taking full
advantage of this new technology.
NVMe was introduced as a new interface optimized for flash. With a very simple
command set and thousands of command queues — each of which can support
thousands of commands — the performance difference when compared to legacy
protocols had us going backwards: We moved storage back into the servers. New
applications emerged, designed to take advantage of this ultra-fast, low-latency, server-local storage, but we had a new generation of performance that was again locked up
inside the server.
With the introduction of the NVMe-oF specification, we can now share NVMe storage
across high-bandwidth, low-latency fabrics. It brings the best of local and shared
storage to our data centers. We can also support a broader range of applications, from
traditional to emerging, and we can tailor that storage to meet application-specific
requirements. We can use server-local NVMe and share that NVMe with other servers,
we can build centralized NVMe storage systems that act like a high-performance version
of a traditional SAN, or we can support both deployment models in the same data center
solution that can support legacy and emerging application needs for a truly bi-modal
data center. Micron’s answer to shared NVMe comes in the form of our new SolidScale
platform architecture, unlocking the potential of shared NVMe sooner than later.
About the Authors
We are data center storage practitioners working at Micron. We have experienced multiple architecture
generations and are looking to next-generation architectures to improve our own internal infrastructure as
well as to further our customers’ innovation.
Micron’s new SolidScale platform architecture unlocks the
potential of shared NVMe.
Products are warranted only to meet Micron’s production data sheet specifications. Products, programs and specifications are subject to change without
notice. Dates are estimates only. ©2017 Micron Technology, Inc. All rights reserved. All information herein is provided on an “AS IS” basis without
warranties of any kind. Micron, the Micron logo, SolidScale, and all other Micron trademarks are the property of Micron Technology, Inc. All other
trademarks are property of their respective owners. Rev. A 05/17 CCMMD-676576390-10719