Flash Memory Arrays in
Enterprise Applications
Ken Ow-Wing, Senior Product Line Manager
Violin Memory, Inc.
685 Clyde Ave, Mountain View, CA 94043
Office: 650-396-1603 Mobile: 415-608-7773
Santa Clara, CA
August 2011
Agenda
• Enterprise Customer Requirements
• New Product Category
• Enterprise Use Cases
• Business Benefits
• Appendix
  • Economics
  • Array Characteristics
Enterprise Environments: Requirements
• Flash performance
• Consistent low response time
• Reliability
• Availability
• Serviceability
• Scalability
• Manageability
• Resource utilization
Evolution of the Use of Flash
• 1st Generation: workstation/gaming; memory extension/cache; limitations for high-end data center usage
• 2nd Generation: direct drive replacement; cost sensitive; limitations for high-end data center usage
• 3rd Generation: Flash Memory Array, a purpose-built enterprise solution: networked/shared storage; sustained R/W throughput; 7x24x365 operation
Flash Memory Arrays
• Flash chips: 4 GB each (10,400 chips per system)
• Flash packages: 32 GB each (1,344 packages)
• VIMMs: 512 GB each (84 memory modules)
• Flash vRAID groups: 2,560 GB each (16 groups)
• Capacity flash systems: 40 TB in 3U
Data center packaging reduces capital cost, space, power, and operations costs.
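The capacity roll-up above can be sanity-checked with a few lines of arithmetic. The per-level multipliers (8 chips per package, 16 packages per VIMM, 5 VIMMs per vRAID group) are inferred from the slide's capacity figures rather than stated directly, so treat this as an illustrative sketch:

```python
# Illustrative capacity roll-up for the packaging hierarchy above.
# Per-level capacities come from the slide; the multipliers are inferred.
GB = 1

chip = 4 * GB             # single flash chip
package = 32 * GB         # implies 8 chips per package
vimm = 512 * GB           # implies 16 packages per VIMM
raid_group = 2560 * GB    # implies 5 VIMMs per vRAID group (4 data + 1 parity)
system = 16 * raid_group  # 16 groups per 3U array

print(package // chip)     # chips per package
print(vimm // package)     # packages per VIMM
print(raid_group // vimm)  # VIMMs per vRAID group
print(system / 1024)       # array capacity in TB -> 40.0
```

Note that 16 vRAID groups of 2,560 GB give the stated 40 TB, while the full complement of 84 VIMMs holds slightly more raw capacity, consistent with spare and parity modules.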
Infrastructure consolidation with the Flash Memory Array.
Flash Memory Storage – 2PB
Silicon Virtualized Data Center
Flash Memory Summit, August 2011, Santa Clara, CA
Flash Memory Arrays
• Available by the rack
• Available as shelves
• Fits in virtualized environments
Database Appliance – 20,000 Users
High-performance database solution for OLTP

Architecture view:
• 512 GB memory
• 15 TB max DB size
• 100M OLTP transactions/hr
• 2 x 10 GbE switches
• Production server and App/Test/Dev server
• Production database in the Flash Memory Array
• Software: production database, storage management
• Fits in OEM systems
What’s Different about Flash Memory Arrays?
Compared to SSDs:
• No support for rotating media → optimum performance with flash
• Distributed garbage collection → sustained writes, no “write cliff”
• Purpose-built “vRAID” for flash → sustained writes, no read/modify/write
• vRAID not blocked by erasures → significant latency reduction
• vRAID protects flash devices → no replacement on flash failure
• Flash packaging → density > 10 TB per RU
Flash Memory Arrays are different from SSDs and flash cards.
Hardware Flash RAID
First purpose-built RAID for Flash Memory Arrays

[Diagram: external hosts or a SAN connect through hardware RAID controller(s) to two vRAID groups, each of five VIMMs* (four data members A1-A4 or B1-B4 plus a parity member Ap or Bp), with a spare VIMM; every VIMM pairs its flash with its own RAID controller (RC) and ECC.]

Failure handling result:
• Data rebuilt on the same VIMM
• VIMM stays in service
• No data loss
• Increases MTBF 4x

Details (example):
• A flash chip fails (red)
• vRAID rebuilds its data on the same VIMM (blue)
• Garbage collection avoided, performance maintained
• Rebuilt data placed on extra NAND
• Hardware RAID in the controller

* VIMM: Violin Intelligent Memory Module
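The rebuild step can be illustrated with plain XOR parity. Violin's actual vRAID implementation is proprietary and not described on the slide; this is only a generic sketch of how a lost member of a 4+1 parity group is reconstructed from the survivors:

```python
# Generic XOR-parity sketch of the rebuild step described above.
# This is NOT Violin's vRAID algorithm, just the textbook idea that
# any one lost member of a parity group is the XOR of the others.
from functools import reduce

def parity(blocks):
    """XOR equal-length blocks column-wise to form a parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]  # four data members
p = parity(data)                             # parity member

# Simulate losing member 2 and rebuilding it from survivors + parity.
survivors = data[:2] + data[3:]
rebuilt = parity(survivors + [p])
assert rebuilt == data[2]
print(rebuilt)  # b'CCCC'
```

Because the rebuilt data lands on spare NAND within the same VIMM, the module stays in service instead of being replaced.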
What’s Different about Flash Memory Arrays?
Compared to PCIe cards with flash:
• No support for rotating media → optimum performance with flash
• Distributed garbage collection → sustained writes, no “write cliff”
• Purpose-built “vRAID” for flash → sustained writes, no read/modify/write
• vRAID not blocked by erasures → significant latency reduction
• vRAID protects flash devices → no replacement on flash failure
• Hot-swappable components → no outage or data loss
• Shareability → max utilization by many servers
• Scalability → large datasets with simplicity
• Flash packaging → density > 10 TB per RU
Flash Memory Arrays are different from SSDs and flash cards.
The Infamous SSD “Write Cliff”
The elephant in the room everyone tries to ignore

[Chart: an empty SSD starts at its datasheet performance numbers, then write performance falls off a cliff to its real sustained level as the device fills. Source: SC 2010]
Violin – Sustained Performance

[Chart: sustained performance of 220,000+ IOPS, the Violin datasheet number.]
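The contrast between the two charts comes down to garbage collection: once a drive's pool of pre-erased blocks runs out, new writes stall behind erase operations. A toy model of that cliff (every number here is illustrative, not taken from either chart):

```python
# Toy model of the SSD "write cliff": while pre-erased blocks remain,
# writes run at full speed; afterwards every write waits on an erase.
# All figures are hypothetical, chosen only to show the shape.
def write_throughput(writes_done, free_blocks=1000,
                     fast_iops=200_000, erase_limited_iops=20_000):
    """Sustainable write IOPS after a given number of block writes."""
    return fast_iops if writes_done < free_blocks else erase_limited_iops

print(write_throughput(100))    # fresh device: 200000
print(write_throughput(5000))   # after the cliff: 20000
```

Distributed garbage collection in the array keeps reclaiming blocks in the background, which is why the second chart stays flat instead of falling off this cliff.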
Enterprise Use Cases
Tiered Storage 2.0

[Chart: capacity per rack (1 TB to 1 PB) versus response time (ns to 20 ms). Tiers shown, fastest to slowest: processor cache and multi-core CPU (ns), DRAM (about 1 µs), NVRAM storage cache, SLC flash arrays (about 150-400 µs), SSDs emulating HDDs (about 500 µs), capacity flash arrays (HDD-like density and cost, up to about 400 TB per rack), 15K disk arrays (about 2 ms), and SATA arrays (about 8-20 ms, about 1 PB per rack). Flash arrays combine DRAM-like performance with persistent block storage, serving application acceleration (databases, caches, logs) and infrastructure consolidation (file storage).]
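The gap between the tiers charted above is easiest to see as ratios against the flash tier. The latencies below are approximate order-of-magnitude values; the exact pairings on the original chart are not fully recoverable, so treat them as illustrative:

```python
# Rough latency ratios between the storage tiers charted above.
# Values are approximate and illustrative, not measured figures.
latency_us = {
    "SATA disk array": 8000,
    "15K disk array": 2000,
    "SSDs emulating HDDs": 500,
    "SLC flash array": 150,
    "DRAM": 1,
}
flash = latency_us["SLC flash array"]
for tier, us in sorted(latency_us.items(), key=lambda kv: -kv[1]):
    print(f"{tier}: {us / flash:.1f}x SLC-flash-array latency")
```

Even with generous assumptions, disk tiers sit one to two orders of magnitude above the flash tier, which is the argument for moving latency-sensitive block storage onto flash arrays.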
Transaction Processing
Co-exist with Legacy HDD Systems

[Diagram: hundreds of servers in front of tiered storage: 146 GB and 600 GB 15K disk, 2 TB SATA disk, and a tape archive.]
Transaction Processing
Move high-performance transactions to Flash Memory Arrays

• OLTP: moves from short-stroked 146-600 GB 15K FC disk to Flash Memory Arrays, gaining high IOPS, low latency, greater server utilization, and more IOPS per square foot
• DW/ODS: stays on 400-600 GB FC disk, now able to fully utilize disk capacity
• Nearline: 2-4 TB SATA/SAS disk
• Archive: 60 GB tape
Multi-Tenancy
Max availability, isolation, utilization

• Each customer gets their own partition
• Big host: combine containers for max HA and I/O, or use container-level isolation
• Little hosts: 2 partitions are HA with 2 PCIe connections each
OLTP, DW, ODS
Net benefit: analytics for big data

[Diagram: a big host runs OLTP and the Data Warehouse (DW); little hosts run data marts, the Operational Data Store (ODS), and analytics.]
Extending the Use of Flash

Facilitates movement to high-end commercial data center usage: the next evolutionary step beyond the capabilities of SSDs and flash PCIe boards, extending flash beyond its current performance and latency benefits.

Enablers:
• Scalability
• Share-ability
• Manageability
• I/O
• Sustained writes
• Hot swap
• HA
• RAID
• Fail-in-place
• Remote mgmt.
• Partitions
Manageability

• SNMP interface for system and network mgmt (e.g., HP NNM and IBM Tivoli tools)
• Array mgmt: single web GUI and CLI, XML API and SNMP, email alerts, single multi-PB image
• REST API for remote admin: interface to proprietary provisioning systems; XML interface to management systems
• Wear mgmt: 5-year MLC lifetime under standard maintenance agreement
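As a sketch of what consuming the XML side of that interface might look like: the element names and document structure below are hypothetical, since the slide states only that XML and SNMP interfaces exist, not their schema.

```python
# Parsing a hypothetical XML status document from the array's
# management API. Element and attribute names are invented for
# illustration; only the existence of an XML interface is sourced.
import xml.etree.ElementTree as ET

sample = """<array>
  <name>violin-01</name>
  <capacity unit="TB">40</capacity>
  <alerts>
    <alert severity="warning">VIMM 12 wear at 80%</alert>
  </alerts>
</array>"""

root = ET.fromstring(sample)
print(root.findtext("name"))                       # violin-01
for alert in root.iter("alert"):
    print(alert.get("severity"), "-", alert.text)  # warning - VIMM 12 wear at 80%
```

In practice such a feed would be polled by the provisioning system or mapped onto SNMP traps for tools like HP NNM or IBM Tivoli.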
Business Benefits
Application Acceleration w/HP
OLTP results, November 2010

• Total system cost: $2,126,304 ($900,000 of which is Oracle software)
• Transactions/min: 3,388,535
• Price/performance: $0.63 per transaction per minute
• Processors/cores: 8/64, Intel Xeon 2.26 GHz (HP ProLiant DL980 G7)
• Database manager: Oracle Database 11g Rel 2 Enterprise + TUXEDO 11gR1
• Operating system: Oracle Linux Basic
• Flash Memory Array + HP ProLiant DL980 G7 = 70% reductions in cost, rack space, power, and response time
• Open architecture, scales linearly; database options: Oracle 8/9/10/11/RAC, MS SQL Server, Sybase, and others
• $0.63 with Flash RAID vs. $2.40 (Oracle Exadata 2) or $1.01 without a RAID Flash Memory Array (Oracle SuperCluster 2011)
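The price/performance figure above is simply the total system cost divided by the sustained transaction rate:

```python
# Price/performance arithmetic from the OLTP result above.
total_cost = 2_126_304  # USD, total system cost
tpm = 3_388_535         # transactions per minute

price_perf = total_cost / tpm
print(round(price_perf, 2))  # 0.63 USD per transaction per minute
```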
Key Business Benefits

• Application acceleration
  • Meet and exceed SLAs
  • Simpler system architectures
  • Deploy new apps faster
  • Reduce tuning costs
• Infrastructure consolidation: reduce CapEx and OpEx
  • Fewer spindles, licenses, and servers
  • Less power, space, and service
  • Leverage existing infrastructure
  • Enable virtualization
• Lower $ per application

[Diagram: before the Flash Memory Array, the server farm on the SAN/LAN runs at low CPU utilization; after, the same server farm runs at high CPU utilization against the Flash Memory Array.]
Data Center Transformation

“The transition from spinning to solid-state storage is already underway.” - Steve O’Donnell, ESG

• Resource utilization
• OpEx reduction
• Reliability
• Availability
• Serviceability
• Power
• Space
• Cooling
Key Takeaways

• Flash Memory Arrays:
  • Suitable for high-end enterprise applications
  • Meet enterprise application requirements**

**Summary of requirements: flash performance, consistent low response time, reliability, availability, serviceability, scalability, manageability, resource utilization
Appendix
Flash Memory Array Characteristics (8-rack configuration)

• Scalability*: 2+ PB → large active data sets
• IOPS**: 64,000,000 → migrate from short-stroked 15K FC HDD
• Bandwidth**: 400 GB/sec read, 256 GB/sec write → excellent ingest and data distribution
• Latency: 25 µs write, 75 µs read → max server utilization
• Availability: HA and RAID → high-end applications
• Manageability: XML/SNMP interfaces → high-end applications
• Protocols: FC, iSCSI, IB (Q3), NFS → multiple environments
• I/O: (512) 8 Gbit FC ports, or (512) 10 GbE ports, or (64) 40 Gbit/sec IB ports (Q3) → max resource utilization

* Raw  ** Theoretical
Compelling Economics

Performance per rack (IOPS, latency):
• Flash Memory Arrays: 2,000,000* IOPS, 200 µsec
• Conventional HDD arrays: 24,000 IOPS, 5,000 µsec
• HDD/SSD combination: 40,000 IOPS, 2,000 µsec

Cost per application, $/IOPS (4K):
• Flash Memory Arrays: $1.00
• SATA/SAS: $17.00
• FC: $20.00

Cost per GB, $/GB with RAID:
• Flash Memory Arrays: $22.00
• RAID-1 SSDs in array: $100 - $200
• PCIe flash in mirrored systems: $60.00

* Based on one rack with 8 memory arrays
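The cost claims above reduce to simple ratios against the flash-array baseline. The RAID-1 SSD figure below uses the midpoint of the slide's $100 - $200 range, which is an assumption:

```python
# Cost ratios from the economics tables above. The RAID-1 SSD $/GB
# uses the midpoint of the quoted $100-$200 range (an assumption).
cost_per_iops = {"Flash Memory Array": 1.00, "SATA/SAS": 17.00, "FC": 20.00}
cost_per_gb = {"Flash Memory Array": 22.00,
               "RAID-1 SSDs in array": 150.00,
               "Mirrored PCIe flash": 60.00}

iops_base = cost_per_iops["Flash Memory Array"]
for name, usd in cost_per_iops.items():
    print(f"{name}: {usd / iops_base:.0f}x baseline $/IOPS")
```

By these figures, disk-based tiers cost 17-20x more per IOPS, while the array's $/GB sits well below mirrored SSD or PCIe flash alternatives.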
Flagship Customer
600+ terabytes and counting

• Problem: Oracle Ad Server reporting met its 8-hour SLA only twice in 6 months
• Goal: consistent, sustainable I/O performance to meet the SLA under EMC’s enterprise storage management tools
• Result: on Violin arrays, without any tuning, the SLA hasn’t been missed
• AOL is now able to further enhance their ad campaign reporting
  • Reinforcing what works, pruning what doesn’t
  • Potential for positive revenue impact going forward
• AOL was one of EMC’s VPLEX key launch customers
  • Global production prior to official launch by EMC
  • A significant amount of the VPLEX support matrix was validated at AOL
  • The Violin 3200 Memory Array is certified under EMC VPLEX
• A winning combination of consistent, sustainable performance under a world-class enterprise management system
• VPLEX certification enables Violin’s products to be used seamlessly in EMC environments
HP & Microsoft - Best of Breed
TPC-E blade server world record, June 2010

• First use of non-HP storage in an HP TPC benchmark
• Flash Memory Arrays operating at only 35% utilization
• Other HP benchmarks due shortly

The TPC-E benchmark simulates the OLTP workload of a brokerage firm. The focus of the benchmark is the central database that executes transactions related to the firm’s customer accounts. Although the underlying business model of TPC-E is a brokerage firm, the database schema, data population, transactions, and implementation rules have been designed to be broadly representative of modern OLTP systems.
Thank You