White Paper
Fujitsu PRIMERGY Servers
Solution-specific Evaluation of SSD Write Endurance

This paper describes how to determine an appropriate SSD quality class based upon the current disk drive workload.

Version 1.1a
2014-12-18
http://ts.fujitsu.com/primergy
Contents

Document history
Overview
How to Measure the Disk's Write Load
  Windows Measurements
  Linux Measurements
  ESX Server Measurements
  Normalization
Mapping to the Target Architecture
  Non-RAID
  RAID
  RAID 0 (striping)
  RAID 1 (mirroring)
  RAID 5 (block-level striping with distributed parity)
  RAID 6 (block-level striping with 2 distributed parities)
  RAID 10 (mirrored striping array)
  RAID 1E (striped mirroring)
  RAID 50 (striped array striped across RAID 5)
  RAID 60 (striped RAID 6 sets)
Estimate the Required Write Endurance
  Cumulate Logical Drives' Contributions
  Convert into Device Writes per Day
Literature
Contact
Document history

Version 1.0   Initial version
Version 1.0a  Minor corrections
Version 1.1   Endurance class tables updated
              Table of SSD models updated
Version 1.1a  Windows measurement tool fsutil.exe (unexpected values) replaced with WMI class Win32_PerfRawData_PerfDisk_PhysicalDisk
Overview
Solid State Drives (SSDs) are becoming more and more attractive for use in servers due to their high (random) I/O performance, low power consumption, zero noise emission, mechanical robustness, and enterprise-grade reliability. At the same time, their prices continue to drop.
In contrast to traditional hard disk drives with rotating magnetic media, which offer a theoretically unlimited number of write cycles, an SSD will "wear out", as it is based on flash memory devices that support only a limited number of program/erase (PE) cycles. The number of PE cycles specified by the NAND vendor depends on parameters such as the silicon manufacturing process, the NAND lithography, and the number of bits stored in a single flash cell.
The most important feature for enhancing an SSD's endurance is a firmware feature called Dynamic Wear Leveling, which ensures that the aging of the individual NAND flash chips in the array is balanced across the total capacity of the SSD.
A very important criterion when selecting the proper SSD technology for a given application is the Write Endurance, which is typically specified in units of device writes per day (DWPD) over a given warranty period.
To determine the right SSD technology, it is most important to match the claimed endurance of an SSD device against the application workload.
The following table shows typical DWPD values for the Endurance Classes defined by Fujitsu Technology Solutions (FTS):

Endurance Class        DWPD                                          Application
High Endurance         > approx. 15, often approx. 50                highest workloads with high write loads
Mainstream Endurance   approx. 5 – approx. 15, usually approx. 10    OLTP-type I/O loads / mixed workloads
Value Endurance        < approx. 5, usually < 0.3                    read-intensive / booting apps
This paper describes means to determine the current disk write load, which may be used to assess the SSD's required Write Endurance.
How to Measure the Disk’s Write Load
Most operating systems already provide performance counters that can be used to determine the number of write accesses and the amount of data written to a disk volume. This chapter describes means to determine the write load of logical drives.
Note: As solid state media has significantly different access speeds, replacing rotating media with solid state media will invariably change the distribution of load within the system, so a simple projection of the measured values will not yield accurate results. The proposed methods can therefore only give rough estimates of the current workload.
Windows Measurements
Use the Windows Management Instrumentation (WMI) class Win32_PerfRawData_PerfDisk_PhysicalDisk
to monitor disk write counters.
The Win32_PerfRawData_PerfDisk_PhysicalDisk class contains, among others, these members:

Field Name                  Type     Description
AvgDiskBytesPerRead         uint64   Average number of bytes transferred from the disk during read operations.
AvgDiskBytesPerRead_Base    uint32   Base value for AvgDiskBytesPerRead. This value represents the accumulated number of operations that occurred.
…                           …        …
AvgDiskBytesPerWrite        uint64   Average number of bytes transferred to the disk during write operations.
AvgDiskBytesPerWrite_Base   uint32   Base value for AvgDiskBytesPerWrite. This value represents the accumulated number of operations that occurred.
…                           …        …

Note that the PerfRawData class returns unformatted counters: here AvgDiskBytesPerWrite holds the accumulated number of bytes written to the disk (the per-operation averaging is only applied by the formatted counter classes), which makes it suitable for measuring the total write volume.
Example: PowerShell script to measure the bytes written ("WrittenBytes") to a physical drive

function GetBytesWritten ($DriveLetter) {
    # In the raw performance data, AvgDiskBytesPerWrite holds the accumulated
    # number of bytes written since the counter was last reset.
    $BytesWritten = 0
    $WMIObjects = Get-WmiObject -Namespace root\cimv2 `
        -Class Win32_PerfRawData_PerfDisk_PhysicalDisk -ComputerName .
    foreach ($obj in $WMIObjects) {
        if ($obj.Name -match $DriveLetter) {   # instance names look like "1 D:"
            $BytesWritten = $obj.AvgDiskBytesPerWrite
            break
        }
    }
    $BytesWritten
}

$StartMeasure = GetBytesWritten("D:")
Start-Sleep -Seconds 86400    # measure one day / 24 hours
$EndMeasure = GetBytesWritten("D:")
Write-Host "WrittenBytes:" ($EndMeasure - $StartMeasure)
Note that these counters are reset whenever the volume is added to the system, e.g. at boot time.
Linux Measurements
Reading the /sys/block/<dev>/stat pseudo-file returns a single line with 11 values:

Field Name      Unit          Description
read I/Os       requests      number of read I/Os processed
read merges     requests      number of read I/Os merged with in-queue I/O
read sectors    sectors       number of sectors read
read ticks      milliseconds  total wait time for read requests
write I/Os      requests      number of write I/Os processed
write merges    requests      number of write I/Os merged with in-queue I/O
write sectors   sectors       number of sectors written
write ticks     milliseconds  total wait time for write requests
in_flight       requests      number of I/Os currently in flight
io_ticks        milliseconds  total time this block device has been active
time_in_queue   milliseconds  total wait time for all requests

The 7th value (the write sectors field) is of relevance here. Multiply this value by the sector size of the disk (obtained by reading /sys/block/<dev>/queue/hw_sector_size) to determine the number of bytes written:

WrittenBytes = write sectors × hw_sector_size
Note that these counters are reset whenever the device is added to the system, e.g. at boot time.
Example:
The command cat /sys/block/sdb/stat returns

> cat /sys/block/sdb/stat
9217626 172227 1456405218 18255726 8772930 263808 66312632 8011394 0 47653606 29393723

The 7th field (66312632) gives the number of sectors written:

write sectors = 66312632

Get the corresponding hw_sector_size:

> cat /sys/block/sdb/queue/hw_sector_size
512

The hw_sector_size is

hw_sector_size = 512

The number of written bytes is thus

WrittenBytes = 66312632 × 512 = 33,952,067,584
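The two readings and the multiplication can also be scripted. The following Python sketch is illustrative only (it is not part of the original tooling; the device name sdb and the one-day sampling period are assumptions):

import time

# Field 7 (index 6) of /sys/block/<dev>/stat is "write sectors".
def sectors_written(dev):
    with open("/sys/block/%s/stat" % dev) as f:
        return int(f.read().split()[6])

def sector_size(dev):
    with open("/sys/block/%s/queue/hw_sector_size" % dev) as f:
        return int(f.read())

dev = "sdb"              # assumed device name
count1 = sectors_written(dev)
time.sleep(86400)        # measure one day / 24 hours
count2 = sectors_written(dev)
print("WrittenBytes:", (count2 - count1) * sector_size(dev))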
ESX Server Measurements
Use the command line tool esxcli.
The command esxcli storage core device stats get -d <device name> returns 7 fields:

Field Name            Description
Device                device name
Successful Commands   number of successful commands
Blocks Read           number of blocks read
Blocks Written        number of blocks written
Read Operations       number of read operations
Write Operations      number of write operations
Reserve Operations    number of reserve operations

The field Blocks Written contains the number of blocks written:

WrittenBytes = BlocksWritten × <blocksize>

With the standard block size of 512 bytes:

WrittenBytes = BlocksWritten × 512

Note that these counters are reset whenever the device is added to the system, e.g. at boot time.
The device name can be obtained with esxcli storage core path list. This command returns a list of all disk devices.
Example:
The command esxcli storage core path list returns a list of all disk devices:

> esxcli storage core path list
...
hostname.vmhba1-hostname.2:0-naa.6003005700ebd3f016bc34ab0d2446b7
   UID: unknown.vmhba1-unknown.2:0-naa.6003005700ebd3f016bc34ab0d2446b7
   Runtime Name: vmhba1:C2:T0:L0
   Device: naa.6003005700ebd3f016bc34ab0d2446b7
   Device Display Name: Local LSI Disk (naa.6003005700ebd3f016bc34ab0d2446b7)
...

Choose the appropriate disk device and note the device name (e.g. naa.6003005700ebd3f016bc34ab0d2446b7). Then esxcli storage core device stats get -d naa.6003005700ebd3f016bc34ab0d2446b7 returns:

> esxcli storage core device stats get -d naa.6003005700ebd3f016bc34ab0d2446b7
naa.6003005700ebd3f016bc34ab0d2446b7
   Device: naa.6003005700ebd3f016bc34ab0d2446b7
   Successful Commands: 3921233
   Blocks Read: 438196025
   Blocks Written: 47685869
   Read Operations: 3485300
   Write Operations: 402863
   Reserve Operations: 7740
...
The field Blocks Written contains the number of blocks written since the last reboot. Multiply this value by the standard block size of 512 bytes to get the number of bytes written:

WrittenBytes = 47685869 × 512 = 24,415,164,928
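Where scripting is available, the counter can also be extracted programmatically. The following Python sketch is illustrative only (it shells out to esxcli, assumes the example device name from above, and assumes the standard 512-byte block size):

import subprocess

# Extract the "Blocks Written" counter from the esxcli output.
def blocks_written(device):
    out = subprocess.check_output(
        ["esxcli", "storage", "core", "device", "stats", "get", "-d", device],
        universal_newlines=True)
    for line in out.splitlines():
        if "Blocks Written:" in line:
            return int(line.split(":", 1)[1])
    raise ValueError("Blocks Written counter not found")

device = "naa.6003005700ebd3f016bc34ab0d2446b7"   # example device name
print("WrittenBytes:", blocks_written(device) * 512)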
Normalization
Windows as well as Linux and ESX report the number of written bytes since the last reboot. Therefore two measurements have to be made: one at the start and one at the end of the desired sampling period. Proceed as follows:

1. Start your write load tests.
2. Read the number of written bytes ("count1") and record the start time.
3. Let your test run for a defined amount of time ("measurement time").
4. Read the number of written bytes ("count2") and record the end time.
5. Stop your write load tests.

The number of written bytes within the measurement time is the difference of count2 and count1:

WrittenBytes = count2 – count1

The average WriteLoad per second is:

WriteLoad [bytes/s] = WrittenBytes / <measurement time>

Period                Seconds
1 hour                3,600
1 day                 86,400
1 week                604,800
1 month (1/12 year)   2,628,000
1 year (365 days)     31,536,000
Example:
A 1.5 hour (= 5,400 seconds) test run results in:

count1 = 103,952,067,584
count2 = 127,845,953,412

WrittenBytes = count2 – count1
             = 23,893,885,828
WriteLoad    = WrittenBytes / <measurement time>
             = 23893885828 / 5400
             = 4,424,794 [bytes/s]
             = ~4.4 [MB/s]

The following examples all assume this 4.4 MB/s write load.
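As a minimal sketch of the normalization step (assuming one of the byte-counter helpers sketched in the previous sections):

import time

# Normalize two counter samples to an average write load in bytes/s.
def measure_write_load(get_bytes_written, measurement_time):
    count1 = get_bytes_written()
    time.sleep(measurement_time)
    count2 = get_bytes_written()
    return (count2 - count1) / measurement_time

# Example: a 1.5 hour (5400 s) run, reusing the Linux helper sketched above:
# load = measure_write_load(lambda: sectors_written("sdb") * 512, 5400)
# print("WriteLoad: %.1f MB/s" % (load / 1e6))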
Mapping to the Target Architecture
Depending on the SSD's target architecture, the logical drive's write load has to be mapped to the physical drives' expected write load. This chapter provides some help in deriving the write load of physical disks which are, for example, parts of RAID volumes.
Non-RAID
If the logical drive refers to a non-RAID drive, then the values determined in the previous chapters can be
used directly.
WriteLoadPhyDriv = WriteLoadLogDriv

Example:
A logical drive's WriteLoad of 4.4 MB/s was measured:

WriteLoadPhyDriv = 4.4 [MB/s]
RAID
If RAID controllers are employed, the measured values refer to logical disks as defined by the RAID controller providing the logical disk. If the organization of physical disks into logical disks in the RAID controller is known, a rough estimate of the write accesses to the physical disks can be determined by taking the RAID properties into account. As a physical drive may belong to more than one logical drive and a logical drive may be distributed over several physical drives, a (cumulative) write access counter must be calculated for each physical drive. The share of write load it receives from the logical drives must be added to these counters.
The following list assumes that
- logical disks are evenly distributed over physical disks, i.e. statistically each write access to a logical disk has an equal chance of causing a write access to any one of the physical disks, and
- write accesses are evenly distributed over logical disks, i.e. each block of a logical disk has an equal chance of being written to.
RAID 0 (striping)
The logical drive’s WriteLoad divided by the number of physical drives (#Drives) is added to each physical
drive's total WriteLoad.
WriteLoadPhyDriv = WriteLoadLogDriv / #Drives

Example:
A RAID 0 consisting of 4 drives (#Drives = 4):

WriteLoadPhyDriv0..3 = 4.4 / 4 [MB/s]
                     = 1.1 [MB/s]
RAID 1 (mirroring)
As all data is copied to both disks, the logical drive's WriteLoad is added to each physical drive's total WriteLoad.

WriteLoadPhyDriv = WriteLoadLogDriv

Example:
A RAID 1 consisting of 2 drives:

WriteLoadPhyDriv0,1 = 4.4 [MB/s]
RAID 5 (block-level striping with distributed parity)
The logical drive's WriteLoad is multiplied by 2 (one write access for the data block and one write access for the parity block) and then divided by the number of drives. The result is then added to each physical drive's total WriteLoad.

WriteLoadPhyDriv = WriteLoadLogDriv × 2 / #Drives

Example:
A RAID 5 consisting of 5 drives:

WriteLoadPhyDriv0..4 = 4.4 × 2 / 5 [MB/s]
                     = 1.76 [MB/s]
RAID 6 (block-level striping with 2 distributed parities)
This RAID level is similar to RAID 5 but with 2 parity blocks. The logical drive's WriteLoad is therefore multiplied by 3 (one write access for the data block and two write accesses for the parity blocks) and then divided by the number of drives. The result is then added to each physical drive's total WriteLoad.

WriteLoadPhyDriv = WriteLoadLogDriv × 3 / #Drives

Example:
A RAID 6 consisting of 5 drives:

WriteLoadPhyDriv0..4 = 4.4 × 3 / 5 [MB/s]
                     = 2.64 [MB/s]
RAID 10 (mirrored striping array)
As all stripe set data is copied to 2 disks each, the logical drive's WriteLoad is multiplied by 2 and then divided by the total number of drives. The result is added to each physical drive's total WriteLoad in the entire RAID 10 set.

WriteLoadPhyDriv = WriteLoadLogDriv × 2 / #Drives

Example:
A RAID 10 built of 2 RAID 1 sets with 2 disks each (4 disks in total):

WriteLoadPhyDriv0..1,2..3 = 4.4 × 2 / 4 [MB/s]
                          = 2.2 [MB/s]
RAID 1E (striped mirroring)
As all stripe set data is copied to 2 disks each, the WriteLoad is multiplied by 2 and then divided by the number of drives. The result is added to each physical drive's total WriteLoad in the entire RAID 1E set.

WriteLoadPhyDriv = WriteLoadLogDriv × 2 / #Drives

Example:
A RAID 1E consisting of 3 drives:

WriteLoadPhyDriv0..2 = 4.4 × 2 / 3 [MB/s]
                     = ~2.93 [MB/s]
RAID 50 (striped array striped across RAID 5)
This RAID level is a combination of RAID 5 with RAID 0. The logical drive's WriteLoad is multiplied by 2 (one write access for the data block and one write access for the parity block) and then divided by the number of drives. The result is added to each physical drive's total WriteLoad.

WriteLoadPhyDriv = WriteLoadLogDriv × 2 / #Drives

Example:
A RAID 50 built of 2 RAID 5 sets with 3 disks each (6 disks in total):

WriteLoadPhyDriv0..2,3..5 = 4.4 × 2 / 6 [MB/s]
                          = ~1.47 [MB/s]
RAID 60 (striped RAID 6 sets)
This RAID level is a combination of RAID 6 with RAID 0. The logical drive's WriteLoad is multiplied by 3 (one write access for the data block and two write accesses for the parity blocks) and then divided by the number of drives. The result is added to each physical drive's total WriteLoad.

WriteLoadPhyDriv = WriteLoadLogDriv × 3 / #Drives

Example:
A RAID 60 built of 2 RAID 6 sets with 5 disks each (10 disks in total):

WriteLoadPhyDriv0..4,5..9 = 4.4 × 3 / 10 [MB/s]
                          = 1.32 [MB/s]
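The per-level formulas above differ only in the write multiplier. As a minimal sketch (illustrative; the mapping simply restates the formulas of this chapter):

# Map a logical drive's write load to the expected per-drive write load.
WRITE_MULTIPLIER = {
    "RAID0": 1,    # striping only
    "RAID5": 2,    # data block + one parity block
    "RAID6": 3,    # data block + two parity blocks
    "RAID10": 2,   # every stripe block mirrored once
    "RAID1E": 2,   # every stripe block mirrored once
    "RAID50": 2,   # RAID 5 sets striped
    "RAID60": 3,   # RAID 6 sets striped
}

def per_drive_write_load(level, logical_load_mb_s, n_drives):
    if level == "RAID1":   # full copy on every drive
        return logical_load_mb_s
    return logical_load_mb_s * WRITE_MULTIPLIER[level] / n_drives

# Reproduces the examples above:
print(per_drive_write_load("RAID5", 4.4, 5))    # 1.76
print(per_drive_write_load("RAID60", 4.4, 10))  # 1.32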
Estimate the Required Write Endurance
A very important criterion when selecting the proper SSD technology for a given application is the Write Endurance, which is typically specified in units of device writes per day (DWPD) over a given warranty period. This chapter describes how to compute the estimated WriteEndurance from the measured WriteLoads.
Cumulate Logical Drives' Contributions
As a physical drive may belong to more than one logical drive, a (cumulative) write access counter must be calculated for each physical drive. The share of write load it receives from each logical drive must be added to these counters.
To get the physical disk's total WriteLoad, sum up all logical drives' contributions:

WriteLoadtotal = Σ WriteLoadcontribution,1..N
Example:
Three logical drives (diskA, diskB, diskC), each configured as RAID 5, share the same 5 physical drives. The corresponding WriteLoads have been estimated as:

diskA: WriteLoadPhyDriv0..4 = 4.4 [MB/s]
diskB: WriteLoadPhyDriv0..4 = 0.7 [MB/s]
diskC: WriteLoadPhyDriv0..4 = 1.2 [MB/s]

Summing up these values yields a cumulative WriteLoad of:

total: WriteLoadPhyDriv0..4 = 4.4 + 0.7 + 1.2 [MB/s]
                            = 6.3 [MB/s]
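As a one-line sketch (values taken from the example above):

# Sum the logical drives' contributions to one physical drive's write load.
contributions = {"diskA": 4.4, "diskB": 0.7, "diskC": 1.2}   # [MB/s]
print("total: %.1f MB/s" % sum(contributions.values()))      # 6.3 MB/s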
Convert into Device Writes per Day
The WriteEndurance is typically specified in units of device writes per day (DWPD). It is defined as the amount of writing (PetaBytesWritten) that can be sustained over the entire product lifetime (DaysPerLife), normalized to the drive's capacity (Capacity):

DWPD = ( PetaBytesWritten / DaysPerLife ) / Capacity

Background:
With a given flash device endurance (FlashEndurance, in number of program/erase cycles) and a product lifetime (DaysPerLife), the WriteEndurance (DWPD) can be calculated as

DWPD = FlashEndurance × (1 + OP) / ( DaysPerLife × WA )

with

OP (Over Provisioning):
The amount of spare cells in %, expressed as the ratio of the flash's physical capacity to the declared logical capacity, minus one. Typical values are in the range of 30% to 150%.

WA (Write Amplification):
A measure of the extra writing required for garbage collection and other flash management functions, expressed as the ratio of the amount of data written to flash to the amount of data written by the host. Typical values are in the range of 1.1 (sequential writes) up to 3.3 (random writes).

The parameter WriteEndurance (DWPD) integrates these technology-dependent values into one single device-specific value.
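As a rough plausibility check of this formula (all input values below are illustrative assumptions, not specifications of any particular drive):

# Illustrative DWPD calculation; all inputs are assumed example values.
flash_endurance = 10000    # PE cycles (assumed MLC-class value)
op = 0.30                  # 30 % over-provisioning
wa = 3.0                   # write amplification under random writes
days_per_life = 5 * 365    # 5-year product lifetime

dwpd = flash_endurance * (1 + op) / (days_per_life * wa)
print("DWPD = %.1f" % dwpd)   # ~2.4 device writes per day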
With a given WriteLoad and device Capacity, the required WriteEndurance is:

WriteEndurance = WriteLoad × <seconds-per-day> / Capacity

This result can be used to select an appropriate Endurance Class for your application.

Example:
100 GB SSDs are to be used as physical drives. The cumulative WriteLoad was estimated as 6.3 MB/s.

WriteEndurance = 6.3 [MB/s] × 86400 [s] / 100 [GB]
               = 6.3 [MB/s] × 86400 [s] / (100 × 1000 [MB])
               = ~5.4 [DWPD]
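The same conversion as a minimal sketch (assuming the 1 GB = 1000 MB convention used in the example above):

# Required write endurance in DWPD from write load [MB/s] and capacity [GB].
def required_dwpd(write_load_mb_s, capacity_gb):
    seconds_per_day = 86400
    return write_load_mb_s * seconds_per_day / (capacity_gb * 1000)

print("%.1f DWPD" % required_dwpd(6.3, 100))   # ~5.4, as in the example above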
According to Fujitsu Technology Solutions' Endurance Class definition (see the table below), an SSD of the "Mainstream Endurance" class should be sufficient.

Endurance Class: Value Endurance (Read-intensive)
DWPD: < approx. 5, usually < 0.3
Description/Suitability: SSDs of the lower price category, predominantly with MLC flash memory. Over a period of 5 years these usually manage an average write load of below 3 MiB/s. They are suited to load situations that are characterized by a low write intensity.
Examples: System drive, streaming services

Endurance Class: Mainstream Endurance
DWPD: approx. 5 – approx. 15, usually approx. 10
Description/Suitability: SSDs of the medium price category, PCIe-SSDs (with their high prices due to exceptional performance) and DOM, predominantly with MLC flash memory. Over a period of 5 years these usually manage an average write load in the double-digit MiB/s range. They are suited to load situations that are characterized by a moderate write intensity.
Examples (SSDs): File servers, web servers
Examples (PCIe-SSDs): Virtual servers, databases, mail servers
Examples (DOM): System drive

Endurance Class: High Endurance
DWPD: > approx. 15, often approx. 50
Description/Suitability: SSDs of the upper price category, predominantly with SLC flash memory. Over a period of 5 years these usually manage an average write load in the two- to three-digit MiB/s range. They are suited to load situations that are characterized by a high write intensity.
Examples: Virtual servers, databases, mail servers
Hint:
The required WriteEndurance depends on the WriteLoad of the physical drives. For RAID systems this per-drive load can be reduced by using a larger number of (smaller) physical drives.

Example:
A RAID 5 consisting of 3 drives yields:

WriteLoadPhyDriv0..2 = 4.4 × 2 / 3 [MB/s]
                     = ~2.9 [MB/s]

Using 5 (half-sized) drives yields:

WriteLoadPhyDriv0..4 = 4.4 × 2 / 5 [MB/s]
                     = 1.76 [MB/s]
SSD models that are available for PRIMERGY servers with their DWPD values:

Name                                         Interface      Capacity  Write Endurance  DWPD (rounded  MiB/s until PBW 1)  Form factor
                                                            [GB]      [PBW] 1)         down)          is reached
PCIe-SSD 1.2TB MLC                           PCIe Gen2 x4   1200      17               7              103                 Low profile
PCIe-SSD 785GB MLC                           PCIe Gen2 x4   785       11               7              67                  Low profile
PCIe-SSD 640GB MLC 2)                        PCIe Gen1 x4   640       10               8              60                  Low profile
PCIe-SSD 365GB MLC                           PCIe Gen2 x4   365       4                6              24                  Low profile
PCIe-SSD 320GB MLC 2)                        PCIe Gen1 x4   320       4                6              24                  Low profile
SSD SAS 12G 1.6TB Main 2.5'' H-P EP          SAS 12G        1600      29.2             10             176                 2.5"
SSD SAS 12G 800GB Main 2.5'' H-P EP          SAS 12G        800       14.6             10             88                  2.5"
SSD SAS 12G 400GB Main 2.5'' H-P EP          SAS 12G        400       7.3              10             44                  2.5"
SSD SAS 12G 200GB Main 2.5'' H-P EP          SAS 12G        200       3.65             10             22                  2.5"
SSD SAS 6G 400GB SLC HOT P 2.5" EP PERF 2)   SAS 6G         400       35               47             212                 2.5"
SSD SAS 6G 400GB MLC HOT PL 2.5" EP PERF 2)  SAS 6G         400       7.5              10             45                  2.5"
SSD SAS 6G 200GB SLC HOT P 2.5" EP PERF 2)   SAS 6G         200       18               49             109                 2.5"
SSD SAS 6G 200GB MLC HOT PL 2.5" EP PERF 2)  SAS 6G         200       3.75             10             23                  2.5"
SSD SAS 6G 100GB SLC HOT P 2.5" EP PERF 2)   SAS 6G         100       9                49             54                  2.5"
SSD SAS 6G 100GB MLC HOT PL 2.5" EP PERF 2)  SAS 6G         100       1.875            10             11                  2.5"
SSD SATA 6G 800GB Main 2.5'' H-P EP          SATA 6G        800       14.6             10             88                  2.5"
SSD SATA 6G 400GB Main 2.5'' H-P EP          SATA 6G        400       7.3              10             44                  2.5"
SSD SATA 6G 400GB MLC HOT P 2.5" EP MAIN 2)  SATA 6G        400       7.5              10             45                  2.5"
SSD SATA 6G 200GB Main 2.5'' H-P EP          SATA 6G        200       3.65             10             22                  2.5"
SSD SATA 6G 200GB MLC HOT P 2.5" EP MAIN 2)  SATA 6G        200       3.75             10             23                  2.5"
SSD SATA 6G 100GB Main 2.5'' H-P EP          SATA 6G        100       1.825            10             11                  2.5"
SSD SATA 6G 100GB MLC HOT P 2.5" EP MAIN 2)  SATA 6G        100       1.875            10             11                  2.5"
SSD SATA 3G 64GB SLC HOT P 2.5" EP MAIN 2)   SATA 3G        64        2                17             12                  2.5"
SSD SATA 3G 32GB SLC HOT P 2.5" EP MAIN 2)   SATA 3G        32        1                17             6                   2.5"
DOM SATA 3G 64GB Main N H-P                  SATA 3G        64        1.6425           14             9                   DOM 3)

1) PBW = Petabytes Written
2) EOL (end of life)
3) DOM: Disk on module
Literature
PRIMERGY Systems
http://primergy.com/
PRIMERGY Components
This White Paper:
http://docs.ts.fujitsu.com/dl.aspx?id=3c0db973-3406-476c-a301-1b1ed3b1a9ad
http://docs.ts.fujitsu.com/dl.aspx?id=8325f1fc-f98c-4467-a3aa-98f6c633143f
Solid State Drives – FAQ:
http://docs.ts.fujitsu.com/dl.aspx?id=78858d6c-4c0f-479a-8ceb-705fe1938f4e
RAID Controller Performance
http://docs.ts.fujitsu.com/dl.aspx?id=e2489893-cab7-44f6-bff2-7aeea97c5aef
PRIMERGY Performance
http://www.fujitsu.com/fts/products/computing/servers/primergy/benchmarks/
PRIMERGY and Operating Systems
http://docs.ts.fujitsu.com/dl.aspx?id=d4ebd846-aa0c-478b-8f58-4cfbf3230473
Contact
FUJITSU
Website: http://www.fujitsu.com/
PRIMERGY Product Marketing
mailto:Primergy-PM@ts.fujitsu.com
© Copyright 2013-2014 Fujitsu Technology Solutions. Fujitsu and the Fujitsu logo are trademarks or registered trademarks of Fujitsu Limited in Japan and
other countries. Other company, product and service names may be trademarks or registered trademarks of their respective owners. Technical data subject
to modification and delivery subject to availability. Any liability that the data and illustrations are complete, actual or correct is excluded. Designations may
be trademarks and/or copyrights of the respective manufacturer, the use of which by third parties for their own purposes may infringe the rights of such
owner.
For further information see http://www.fujitsu.com/fts/resources/navigation/terms-of-use.html
2014-12-18 WW EN