Improving Oracle Database performance with HPE Persistent Memory on HPE ProLiant DL380 Gen9
Technical white paper
Contents
Executive summary
Solution overview
HPE Persistent Memory
HPE Linux SDK for NVDIMM
Solution components
Software
Best practices and configuration guidance for the Oracle Database solution with NVDIMM
Capacity and sizing
Workload description
Workload results
Analysis and recommendations
Summary
Implementing a proof-of-concept
Appendix A: Bill of materials
Appendix B: Memory configuration
Appendix C: Oracle configuration parameters
Appendix D: Linux kernel configuration
Appendix E: udev rules
Appendix F: Create redundant redo logs
Resources and additional links
Executive summary
The demands of database implementations continue to escalate: faster transaction processing speeds, scalable capacity, and increased flexibility
are required to meet the needs of today’s business. At the same time, enterprises are looking for cost-effective, open-architecture, industry-standard solutions that avoid vendor lock-in and the high price tag attached to proprietary solutions.
HPE Persistent Memory products deliver both the performance of memory and the persistence of traditional storage. Customers face increasing
pressure to make faster business decisions. The HPE Persistent Memory module delivers outstanding performance to put data to work more
quickly in your business. It is ideal for accelerating database and analytics workloads.
This Reference Configuration demonstrates the ease of configuring HPE 8GB NVDIMM (Non-Volatile DIMM) modules, along with the resulting
performance gains achievable in an Oracle Database environment. The benefits include significantly improved throughput, better resource
utilization, and reduced Oracle licensing costs. Oracle Database throughput increased by a factor of two to four times when using NVDIMMs as
compared to 15K RPM SAS drives for Oracle redo logs. In addition, using NVDIMMs for Oracle redo logs was three times more cost effective than
an equivalent number of SSDs, and a factor of two to four times faster than 15K RPM SAS drives with only a 40% increase in price.
Target audience: This Hewlett Packard Enterprise white paper is designed for IT professionals and database administrators who use, program,
manage, or administer large databases that require high availability and high performance. Specifically, this information is intended for those who
evaluate, recommend, or design new high performance architectures to support mission critical databases.
This white paper describes testing completed in May 2016.
Document purpose: The purpose of this document is to describe a Reference Configuration, highlighting recognizable benefits to technical
audiences.
Note
This testing made use of the kernel and libraries provided with the HPE Linux® SDK for NVDIMM. The SDK is not for production use. The intent
of this paper is to provide a proof point of the performance gains possible when using NVDIMM technology with an Oracle database.
Solution overview
HPE Persistent Memory
To gain a real competitive advantage, you need to enable faster business decisions. The HPE Persistent Memory module delivers outstanding
performance to put data to work more quickly in your business. HPE Persistent Memory offerings are not just new hardware technology, but a
complete software ecosystem designed to work with today’s applications and workloads, including databases and analytics workloads.
The HPE 8GB NVDIMM module is the first offering in the HPE Persistent Memory product category. It delivers the performance of memory with
the resiliency you have come to expect from HPE storage technology. Customers can have confidence that business-critical data is safe because
HPE utilizes higher endurance DRAM and components that help verify data is moved to non-volatile technology in the event of a power loss.
Figure 1. HPE 8GB NVDIMM module
HPE Persistent Memory offers the following features (note that the performance statements included here are based upon HPE internal lab
testing that was separate from the Oracle database test results documented in the “Workload results” section of this paper):
Turbo-charged performance delivering up to 4x faster transaction performance
HPE ProLiant DL360 Gen9 and DL380 Gen9 servers equipped with HPE 8GB NVDIMM modules increase performance for write-intensive workloads, delivering up to 2x+ faster database logging performance.1 The NVDIMM modules are designed to speed customer application workloads, delivering up to 4x+ faster OLTP replication functions,2 enabling faster workloads.
Technology designed to make your business data resilient
The HPE Persistent Memory modules include a flash component plus an HPE Smart Storage Battery that provides you with a persistent storage capability at memory speeds without the data volatility of memory.3 Active data runs on the DRAM component of the NVDIMM, which not only provides outstanding performance but also offers greater endurance than traditional storage media.4
Solutions designed around your business workloads
HPE Persistent Memory is designed around industry applications and workloads to deliver the performance of memory with the persistence of
storage. A complete hardware and software ecosystem provides a comprehensive persistent memory solution for your business.
HPE Linux SDK for NVDIMM
The Linux Software Development Kit is a collection of kernel modules and OS components that can be installed over an existing OS installation
to provide base support for Type-N NVDIMM technology. It is meant to allow early adopters and application developers to have access to
NVDIMM technology and have an environment with which to experiment and do development. The Linux SDK for NVDIMM-N should only be
used on HPE hardware with Non-Volatile Memory (NVDIMM-N) DIMMs installed.
Note
The SDK is not for production use. Its support model is very limited and the software is being provided “as-is” and considered experimental.
Solution components
This white paper provides configuration and performance information for 8GB NVDIMMs in an Oracle 12c Database environment. The tests were
run on an HPE ProLiant DL380 Gen9 server running Red Hat® Enterprise Linux 7.2 plus the Linux 4.2 kernel available with the HPE Linux SDK
for NVDIMM. The HPE ProLiant DL380 Gen9 server was configured with the following components:
• Two 16-core Intel® Xeon® E5-2683 v4 processors at 2.10 GHz
• 256GB memory (8 x 32GB RDIMMs)
• 16 x HPE 8GB NVDIMM modules
• 24 x 400GB 12G SAS write-intensive SSD drives
• 16 x 450GB 15K RPM SAS drives (swapped out 16 SSDs for SAS testing)
1. Internal HPE lab testing on an HPE ProLiant DL380 Gen9 E5-2600 v4 with HPE 8GB NVDIMM-N, December 2015.
2. Internal HPE lab testing on an HPE ProLiant DL380 Gen9 E5-2600 v4 with HPE 8GB NVDIMM-N, December 2015.
3. Based on the NVDIMM utilizing NAND Flash as a persistent store and the HPE Smart Storage Battery providing the backup power source to move data from DRAM to NAND Flash.
4. Endurance comparison based on comparing Program Erase Cycles of DRAM to Program Erase Cycles of NAND Flash. DRAM can have up to 10 trillion more program erase cycles than NAND Flash.
Figure 2 shows the location of the NVDIMMs (outlined in red) and RDIMMS, along with the disk configuration of the HPE ProLiant DL380 Gen9
server.
Figure 2. Diagram of memory and disk layout for HPE ProLiant DL380 Gen9 server
Software
• Red Hat Enterprise Linux version 7.2 plus the Linux 4.2 kernel and packages for the NVM library (libpmem and libpmem-devel) provided with
the HPE Linux SDK for NVDIMM
• Oracle 12c R1 Enterprise Edition (12.1.0.2)
Best practices and configuration guidance for the Oracle Database solution with NVDIMM
HPE ProLiant DL380 Gen9 BIOS
• Hyper-Threading—Enabled
• Intel Turbo Boost—Enabled
• HPE Power Profile—Maximum Performance
NVDIMM configuration best practices
• The NVDIMMs must be configured according to the population rules outlined in the QuickSpecs and the HPE 8GB NVDIMM User Guide for
HPE ProLiant Gen9 Servers. See Appendix B for the configuration used for this testing.
• Balance the total memory capacity across all processors.
• Only RDIMMs can be mixed with NVDIMMs. No other memory types may be used when NVDIMMs are present.
• Interleave the NVDIMM modules (via BIOS settings) to create one block device on each socket. See the HPE 8GB NVDIMM User Guide for HPE ProLiant Gen9 Servers for interleaving instructions. A sketch for verifying the resulting devices follows this list.
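The following is a minimal sketch, assuming the SDK kernel exposes the interleaved regions as /dev/pmem0 and /dev/pmem1 (the device names used later in this paper), of how the devices can be checked from the OS after the BIOS interleaving step:

# List the persistent memory block devices created by interleaving.
# With sixteen 8GB NVDIMMs interleaved per socket, two devices of roughly 64GB each are expected.
lsblk /dev/pmem0 /dev/pmem1

# Report the size of each interleaved region in bytes.
blockdev --getsize64 /dev/pmem0
blockdev --getsize64 /dev/pmem1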
Storage configuration best practices
• Database, tablespaces and indexes on SSDs configured in one RAID 5 LUN
• Place redo logs on their own RAID 1 LUNs on the fastest media available. For the persistent memory configuration, a sketch of the volume layout follows this list.
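The device-mapper names in the udev rules of Appendix E (redo--pmem0-lvol0 and redo--pmem1-lvol0) suggest that an LVM logical volume was created on each persistent memory block device and presented to Oracle ASM. The following is an illustrative sketch only; the volume group names are inferred from Appendix E rather than stated in this paper:

# Create one volume group and one logical volume per interleaved NVDIMM device.
pvcreate /dev/pmem0
pvcreate /dev/pmem1
vgcreate redo-pmem0 /dev/pmem0
vgcreate redo-pmem1 /dev/pmem1
lvcreate -l 100%FREE redo-pmem0   # default LV name lvol0 -> /dev/mapper/redo--pmem0-lvol0
lvcreate -l 100%FREE redo-pmem1   # default LV name lvol0 -> /dev/mapper/redo--pmem1-lvol0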
Oracle configuration best practices
For specific Oracle database parameter settings, see Appendix C.
• Disable RHEL automatic NUMA balancing (see Appendix D).
• Disable automatic memory management if applicable.
• Set buffer cache memory size large enough per your implementation to avoid physical reads.
• Create two redo log file spaces large enough to minimize log file switching and reduce log file waits.5
• Create an undo tablespace of 300GB.6
• Configure huge pages (see Appendix D) and set Oracle to use only huge pages. A quick runtime check of these settings is sketched below.
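As a minimal sanity check, the kernel settings from Appendix D can be verified before starting the instance; the expected values shown here reflect the configuration used in this testing:

# Automatic NUMA balancing should report 0 (disabled).
cat /proc/sys/kernel/numa_balancing

# Huge pages reserved for the Oracle SGA (77572 x 2MB pages in this configuration).
grep -i hugepages /proc/meminfo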
Capacity and sizing
Workload description
The Oracle workload was tested using HammerDB, an open-source tool. The tool implements an OLTP-type workload (60 percent read and 40
percent write) with small I/O sizes of a random nature. The transaction results have been normalized and are used to compare test
configurations. Other metrics measured during the workload come from the operating system and/or standard Oracle Automatic Workload
Repository (AWR) statistics reports.
The OLTP test, performed on a 1TB database, was both highly CPU and moderately I/O intensive. The environment was tuned for maximum user
transactions. After the database was tuned, the transactions were recorded at different connection levels. Because customer workloads vary so
much in characteristics, the measurement was made with a focus on maximum transactions.
Oracle Enterprise Database version 12.1.0.2 was used in this test configuration.
The database used several different Oracle Automatic Storage Management (ASM) disk groups with a combination of RAID 5 and multiple RAID 1 LUNs. The single RAID 5 SSD LUN was used for tablespaces and indexes. Eight RAID 1 LUNs were used for the redo logs in the tests with SAS drives and the tests with SSDs. One 64GB NVDIMM block device was used for each redo log in the persistent memory tests.
We used several different Oracle connection counts for our tests, and found the optimal number to be 125 connections for the NVDIMM
configuration.
5. It is important that the size and quantity of redo log file spaces be large enough so that a constraint is not encountered when closing one redo log and opening another, such that the new redo log has not had enough time to be flushed since it was last used.
6. This testing used an undo tablespace size of 300GB. The size should be large enough that it does not cause queries to throw errors.
Workload results
Three configurations were tested with the Oracle redo logs placed on:
1. 15K RPM SAS drives
2. Write-intensive SSD drives
3. NVDIMM block devices
Figure 3 shows the results of the tests in terms of relative throughput (transactions per minute) achieved with each configuration. The number of Oracle connections is the count of users that drove the test. There was no think time, which means that each user represented hundreds of actual users. The results have been normalized, with the throughput for 25 connections with SAS drives set to 100%. All other data points are relative to the initial result.
The data shows that:
• The usage of NVDIMM devices for Oracle redo logs provided 2.3 to 4.6 times better throughput than 15K SAS drives.
• NVDIMMs also provided up to 15% better throughput than write-intensive SSD drives.
[Figure: Oracle DB throughput, SAS vs SSD vs NVDIMM. Relative throughput versus number of Oracle connections (25, 50, 75, 100, 125) for the SAS, SSD, and NVDIMM configurations.]
Figure 3. Oracle Database throughput for various disk configurations
The configuration with SAS drives was limited by the log file sync time, and CPU utilization was under 40%. With SSDs, the log file sync time was
reduced significantly (by almost a factor of four), but still limited performance at the higher connection counts. With NVDIMMs, the I/O
bottleneck was removed and processing power was the limiting factor, with CPU utilization reaching 95%. Figure 4 shows the log file sync time for
125 connections, with NVDIMMs yielding a factor of seven times improvement over SAS drives and a factor of two times over SSDs. It also shows
how much CPU utilization improves by removing the I/O bottleneck for the redo logs.
[Figure: Average log file sync time vs CPU utilization. Average log file sync times of 6.93 ms (SAS), 1.82 ms (SSD), and 0.94 ms (NVDIMM), plotted against the CPU utilization achieved with each configuration.]
Figure 4. Average log file sync times
The substantial throughput gains achieved by placing the redo logs on NVDIMMs also allow reducing the number of cores required. Tests run with half of the cores in the server disabled7 demonstrated that an NVDIMM configuration can outperform a SAS configuration with the original
number of cores. Figure 5 shows the throughput achieved with an NVDIMM configuration with only 16 cores (8 cores per processor) as
compared to a SAS configuration with 32 cores. Note that because the NVDIMM configuration is CPU bound, reducing the number of cores
means that performance peaks at a lower number of Oracle connections, but the throughput at 75 connections is about double that of the SAS
drive configuration. This means that usage of NVDIMMs allows Oracle licensing costs to be cut in half while achieving higher throughput than the
configuration with 15K RPM SAS drives.
[Figure: Oracle DB throughput, SAS with 32 cores vs NVDIMM with 16 cores. Relative throughput and CPU utilization versus number of Oracle connections (25 to 125). Callout: NVDIMM with 16 cores performed better than SAS drives with 32 cores, reducing Oracle licenses by 50%.]
Figure 5. Throughput for SAS configuration with 32 cores versus NVDIMM configuration with 16 cores
In addition, in order to protect against the loss of a redo log on an NVDIMM module, the redo logs may be configured in redundant groups. With
sixteen 8GB NVDIMMs, four 32GB redo logs may be configured, with two logs on each of the two /dev/pmem devices. The two redo logs that are
placed in the same group should be on separate /dev/pmem devices. See Appendix F for the configuration details. Test results for 125 Oracle
connections with the redundant redo logs lowered Oracle throughput (TPM) by only 1% as compared to using two redo logs with no redundancy.
Finally, tests were conducted with 8GB redundant redo logs on four 8GB NVDIMMs to determine the impact of using fewer NVDIMMs. Oracle
throughput was within 3% of the results with 16 NVDIMMs, but as expected, the log file switch frequency was higher (every 36 seconds as
compared to two minutes for 32GB logs and four minutes for 64GB logs). The more frequent log file switches caused throughput to be more
variable across the run, and also caused more variation in CPU utilization. So it is possible to achieve similar throughput with fewer NVDIMMs, but
performance may be less predictable.
Analysis and recommendations
The testing has demonstrated the performance benefits of configuring NVDIMM modules for an Oracle Database environment. NVDIMMs can be
easily configured to contain Oracle redo log files with up to 64GB of capacity for each persistent memory device on a per socket basis. The
following advantages can be achieved with this solution:
Removes the bottleneck of redo log write times.
Improves throughput and resource utilization: Higher transactions per minute and higher CPU utilization were achieved when using
NVDIMMs as compared to SAS drives and SSDs.
Lessens the impact of log file switches: Even though log file switches occurred multiple times during the tests, there was no impact on
performance due to the fast write time to the NVDIMM devices.
7. Note that these tests were run on the same server as the original tests, with half of the cores disabled via a BIOS setting.
Reduces Oracle licensing costs: With NVDIMMs, fewer cores are required to achieve similar or better throughput.
Cost-effective solution as compared to SAS drives and SSDs: With a price of $899 per NVDIMM module,8 they can be a cost-effective means of achieving better performance. For this testing, replacing 15K RPM SAS drives with the same number of NVDIMMs improved performance by a factor of two to four times with only a 40% increase in price ($637 per SAS drive).9 Similarly, using NVDIMMs was three times more cost effective than an equivalent number of write-intensive SSD drives with a list price of $3050 per drive (sixteen NVDIMMs cost $14,384 versus $48,800 for sixteen write-intensive SSDs).10
Use redundant redo logs to protect against data loss: Create two redo logs on separate persistent memory devices and place them in the
same redo log group to provide protection against the loss of a single NVDIMM.
Summary
The availability of HPE 8GB NVDIMM modules significantly improves Oracle Database performance by combining the speed of memory access
time with the data persistence of storage. The benefits include improved throughput, increased resource utilization, and reduced Oracle licensing
costs. In addition, the cost of HPE 8GB NVDIMM modules makes them an attractive solution as compared to SSDs, while providing two to four times better throughput than similarly priced 15K RPM SAS drives.
Implementing a proof-of-concept
As a matter of best practice for all deployments, HPE recommends implementing a proof-of-concept using a test environment that matches as
closely as possible the planned production environment. In this way, appropriate performance and scalability characterizations can be obtained.
For help with a proof-of-concept, contact an HPE Services representative (hpe.com/us/en/services/consulting.html) or your HPE partner.
8. Price as of May 26, 2016 for HPE 8GB NVDIMM: http://www8.hp.com/us/en/products/server-memory/product-detail.html?oid=1008830324
9. Price as of May 26, 2016 for HPE 450GB 12G SAS 15K RPM SFF drive: http://www8.hp.com/us/en/products/oas/product-detail.html?oid=6826121
10. Price as of May 26, 2016 for HPE 400GB 12G SAS write-intensive SFF Solid State Drive: http://www8.hp.com/us/en/products/oas/product-detail.html?oid=7605795
Appendix A: Bill of materials
Note
Part numbers are at time of testing and subject to change. The bill of materials does not include complete support options or rack and power
requirements. If you have questions regarding ordering, please consult with your HPE Reseller or HPE Sales Representative for more details.
hpe.com/us/en/services/consulting.html
Table 1. Bill of materials

QTY   PART NUMBER   DESCRIPTION
HPE ProLiant DL380 Gen9 server
1     767032-B21    HPE DL380 Gen9 24SFF CTO Server
1     817953-L21    HPE DL380 Gen9 E5-2683v4 FIO Kit
1     817953-B21    HPE DL380 Gen9 E5-2683v4 Kit
8     728629-B21    HPE 32GB 2Rx4 PC4-2133P-R Kit
16    782692-B21    HPE 8GB NVDIMM Single Rank x4 DDR4-2133 Module
24    802582-B21    HPE 400GB 12G SAS WI 2.5in SC SSD
16    759210-B21    HPE 450GB 12G SAS 15K RPM SFF Hard Drive11
1     761872-B21    HPE Smart Array P440/4G FIO Controller
1     727250-B21    HPE 12Gb DL380 Gen9 SAS Expander Card
1     779799-B21    HPE Ethernet 10Gb 2P 546FLR-SFP+ Adptr
1     733660-B21    HPE 2U SFF Easy Install Rail Kit
2     720478-B21    HPE 500W FS Plat Ht Plg Pwr Supply Kit

11. Sixteen SSDs were removed in order to use the SAS hard drives.
Appendix B: Memory configuration
Table 2 shows the memory configuration for the HPE ProLiant DL380 Gen9 server, including which slots were populated with NVDIMMs and
which ones had RDIMMs. For the memory population rules, see the QuickSpecs and the HPE 8GB NVDIMM User Guide for HPE ProLiant Gen9
Servers.
Table 2. Memory configuration
Appendix C: Oracle configuration parameters
open_cursors=3000
pga_aggregate_target=51546M
processes=3000
result_cache_max_size=794304K
sga_target=155136M
_high_priority_processes='VKTM*|LG*'
lock_sga=TRUE
use_large_pages='ONLY'
_max_outstanding_log_writes=4
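Most of these parameters are static and take effect only after an instance restart. A minimal, illustrative sketch of applying two of them, assuming the instance uses an spfile, is shown below; it is not part of the tested procedure described in this paper:

# Illustrative only: set two static parameters in the spfile, then restart the instance.
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET use_large_pages='ONLY' SCOPE=SPFILE;
ALTER SYSTEM SET lock_sga=TRUE SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
EOF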
Appendix D: Linux kernel configuration
kernel.sem = 250 32000 100 128
kernel.shmall = 4294967295
kernel.shmmax = 332859965440
fs.file-max = 6815744
kernel.shmmni = 4096
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
vm.nr_hugepages = 77572
vm.hugetlb_shm_group = 1002
kernel.numa_balancing=0
Notes
1. The vm.nr_hugepages value depends on the amount of memory installed in the server. We had 256GB in the HPE ProLiant DL380 Gen9, so we set it to 77572.
2. For RHEL 7, automatic NUMA balancing should be disabled (by setting kernel.numa_balancing=0) for optimal Oracle performance. This resulted in a significant performance improvement in our testing.
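A minimal sketch of applying these settings and sanity-checking the huge page count follows. It assumes the parameters were added to /etc/sysctl.conf and that the default 2MB huge page size is in use; the page count is derived from the sga_target value in Appendix C:

# Apply the kernel parameters (assumes they were added to /etc/sysctl.conf).
sysctl -p

# Rough huge page sizing: sga_target is 155136M and the default huge page size
# on x86_64 is 2MB, so at least 155136 / 2 = 77568 pages are needed.
# The value used in this testing, 77572, adds a small margin.
echo $((155136 / 2))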
Appendix E: udev rules
A udev rules file /etc/udev/rules.d/99-oracleasm.rules was created to set the required ownership of the Oracle ASM LUNs:
ACTION=="add|change", KERNEL=="sdb",OWNER="oracle",GROUP="dba",MODE="0660"
ACTION=="add|change", KERNEL=="sdc",OWNER="oracle",GROUP="dba",MODE="0660"
ACTION=="add|change", KERNEL=="sdd",OWNER="oracle",GROUP="dba",MODE="0660"
ACTION=="add|change", KERNEL=="sde",OWNER="oracle",GROUP="dba",MODE="0660"
ACTION=="add|change", KERNEL=="sdf",OWNER="oracle",GROUP="dba",MODE="0660"
ACTION=="add|change", KERNEL=="sdg",OWNER="oracle",GROUP="dba",MODE="0660"
ACTION=="add|change", KERNEL=="sdh",OWNER="oracle",GROUP="dba",MODE="0660"
ACTION=="add|change", KERNEL=="sdi",OWNER="oracle",GROUP="dba",MODE="0660"
ACTION=="add|change", KERNEL=="sdj",OWNER="oracle",GROUP="dba",MODE="0660"
ENV{DM_NAME}=="data-lvol0", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="redo-lvol0", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="redo--pmem0-lvol0", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="redo--pmem1-lvol0", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
Appendix F: Create redundant redo logs
The commands used to create two redo log groups, with two 32GB redo logs on each persistent memory device, are shown below. Note that
within one redo log group, there are two redo log files, with each one residing on a separate ASM volume on different persistent memory devices
(REDO_PMEM0 and REDO_PMEM1).
SQL> alter database add logfile group 20 ('+REDO_PMEM0','+REDO_PMEM1') size 31950M;
Database altered.
SQL> alter database add logfile group 21 ('+REDO_PMEM0','+REDO_PMEM1') size 31950M;
Database altered.
SQL> select group#,member from v$logfile;
GROUP# MEMBER
------ ------------------------------------------------------
    20 +REDO_PMEM0/ORADL380/ONLINELOG/group_20.256.915105841
    20 +REDO_PMEM1/ORADL380/ONLINELOG/group_20.256.916653663
    21 +REDO_PMEM0/ORADL380/ONLINELOG/group_21.257.916653823
    21 +REDO_PMEM1/ORADL380/ONLINELOG/group_21.257.915105921
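Once the redundant groups are in place, the original single-member redo log groups can be retired. The following is an illustrative sketch only; it assumes the original logs were in groups 1 and 2, which is not stated in this paper, and a group can be dropped only once it is INACTIVE:

# Illustrative only: cycle the logs, then drop the original groups (assumed to be 1 and 2).
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
SELECT group#, status FROM v$log;   -- drop only groups reported INACTIVE
ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE DROP LOGFILE GROUP 2;
EOF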
Resources and additional links
HPE & Oracle Alliance
http://h22168.www2.hpe.com/us/en/partners/oracle/
HPE & Oracle Alliance Technical page
http://h17007.www1.hpe.com/us/en/enterprise/converged-infrastructure/info-library/index.aspx?app=oracle
HPE ProLiant DL380 Gen9 server
hpe.com/servers/dl380
HPE Persistent Memory
hpe.com/servers/persistentmemory
HPE Linux SDK for NVDIMM
http://linux.hpe.com/nvdimm/
HPE 8GB NVDIMM User Guide for HPE ProLiant Gen9 Servers
hpe.com/info/NVDIMM-docs
HPE Reference Architectures
hpe.com/info/ra
HPE Technology Consulting Services
hpe.com/us/en/services/consulting.html
To help us improve our documents, please provide feedback at hpe.com/contact/feedback.
Sign up for updates
Rate this document
© Copyright 2016 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice.
The only warranties for HPE products and services are set forth in the express warranty statements accompanying such products and
services. Nothing herein should be construed as constituting an additional warranty. HPE shall not be liable for technical or editorial errors
or omissions contained herein.
Oracle is a registered trademark of Oracle and/or its affiliates. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other
countries. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. Linux is the registered trademark of
Linus Torvalds in the U.S. and other countries.
4AA6-6008ENW, August 2016, Rev. 1