Oracle on LinuxONE



Chapter 4. Successful consolidation project: From sizing to migration

This chapter focuses on the practical aspects of consolidation and guides you through the steps to achieve a successful IT optimization operation on LinuxONE with your Oracle databases.

This chapter includes the following topics:


4.1, “Delimitation of the project scope”

4.2, “Sizing”

4.3, “Proof of concept”

4.4, “Migration”

4.5, “Successful production”

© Copyright IBM Corp. 2017. All rights reserved.


4.1 Delimitation of the project scope

The first step of the consolidation project is to delimit the scope of the consolidation: we need to choose which servers will be consolidated. Not all servers are good candidates for consolidation. To select the right servers, consider the issues that are described next.

4.1.1 Certification and support

The middleware and software must be certified on the IBM LinuxONE operating system we plan to use. For Oracle Database, support can be checked on the My Oracle Support website, which contains the latest certification information.

Note: Oracle does not certify its software for specific hardware. Oracle certifies its products against operating systems (OSs) and requires that the hardware vendor support the wanted OS. In our case, we must check support for Oracle Database on IBM: Linux on System z SLES (and the version) or IBM: Linux on System z Red Hat (and the version).

If the database is part of a specific application, we also must ensure that this application is certified (at least in split tier) on IBM LinuxONE. For example, Oracle E-Business Suite, Oracle Siebel, Oracle BIEE, and Hyperion are certified in split tier (the database part) and can run on LinuxONE.

4.1.2 Best candidates for consolidation

The best candidates for consolidation on IBM LinuxONE are low-end, under-utilized, and non-virtualized servers. With these kinds of servers, you can reach an attractive consolidation ratio (up to 1:10 and above), which leads to a better business case. Good candidates are also servers with complementary peaks (peaks of activity spread across large windows).

However, with high-end virtualized servers (such as IBM Power servers), the business case might not bring enough savings to cover the migration costs.

Servers with a large number of concurrent peaks do not benefit fully from the consolidation capabilities of IBM LinuxONE.

4.1.3 Non-functional requirements

Architecture design must take into account and prioritize the following main non-functional requirements, depending on business requirements:

- Availability
- Reliability
- Scalability
- Security
- Performance
- System management



4.1.4 Business value analysis

A business value analysis is recommended to take into account the total cost of acquisition.

The study includes the total cost of ownership for hardware, software, maintenance, administration, floor space, power, cooling, and so on. The results of this study demonstrate the financial advantages of running Oracle on LinuxONE. This study is done by specific teams within IBM. For more information about conducting this study, contact your IBM representative.

4.2 Sizing

Sizing is key for a successful consolidation project. This exercise determines, based on the existing environment, the quantity of resources (such as CPU and memory) that are needed to run Oracle databases in the new IBM LinuxONE environment.

In this section, we describe the various facets that should be examined when your Oracle databases are migrated to the LinuxONE environment.

4.2.1 CPU

The number of LinuxONE processors that are needed can be estimated with the help of IBM by using the IBM SURF and IBM SCON tools. For this evaluation, you need the following information:

- Details about the source servers (vendor or manufacturer, server type and model, number of chips and cores, and model and speed of chips)
- The average and peak CPU usage (estimated)
- The type of workload (for example, production database)

This information provides you with a rough estimation of the needed CPU. For a more accurate sizing, you can collect real performance data (sar, nmon, Perfmon for Windows, or an equivalent product) for each server at the same time over 24 - 48 hours at an interval of 5 - 10 minutes (to ensure that activity peaks are taken into account).
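The summary step over those samples can be sketched as follows. This is our own minimal illustration, not output of any IBM sizing tool, and the sample values are invented:

```python
# Sketch: derive average and peak CPU utilization from collected samples
# (for example, sar output at 5 - 10 minute intervals over 24 - 48 hours).

def cpu_summary(samples):
    """Return (average, peak) CPU utilization in percent."""
    avg = sum(samples) / len(samples)
    peak = max(samples)
    return avg, peak

# Hypothetical 10-minute samples (percent busy) covering a batch window
samples = [12.0, 15.5, 18.0, 74.0, 91.5, 88.0, 35.0, 14.5]
avg, peak = cpu_summary(samples)
print(f"average={avg:.1f}% peak={peak:.1f}%")
```

Both values matter: the average drives the consolidation ratio, and the peak drives the capacity that must be reserved for the busiest window.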

Note: It is important to collect this data during the peak of activity (for example, an end-of-month batch or an end-of-year marketing campaign) to ensure that we measure the peak of workload activity. An example of CPU sizing is shown in Figure 4-1.



Figure 4-1 CPU sizing example

For more information about this CPU estimation, contact your IBM representative.


4.2.2 Memory

On LinuxONE, it is recommended to carefully evaluate the memory that is required for the Linux guest in a virtualized environment. Oversizing memory is wasteful and can create additional overhead.

It is not unusual in a distributed environment to allocate more memory than what is needed by the Oracle database workloads. You must ensure that the source database memory is optimized. You might want to use the SGA target advisory and PGA target advisory sections of the Oracle AWR reports to estimate the memory that is used by the Oracle database.

After verifying that the source system memory is optimized, the suggestion is to use the same amount of memory on the target database as on the source database. To get a representative Oracle AWR report, you must take your snapshots during the peak period of your workload.

You must determine the peak period within the most loaded day of the week (or of the month, at certain periods). You can run an ADDM report from your Oracle Enterprise Manager Cloud Control console.


You can find the memory that is used by Oracle in the Memory Statistics section of the AWR report, as shown in Figure 4-2.

Figure 4-2 Memory statistics in the Oracle AWR report


Oracle on LinuxONE

You can find the quantity of memory that is allocated to the Oracle database in the init.ora section of the AWR report, as shown in Figure 4-3.

Figure 4-3 Quantity of memory allocated

You can check the AWR report advisory sections to ensure that the SGA and PGA are optimized. For the SGA, you find the information in the AWR SGA Target Advisory section, as shown in Figure 4-4.

Figure 4-4 SGA optimization

For more information about the PGA, see the AWR PGA Target Advisory section, as shown in Figure 4-5.

Figure 4-5 PGA optimization



For the dedicated server processes, you can calculate the memory that is needed, as shown in Example 4-1.

Example 4-1 Calculating needed memory

Memory needed for dedicated server processes = Max(logons current in AWR) x memory used per thread

As a rule of thumb, dedicated connections use 4.5 MB per connection; this number is application dependent and can vary considerably.
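The rule of thumb in Example 4-1 can be sketched as a small helper. The 300-logon figure is invented for illustration; take the real value from your AWR report:

```python
# Sketch of the Example 4-1 rule of thumb:
# memory for dedicated server processes =
#     max concurrent logons (from AWR) x memory per connection.
# 4.5 MB per connection is a rule of thumb; it varies by application.

def dedicated_server_memory_mb(max_logons, mb_per_connection=4.5):
    return max_logons * mb_per_connection

# Hypothetical example: AWR reports a maximum of 300 concurrent logons
print(dedicated_server_memory_mb(300))
```

Add this figure to the optimized SGA and PGA targets to estimate the total memory for the Linux guest.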

You can find the concurrent logons in the Instance Activity Stats - Absolute Values section of the AWR report, as shown in Figure 4-6.

Figure 4-6 Concurrent logons

4.2.3 I/O

You can find the I/O information in the Load Profile section of AWR reports, as shown in Figure 4-7. The Physical reads and Physical writes values help you to size the I/O for this workload on IBM LinuxONE.

Figure 4-7 I/O information



4.3 Proof of concept

Before migrating production to IBM LinuxONE, we strongly recommend running a Proof of Concept (PoC). The PoC allows you to validate the architecture from a functional point of view. If a performance test is done, the PoC also confirms the sizing.

4.3.1 PoC qualification

Qualification is key for a successful and useful PoC. During this phase, we must determine the objectives, delimit the scope of the PoC, set the key performance indicators, design the architecture, define the type of workload to evaluate, and define the test plan.


Objectives

The first question to answer when talking about a PoC is: What are my objectives? This question helps to determine the objectives from a business point of view, which are then translated into technical terms. The following objectives can be included:

- Validate that Oracle on IBM LinuxONE delivers sufficient throughput to sustain the expected business growth.
- Check the response times to ensure that the SLA meets user requirements.
- Ensure that Oracle on IBM LinuxONE behaves the same as it did on the other platform in terms of functionality.

The objectives orient the choice of functionality and type of workload to test, and the architecture to put in place for the test.


Scope

During a PoC, we do not test all of the functionalities or the full workload. Instead, we focus on a subset of functionalities and a representative workload.

The scope can include the following tasks:

- Test the duration of the end-of-year batch (critical for the company) on a subset of accounts.
- Evaluate the throughput for a specific type of transaction for 1000 concurrent users.
- Test the high availability mechanism when operating system maintenance is needed.
- Test the full disaster recovery solution if a site is lost.

Key performance indicators

Stating “I want to check that the Oracle on LinuxONE environment is faster than my current environment” is not enough. Measurable metrics are needed to evaluate the success of the PoC, so collect performance measurements on the current environment.

The following metrics can be included:

- Response time for specific OLTP or data warehouse queries
- Throughput in terms of transactions per minute
- Time to recover after a failure
- Number of reports that are generated in one hour
- Duration of an export/import operation for a migration




Architecture

After the objectives, scope, and KPIs are set, we can design the architecture of the PoC. We recommend that this architecture be as close as possible to the final architecture, especially in terms of operating system version and database version. For performance measurements, dedicate the resources to the LPAR.

This architecture encompasses the following elements:

- Hardware, including the LinuxONE model, storage subsystem, and network
- Virtualization, if used
- Operating system
- Software, including the database part and the application server part

Type of workload

We strongly recommend the use of a real, representative workload. For this purpose, IBM InfoSphere Optim™ Capture and Replay or Oracle Real Application Testing can be useful for an OLTP workload. We do not recommend the use of generic workload generators, such as Swingbench or HammerDB, because they produce generic results. With these kinds of tools, we do not exercise the application's potential internal contentions, so we cannot guarantee the scaling.

If you cannot use a subset of the real workload (for example, if for regulatory reasons you cannot bring the data, even anonymized, into a benchmark center to run your PoC), we suggest the use of benchmark tools, such as IBM Rational® Performance Tester.

If you plan to use real client data, you must decide the appropriate way to access the data. For example, for a subset of production data, export/import is the simplest way to migrate the data onto LinuxONE.

Test plan

With all the stakeholders of the project, you can work on the test plan, including validating the high-level time schedule and resources (IT and people) that are required for this PoC.

The high-level time schedule takes into account the following tasks:

- Hardware and software setup
- Execution of the various test cases, including tuning
- Writing the PoC report

The availability of resources (especially people) during the PoC is a key factor for success. For example, avoid running your PoC during holiday periods, because the project can become stuck in the middle by the lack of a critical skill.

4.3.2 Execution

When the qualification and preparation are complete, execution is the next step. This process includes the following tasks:

- Hardware setup
- Software setup
- Writing a test procedure


Hardware setup

This phase includes the following tasks:

- Define the I/O connectivity.
- Define the LPAR.

  If performance is part of the PoC, we recommend isolating the test environment in an LPAR and dedicating the CPU, RAM, I/O, and network resources.

- If virtualization is a part of the PoC, define the IBM z/VM® installation and the Linux VMs.


- Install the operating system.

  Check that all of the packages that are needed for Oracle are available with the RPM checker. Also, check the kernel parameters and change the I/O scheduler, if needed. Change the user limits.

- Validate the storage.

  After the SAN is configured, we recommend running the Orion disk I/O utility before any database is created to ensure that the I/O subsystem delivers as expected.

- Enable multipathing for SCSI/FCP or HyperPAV for ECKD/IBM FICON® devices.
- Validate the network latency by using simple commands, such as ping.

Software setup

After the Oracle database is installed, we move on to the installation of the application. If the architecture was designed in split tier, the application is installed on servers other than the database server.

Remember to install the latest level of patches to the database and to change the parameters, especially the filesystemio_options parameter. If file systems are used for the Oracle database files, we recommend setting the Oracle parameter filesystemio_options='setall' to enable both asynchronous and direct I/O for the Oracle database files.


Before starting the testing, the following monitoring tools must be installed:

- PerfToolkit and IBM Wave for z/VM
- A storage monitoring tool
- nmon or any other Linux monitoring tool, such as sar, vmstat, and iostat
- Enterprise Manager Cloud Control (if you do not have this tool, Enterprise Manager Database Express provides high-level monitoring) to monitor the Oracle database
- AWR Oracle statistics and the alert log

The test procedure includes monitoring.

Writing a test procedure

A test procedure allows you to have automated and repeatable test cases.

You adapt the test procedure depending on what you want to test, based on the key performance indicators that are described in “Key performance indicators”. This test procedure can include the following tasks:


- Restore the database, especially after a parameter change.
- Check the database size to ensure that the restore process completed successfully.



- Set up the changes at the infrastructure level (various definitions of CPU, memory, and so on). Also, check the CPU and memory setup in /proc/cpuinfo and /proc/meminfo.
- Check the Oracle database parameters and adjust them, if needed (for example, the number of processes).
- Gather system and schema statistics.
- Start Linux monitoring.
- Start AWR sampling.
- Launch the test:
  - Run the test twice: one cold test and one warm test.
  - Gather schema statistics between the two runs (to assess the data growth).
- Collect statistics and results for analysis.
- Gather data in an independent manner.


After each test case, you likely must tune the environment (infrastructure, including the operating system, Oracle database, and application). Make only one change at a time; otherwise, you cannot evaluate which parameter brought the change. Keep a log book of each test case change and its results.

For more information about tuning advice, see Chapter 5, “Performance management for IBM z/VM, Linux, and Oracle on IBM LinuxONE”.

4.3.3 Analyzing and reporting

In this section, we describe data collection and communicating the results.

Data collection

Each run generates a set of data that you must collect and store in an independent manner.

A recommendation is to keep a trace of all the tests in a table. The table can include the following information:

- Number of the test and run
- Hardware configuration (CPU, memory, and so on)
- Parameters that are involved
- Results, such as:
  - Transactions per minute
  - Duration of the run
  - Number of reports generated

For each test, keep an associated folder that contains the monitoring data and proof of the results (for example, a screen capture of the duration of the run).

With this classification methodology, your results cannot be contested. Also, you can quickly spot the “golden run”, which is the test that produced the best results with the appropriate parameters.
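The test-log table and the golden-run selection can be sketched as follows. The record fields and values are illustrative, not part of any IBM or Oracle tool:

```python
# Sketch: keep each test run as a record in the log book, then pick the
# "golden run" (here, the run with the best transactions per minute).

runs = [
    {"run": 1, "cpus": 4, "memory_gb": 32, "tpm": 9500, "duration_s": 410},
    {"run": 2, "cpus": 4, "memory_gb": 48, "tpm": 11200, "duration_s": 365},
    {"run": 3, "cpus": 6, "memory_gb": 48, "tpm": 10800, "duration_s": 372},
]

# Select the run with the highest throughput
golden = max(runs, key=lambda r: r["tpm"])
print(f"golden run: #{golden['run']} at {golden['tpm']} tpm")
```

Because each record carries the hardware configuration and parameters, the winning configuration can be reproduced directly from the log.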



Communicating the results

At the end of the PoC, a report is created to share the result with all the stakeholders.

It is suggested that the following information is included in this report:

- An executive summary that includes why this PoC was run and its main objectives
- Architecture design
- Testing methodology
- Test results summary
- Conclusion
- An appendix that includes monitoring data for the main tests

At the end of the presentation, the stakeholders are in a position to decide (from a technical point of view that is based on the PoC results) the next steps for the consolidation project on IBM LinuxONE.

4.4 Migration

After a decision is made to run your Oracle database environment on LinuxONE, it is time for the migration operation. There are several ways to migrate an Oracle database to LinuxONE. This migration, which is often a standard operation for most DBAs, can be risky if it is underestimated.

Note: This section is based on our experiences and does not replace any IBM or Oracle documents.

For more information about migration, see the IBM Redbooks publication Practical Migration from x86 to LinuxONE, SG24-8377.

4.4.1 Considerations before migration

In this section, we describe the following points that must be considered when an Oracle database is migrated to LinuxONE:

- Downtime
- Technical compatibility
- Application compatibility
- In-house administration scripts
- Disk space requirements
- Skills


Downtime

Each migration leads to a database downtime. Depending on the technique, this downtime can range from a few minutes to more than a day. For critical applications that must always be available, downtime is the main criterion for choosing the appropriate technique.



Technical compatibility

Depending on the technique you choose, you might encounter some of the following technical limitations:

- Endianness

  Endianness describes the order in which the bytes of a number are stored. Depending on the platform, this order can be little endian or big endian. Big endian means that numbers are stored with the most significant byte first; conversely, little endian means that numbers are stored with the least significant byte first. IBM LinuxONE is big endian.

  Some cross-platform migration methods, such as Transportable Database, require the same endianness.

  Also, some migration methods, such as Transportable Tablespaces, must convert the data if the endianness differs. This operation takes time and might lead to another migration technique being used.

  Use the following command to check the endianness of your platform:

  select platform_id, platform_name from v$database;
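As a quick illustration of the concept (this sketch is ours, not part of any Oracle tooling), the Python struct module shows both byte orders for the same 32-bit value:

```python
# Sketch: how the same 32-bit integer is laid out in big-endian order
# (as on IBM LinuxONE) versus little-endian order (as on x86).
import struct

value = 0x01020304
big = struct.pack(">I", value)     # most significant byte first
little = struct.pack("<I", value)  # least significant byte first
print(big.hex())     # 01020304
print(little.hex())  # 04030201
```

This byte reordering is exactly what RMAN CONVERT must perform on every data block when the source and target endianness differ, which is why the conversion takes time on large databases.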

- Objects

  Some objects cannot be migrated with certain techniques, including the following examples:

  - The Export/Import Data Pump utilities cannot be used for XML types.
  - Streams cannot handle SecureFiles, character large objects (CLOBs), national character large objects (NCLOBs), binary large objects (BLOBs), and other types.

Application compatibility

Although Oracle database is supported on LinuxONE, you must check the supported configuration when this database is embedded into an application.

For example, Oracle database as part of Oracle E-Business Suite is supported on LinuxONE, whereas Oracle database as part of Oracle Financial Services (formerly I-FLEX) is not supported.

Tip: Always check the supported configuration on My Oracle Support to avoid support issues.

In-house administration scripts

Be aware that you must customize the administration and automation scripts (for example, for backup and recovery operations) in your new environment. These scripts must be tested before the migration.


Network

If the migration technique uses the network (for example, replication techniques), you must ensure that the network is efficient in terms of bandwidth and latency; otherwise, this potential bottleneck dramatically increases the duration of the migration operation. The chosen technique must also consider the location of the source and target servers. Constraints can include that the servers are geographically dispersed or that they cannot communicate with each other.



Disk space requirements

Some migration techniques need disk space for staging or dump files. Consider this requirement if you have storage constraints, as shown in the following examples:

- If the migration is done by using Export/Import, you need some space on the source system to store the dump files.
- If the migration is done by using Recovery Manager (RMAN), you need some space to store the redo logs that are created after the copy starts.


Skills

Database migration can be considered a risky operation. Depending on the products and techniques that are used in your environment, you might prefer one technique over another.

Tip: Whenever possible, perform the migration with known products to mitigate the risks.

4.4.2 Available techniques for cross-platform migration

The following main cross-platform migration techniques are available:

- Export/Import with Data Pump
- Transportable Tablespaces
- Transportable Database (if same endianness)
- Create Table As Select
- IBM InfoSphere Data Replication
- Oracle Streams
- Oracle Data Guard
- Oracle GoldenGate

In this section, we describe the two most-used cross-platform migration techniques: Export/Import with Data Pump and Transportable Tablespaces. Then, we provide a brief overview of the other techniques.

Export/Import with Data Pump

Export/Import is the classic method that is used to migrate a database. The Export/Import Data Pump utility is more efficient than the original Export/Import utility, so when possible, we recommend the use of Data Pump. This section describes only the Oracle Data Pump utility.

The Export and Import utilities transfer data objects between two databases, independently of the hardware and software configurations. Objects can be tables, indexes, comments, grants, and so on. With Export, objects are extracted (tables first, then other objects, if any) and the extracted data is written to an export dump file (an Oracle binary format). The Import utility takes the table data and definitions from the dump file.

Migration technique

The migration technique includes the following overall steps:

1. Export the database by using the Export utility from the source (the dump file can be on a disk or on a tape).

2. Transfer the dump file to the target by using FTP, SFTP, RCP, or physically if there is no communication between the source and target servers (for example, dump files on tape).

3. Create the target database.

4. Import the data with the Import utility into the new database.



5. Import the metadata to complete the full database structure.

6. Check the consistency.
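Steps 1 and 4 can be sketched as follows. This is our own illustration of how the Data Pump command lines might be assembled; the user, directory object, and file names are placeholders, and the commands must run on hosts where the Oracle utilities are installed:

```python
# Sketch: build the expdp (step 1) and impdp (step 4) command lines.
# directory= names an Oracle directory object; dumpfile/logfile are
# placeholder file names, not values from this book.

def expdp_cmd(user, directory, dumpfile, logfile):
    return ["expdp", user, f"directory={directory}",
            f"dumpfile={dumpfile}", f"logfile={logfile}", "full=y"]

def impdp_cmd(user, directory, dumpfile, logfile):
    return ["impdp", user, f"directory={directory}",
            f"dumpfile={dumpfile}", f"logfile={logfile}", "full=y"]

print(" ".join(expdp_cmd("system", "DATA_PUMP_DIR", "full.dmp", "exp.log")))
print(" ".join(impdp_cmd("system", "DATA_PUMP_DIR", "full.dmp", "imp.log")))
```

On large databases, the parallel parameter of Data Pump (not shown here) is what makes the advantage listed below, parallelism, usable in practice.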


This technique offers the following advantages:

- Can be used across any platform
- No conversion is needed
- Parallelism helps speed up the operation
- Fine-grained object selection for inclusion or exclusion
- Ability to restart without loss of data
- Database can be migrated and upgraded in one operation
- Space estimation from a storage point of view


This technique has the following limitations:

- Dump files that are generated by the Data Pump Export utility are not compatible with dump files that are generated by the original Export utility.
- Downtime can be significant for large databases.

Transportable Tablespace migration

The Transportable Tablespaces migration works within the framework of Data Pump and RMAN. RMAN is a backup and recovery manager that is provided by Oracle and does not require any separate installation. RMAN stores metadata in the control files of the target database and, optionally, in a recovery catalog schema in an Oracle database. RMAN provides block-level corruption detection during backup and restore.

Main migration steps

This technique includes the following overall steps:

1. Convert the tablespaces to read-only.

2. Use Data Pump to move the metadata of the objects.

3. Convert the tablespaces to the correct Endianness (if necessary) with RMAN Convert.

4. Create the database on the target system.

5. Transfer the data files to the target server.

6. Import the tablespaces.

7. Import the metadata.

8. Check the consistency.


The main advantage of this technique is that it can be used across different endianness. If the endianness is the same, we can use the Transportable Database feature.


This technique includes the following limitations:

- It requires a larger time investment to test the migration and to develop methods of validating the database and application. Consider whether the extra testing time, complexity, and risk are worth the potential reduction in migration downtime.
- It requires a higher level of skill for the database administrator and application administrator compared to the use of a Data Pump full database Export and Import.
- It does not transport objects in the SYSTEM tablespace or objects that are owned by special Oracle users, such as SYS or SYSTEM. Applications that store objects in the SYSTEM tablespace or create objects as SYS or SYSTEM require more steps and increase the complexity of the platform migration.
- Only self-contained Oracle tablespaces can be moved between platforms.
- If the destination database contains a tablespace with the same name, you must rename or drop it.
- Triggers, packages, and procedures must be re-created on the target database.
- Only user tablespaces can be transported. SYSTEM and SYSAUX objects must be created at the target.
- Tablespaces must be self-contained. (Materialized views or contained objects, such as partitioned tables, are not transportable unless all of the underlying or contained objects are in the tablespace set.)
- The source and target databases must have the same character set.
- Not all system privileges are imported into the upgraded database.
- Resetting sequences and recompiling invalid objects might be needed.
- The Transportable Tablespaces approach does not allow for a redesign of the database (logical and physical layout) as part of the migration.
- Fragmented data still exists.

Create Table As Select instruction

With this instruction, you can copy data from a source database to a target database over the network by using a database link. You do not need extra space for dump files because you copy directly from the source to the target. However, because the network is used, the network traffic can be significant and slow down other operations, depending on the size of the tables.

This technique can be used to migrate one or several tables, but it is not practical for an entire database. This technique also can be combined with other techniques (Export/Import or Transportable Tablespaces, for example).

IBM Infosphere Data Replication platform

IBM InfoSphere® Data Replication is a data replication platform that is easy to use, highly scalable, and enterprise ready. It can provide trusted data synchronization (including log-based change data capture capabilities) to replicate information between heterogeneous data stores in near real time. The software provides replication with transactional integrity to support big data integration, continuous availability, consolidation, warehousing, and business analytics initiatives. It also supports zero-downtime migrations and upgrades.

InfoSphere Data Replication provides the following benefits:

- Faster, simpler data replication that maintains transactional integrity and consistency for enterprise data volumes.
- A centralized, easier-to-use platform that helps simplify deployment and data integration processes.
- Support for heterogeneous data that moves information, at a lower cost, between a wide range of systems and data sources.

This product requires extra licenses.



For more information, see the IBM InfoSphere Data Replication website.

Oracle Streams

Oracle Streams uses log data that is captured with LogMiner-based technology on the source system as its capture mechanism, from which logical change records (LCRs) are generated. A stream allows transactions to be propagated to one or several databases. Oracle Streams can specify rules at multiple levels of granularity: database, schema, and table. Oracle Streams captures changes from the redo logs in a source database; the changes are staged and then propagated to the target database.

This product is used to propagate information among distributed databases, but the mechanism also can be used for migration, together with other techniques (Export/Import, for example).

The main advantage of using Oracle Streams is minimal downtime (reconnecting the users only), and failback is possible because the source is untouched.

To use this technique, significant setup is required, and some data types are not supported for capture processes. Therefore, an Export/Import of these object types also is required; examples include SecureFile CLOBs, NCLOBs, BLOBs, BFILEs, ROWIDs, user-defined types (including object types, REFs, arrays, and nested tables), and XMLType data that is stored object relationally or as binary XML.

Oracle GoldenGate

By using Oracle GoldenGate, you can move data between like-to-like and heterogeneous systems, including different versions of Oracle Database, different hardware platforms, and Oracle and non-Oracle databases. The software performs real-time, log-based change data capture (CDC) and can move large volumes of transactional data between heterogeneous databases with low latency and a minimal footprint.

This technique allows near-zero downtime and works across platforms without conversion. However, there are extra license costs. Also, you must consider the memory and CPU overhead (a 3% - 5% CPU effect of Oracle GoldenGate replication on the source system, depending on the number of redo logs that are generated).

Some data types, such as ANYDATA, BFILE, and TIMEZONE_REGION, are not supported.

Oracle Data Guard Heterogeneous Primary and Physical Standbys

Data Guard depends on the Log Writer process or the Archiver process to capture and send redo data or logs to the standby site. This technique can be efficient between other platforms, but there are strong limitations for the IBM LinuxONE platform in terms of cross-platform compatibility. For IBM LinuxONE, the supported migrations are with IBM LinuxONE and IBM Linux on Power platforms only.

For more information, see the My Oracle Support note Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration (Doc ID 413484.1).



4.4.3 Considerations when migrating from File System to ASM or vice versa

The following organization types are available for your database files:

- File System
- Automatic Storage Management (ASM)

ASM is built into the Oracle kernel and provides the DBA with a way to manage many disk drives for single and clustered instances of Oracle. ASM is a file system/volume manager for all Oracle physical database files (such as data files, online redo logs, control files, archived redo logs, RMAN backup sets, and SPFILEs). All of the database files (and directories) to be used for Oracle are contained in a disk group.

If you decide to change the way your Oracle database files are organized, you can use the RMAN backup and restore capabilities. For more information about migrating databases to and from ASM by using Recovery Manager, see the Oracle documentation.
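As a sketch of the RMAN image-copy approach (the disk group name +DATA is an assumption), migrating the data files of a database into ASM follows this general pattern:

```
RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';

RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> SWITCH DATABASE TO COPY;
RMAN> ALTER DATABASE OPEN;
```

The control files and the server parameter file must also be relocated into ASM (for example, by updating the control_files parameter and restoring them with RMAN); see the Oracle documentation for the complete procedure.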

4.4.4 Best practices

For a successful migration operation, we strongly recommend that you review the best practices that are presented in this section. Most of these practices are based on common sense, and some are based on field experience.


Tests

Tests are the most critical part of any migration operation. Do not underestimate the time that is needed for the tests in the migration planning: even if the migration itself lasts only a few hours, the tests can last several weeks.

As part of your test plan, you must ensure that you have all the staff resources that are needed for the test phases and the migration.

Backup and monitoring procedures on the new server also must be checked before the final migration. The validation phase after the migration is critical before switching to the new environment.

A migration also should be considered a real and potentially complex project. Therefore, project management is key to a successful migration.


Sizing

Sizing is the only way to ensure that the correct quantity of resources (in terms of CPU, memory, I/O, and storage) is available to run the database on the new platform. The more accurate the inputs for the sizing are, the more accurate the resource estimation is. For more information, see 4.2, “Sizing” on page 45.

Performance measurement before and after migration

To compare the performance before and after the migration, we advise that you take real performance measurements. You want to avoid the possible user complaint that the system ran faster before the migration to this new system without having real figures to compare against.



The following performance measurements can be used:

- Operating system level: CPU, memory, and I/O statistics

- Oracle database level: AWR reports

- Application level:

  – Duration of some batches

  – Response time for some complex user transactions

These performance measurements provide the baseline before any migration operation.

After the database is migrated to the target system, we recommend that you repeat the performance measurements on the target system to compare the results before and after the migration. If some performance issues appear, the new system might need some tuning. For more information, see Chapter 5, “Performance management for IBM z/VM, Linux, and Oracle on IBM LinuxONE” on page 63.
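To capture comparable Oracle-level baselines, AWR snapshots can be taken manually around the measured workload and a report generated between two snapshots (standard Oracle scripts; a Diagnostics Pack license for AWR is assumed):

```
SQL> EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- run or wait for the workload to be measured, then:
SQL> EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- generate the report between the two snapshots:
SQL> @?/rdbms/admin/awrrpt.sql
```

Running the same procedure on the source system before the migration and on the target system afterward gives directly comparable reports.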

Preparing the data before migration

Before any migration, we recommend that you compress the database as much as you can (including reorganization and data compression), which reduces the data to migrate and the possible downtime. The following tasks are part of preparing the data:

- Determine the invalid objects in the database

  Before the migration, determine the invalid objects so that you can compare this figure with the count after the migration. This list of invalid objects facilitates any diagnostic tasks.

  Use the query that is shown in Example 4-2 to determine the invalid objects.

Example 4-2   Determine invalid objects

SELECT object_name, owner, object_type
FROM dba_objects
WHERE status = 'INVALID'
ORDER BY object_type;


- Rebuild the indexes

  To reduce the migration duration (especially if you choose the Export/Import utilities), we recommend that you limit the quantity of data to migrate. For example, you can import in two operations: import the data first, and then import the metadata (which includes the index definitions).

  This approach significantly reduces the migration time, mainly because the indexes and the data are not imported at the same time.

- Logging during the migration

  If you plan to use Export/Import for your migration, we suggest that you use one of the following methods to disable logging for archiving:

  – Disable the archive logs during the import, if possible (for this option, the database must be stopped and placed in the “mount” state).

  – Disable the logging directly at the tablespace level by using the ALTER TABLESPACE ... NOLOGGING statement.

  – Set the hidden parameter _disable_logging to TRUE in the init.ora file (use hidden parameters only under the guidance of Oracle Support).




- Redo logs

  We also suggest the following tasks to simplify redo log management:

  – Minimize the number of redo log members per thread (one member is sufficient during the migration).

  – Increase the size of the redo logs, if possible (to avoid useless log switches).
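As a sketch of the first logging option above (disabling archiving for the duration of the import, and re-enabling it afterward), the steps in SQL*Plus are:

```
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE NOARCHIVELOG;
SQL> ALTER DATABASE OPEN;

-- after the import completes, restore archiving:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
```

Remember to take a full backup after re-enabling archiving because the database cannot be rolled forward through the NOARCHIVELOG window.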

Take advantage of IBM LinuxONE

To get the best level of performance, use the same degree of parallelism for the Export and for the Import (the parallel degree is typically set according to the number of processors).

In most cases, if you run a consolidation project, you have more processors on your source system than on your Linux guest on IBM LinuxONE. You can then use the flexibility of your IBM LinuxONE infrastructure by allocating more CPU and memory during the import phase. When the import is over, you can adjust the configuration according to your Linux guest's needs.
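For example, with Oracle Data Pump (the directory object, dump file names, and parallel degree shown here are illustrative assumptions), the same parallel degree can be specified on both sides:

```
expdp system/******** FULL=Y DIRECTORY=dp_dir \
      DUMPFILE=full_%U.dmp PARALLEL=8 LOGFILE=exp_full.log

impdp system/******** FULL=Y DIRECTORY=dp_dir \
      DUMPFILE=full_%U.dmp PARALLEL=8 LOGFILE=imp_full.log
```

The %U substitution variable lets Data Pump write to and read from multiple dump files concurrently, which is required for the PARALLEL setting to be effective.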

4.5 Successful production

The migration is now successfully completed, and your Oracle databases run smoothly on your IBM LinuxONE system.

To ensure that your production continues to meet the SLAs, regular monitoring should be used at the following layers of the application stack (for more information, see Chapter 5, “Performance management for IBM z/VM, Linux, and Oracle on IBM LinuxONE” on page 63):

- LinuxONE infrastructure

- z/VM hypervisor

- Linux

- Oracle database

- Application

Also, a workload analysis highlights growth variations and can help you prepare and anticipate capacity planning.




