Teradata Data Stream Architecture (DSA) User Guide
What would you do if you knew?™
Release 15.11
B035-3150-026K
December 2016
The product or products described in this book are licensed products of Teradata Corporation or its affiliates.
Teradata, Applications-Within, Aster, BYNET, Claraview, DecisionCast, Gridscale, MyCommerce, QueryGrid, SQL-MapReduce, Teradata
Decision Experts, "Teradata Labs" logo, Teradata ServiceConnect, Teradata Source Experts, WebAnalyst, and Xkoto are trademarks or registered
trademarks of Teradata Corporation or its affiliates in the United States and other countries.
Adaptec and SCSISelect are trademarks or registered trademarks of Adaptec, Inc.
Amazon Web Services and AWS are trademarks of Amazon.com, Inc. or its affiliates in the United States and/or other countries.
AMD Opteron and Opteron are trademarks of Advanced Micro Devices, Inc.
Apache, Apache Avro, Apache Hadoop, Apache Hive, Hadoop, and the yellow elephant logo are either registered trademarks or trademarks of the
Apache Software Foundation in the United States and/or other countries.
Apple, Mac, and OS X all are registered trademarks of Apple Inc.
Axeda is a registered trademark of Axeda Corporation. Axeda Agents, Axeda Applications, Axeda Policy Manager, Axeda Enterprise, Axeda Access,
Axeda Software Management, Axeda Service, Axeda ServiceLink, and Firewall-Friendly are trademarks and Maximum Results and Maximum
Support are servicemarks of Axeda Corporation.
CENTOS is a trademark of Red Hat, Inc., registered in the U.S. and other countries.
Cloudera and CDH are trademarks or registered trademarks of Cloudera Inc. in the United States, and in jurisdictions throughout the world.
Data Domain, EMC, PowerPath, SRDF, and Symmetrix are registered trademarks of EMC Corporation.
GoldenGate is a trademark of Oracle.
Hewlett-Packard and HP are registered trademarks of Hewlett-Packard Company.
Hortonworks, the Hortonworks logo and other Hortonworks trademarks are trademarks of Hortonworks Inc. in the United States and other
countries.
Intel, Pentium, and XEON are registered trademarks of Intel Corporation.
IBM, CICS, RACF, Tivoli, and z/OS are registered trademarks of International Business Machines Corporation.
Linux is a registered trademark of Linus Torvalds.
LSI is a registered trademark of LSI Corporation.
Microsoft, Active Directory, Windows, Windows NT, and Windows Server are registered trademarks of Microsoft Corporation in the United States
and other countries.
NetVault is a trademark or registered trademark of Dell Inc. in the United States and/or other countries.
Novell and SUSE are registered trademarks of Novell, Inc., in the United States and other countries.
Oracle, Java, and Solaris are registered trademarks of Oracle and/or its affiliates.
QLogic and SANbox are trademarks or registered trademarks of QLogic Corporation.
Quantum and the Quantum logo are trademarks of Quantum Corporation, registered in the U.S.A. and other countries.
Red Hat is a trademark of Red Hat, Inc., registered in the U.S. and other countries. Used under license.
SAP is the trademark or registered trademark of SAP AG in Germany and in several other countries.
SAS and SAS/C are trademarks or registered trademarks of SAS Institute Inc.
Simba, the Simba logo, SimbaEngine, SimbaEngine C/S, SimbaExpress and SimbaLib are registered trademarks of Simba Technologies Inc.
SPARC is a registered trademark of SPARC International, Inc.
Symantec, NetBackup, and VERITAS are trademarks or registered trademarks of Symantec Corporation or its affiliates in the United States and
other countries.
Unicode is a registered trademark of Unicode, Inc. in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other product and company names mentioned herein may be the trademarks of their respective owners.
The information contained in this document is provided on an "as-is" basis, without warranty of any kind, either express
or implied, including the implied warranties of merchantability, fitness for a particular purpose, or non-infringement.
Some jurisdictions do not allow the exclusion of implied warranties, so the above exclusion may not apply to you. In no
event will Teradata Corporation be liable for any indirect, direct, special, incidental, or consequential damages,
including lost profits or lost savings, even if expressly advised of the possibility of such damages.
The information contained in this document may contain references or cross-references to features, functions, products, or services that are not
announced or available in your country. Such references do not imply that Teradata Corporation intends to announce such features, functions,
products, or services in your country. Please consult your local Teradata Corporation representative for those features, functions, products, or
services available in your country.
Information contained in this document may contain technical inaccuracies or typographical errors. Information may be changed or updated
without notice. Teradata Corporation may also make improvements or changes in the products or services described in this information at any time
without notice.
To maintain the quality of our products and services, we would like your comments on the accuracy, clarity, organization, and value of this
document. Please e-mail: teradata-books@lists.teradata.com
Any comments or materials (collectively referred to as "Feedback") sent to Teradata Corporation will be deemed non-confidential. Teradata
Corporation will have no obligation of any kind with respect to Feedback and will be free to use, reproduce, disclose, exhibit, display, transform,
create derivative works of, and distribute the Feedback and derivative works thereof without limitation on a royalty-free basis. Further, Teradata
Corporation will be free to use any ideas, concepts, know-how, or techniques contained in such Feedback for any purpose whatsoever, including
developing, manufacturing, or marketing products or services incorporating Feedback.
Copyright © 2013 - 2016 by Teradata. All Rights Reserved.
Table of Contents
Preface.................................................................................................................................................................11
Revision History...........................................................................................................................................................11
Audience........................................................................................................................................................................11
Supported Releases...................................................................................................................................................... 12
Related Documentation.............................................................................................................................................. 12
Product Safety Information........................................................................................................................................ 12
Chapter 1:
Overview...........................................................................................................................................................13
Introduction to Data Stream Architecture............................................................................................................... 13
BAR Integration........................................................................................................................................................... 14
BAR Job Workflow...................................................................................................................................................... 15
Initial BAR Setup and BAR Job Creation Using the BAR Portlets....................................................................... 16
About Permissions.......................................................................................................................................................16
About Restrictions....................................................................................................................................................... 17
Component Deletion...................................................................................................................................................17
Copy and Restore Definitions.................................................................................................................................... 17
Copy Restrictions......................................................................................................................................................... 18
Copy Operations for Objects and Data Table Archives......................................................................................... 18
HUT Locks in Copy and Restore Operations.......................................................................................................... 19
About Backup Types....................................................................................................................................................19
Example of a Backup Strategy........................................................................................................................ 20
Incomplete Backups.........................................................................................................................................21
Active and Retired Jobs............................................................................................................................................... 21
Command-Line Interface and BAR Portlets............................................................................................................21
Multiple DSA Domains...............................................................................................................................................22
Chapter 2:
Teradata BAR Portlets................................................................................................................... 23
BAR Setup..................................................................................................................................................................... 23
Configuring BAR Setup...................................................................................................................................23
General DSA Settings...................................................................................................................................... 24
Systems and Nodes.......................................................................................................................................... 25
Adding or Editing a Teradata System and Node............................................................................. 25
Deleting a System................................................................................................................................. 27
Adding a Node......................................................................................................................................27
Deleting a Node....................................................................................................................................27
Media Servers....................................................................................................................................................27
Adding a Media Server........................................................................................................................ 28
Deleting a Media Server...................................................................................................................... 28
Network Masks.....................................................................................................................................28
Backup Solutions..............................................................................................................................................29
Adding a Disk File System.................................................................................................................. 29
Adding a NetBackup Server................................................................................................................30
Adding a DD Boost Server..................................................................................................................30
Deleting a Disk File System................................................................................................................ 31
Deleting a NetBackup Server..............................................................................................................31
Deleting a DD Boost Server................................................................................................................ 31
Target Groups...................................................................................................................................................32
Adding or Copying a Target Group.................................................................................................. 32
Deleting a Remote Group................................................................................................................... 33
Adding or Editing a Restore Group...................................................................................................33
Deleting a Restore Group....................................................................................................................33
Managing the DSC Repository.......................................................................................................................34
Scheduling a Repository Backup........................................................................................................34
Backing Up the Repository................................................................................................................. 34
Restoring the Repository.....................................................................................................................35
Alerts..................................................................................................................................................................35
Configuring Repository Backup Job Alerts...................................................................................... 36
Configuring Job Status Alerts.............................................................................................................36
Configuring Repository Threshold Alerts........................................................................................ 37
Configuring Media Server Alerts....................................................................................................... 37
Configuring System Alerts..................................................................................................................38
BAR Operations........................................................................................................................................................... 38
About the Saved Jobs View.............................................................................................................................38
About the Job Status Filter Bar...........................................................................................................40
About Backup Jobs...............................................................................................................................41
About Restore Jobs...............................................................................................................................42
About Analyze Jobs..............................................................................................................................44
About the Job Settings Tab................................................................................................................. 45
About the Objects Tab.........................................................................................................................48
About the Save Set Version Tab.........................................................................................................49
About the Selection Summary Tab....................................................................................................50
Managing Jobs...................................................................................................................................... 50
Viewing Job Status............................................................................................................................... 56
Viewing Save Sets................................................................................................................................. 59
Viewing Backup IDs............................................................................................................................ 60
About the Job History View........................................................................................................................... 60
Viewing Job History............................................................................................................................ 61
Chapter 3:
Teradata DSA Command-Line Interface..............................................................65
Command-Line Interface Overview..........................................................................................................................65
Accessing the DSA Command-Line Interface............................................................................................. 65
Accessing DSA Command Help.................................................................................................................... 65
DSA Command Types.....................................................................................................................................66
DSA Configuration.......................................................................................................................................... 71
Systems and Nodes.............................................................................................................................. 72
Media Servers........................................................................................................................................72
Backup Solutions..................................................................................................................................72
Target Groups.......................................................................................................................................73
Deleting a Component........................................................................................................................ 75
Viewing Configuration Information.................................................................................................76
Exporting DSA Component Configuration..................................................................................... 77
Managing the DSC Repository...........................................................................................................77
Planning a Job...................................................................................................................................................79
Creating or Updating a Job.................................................................................................................79
Running a Job....................................................................................................................................... 87
Viewing Job Status............................................................................................................................... 87
Considerations for Aborting a Job.....................................................................................................87
Aborting a Job.......................................................................................................................................88
Exporting a Job..................................................................................................................................... 88
Retiring a Job........................................................................................................................................ 88
Activating a Job.................................................................................................................................... 88
Deleting a Job........................................................................................................................................88
Chapter 4:
Troubleshooting..................................................................................................................................... 89
Log Files.........................................................................................................................................................................89
TVI Logging..................................................................................................................................................................90
BARNC (DSA Network Client) Error Codes...........................................................................................................92
Locked Object Restrictions......................................................................................................................................... 93
Releasing Locked Objects................................................................................................................................93
Appendix A:
Administrative Tasks.......................................................................................................................95
Creating a Diagnostic Bundle for Support............................................................................................................... 95
Time Zone Setting Management............................................................................................................................... 96
Checking Time Zone Setting Status.............................................................................................................. 96
Enabling the Time Zone Setting.................................................................................................................... 97
Disabling the Time Zone Setting................................................................................................................... 97
Managing DBC and User Data in the BAR Portlets............................................................................................... 98
Backing Up DBC and User Data....................................................................................................................99
Restoring DBC and User Data..................................................................................................................... 100
Managing DBC and User Data in the Command Line........................................................................................ 102
Backing Up DBC and User Data..................................................................................................................102
Restoring DBC and User Data..................................................................................................................... 104
Protecting the DSC Repository................................................................................................................................ 106
Recovering the DSC Repository...............................................................................................................................106
Recovering from a Failed or Aborted DSC Repository Restore Job................................................................... 108
Running DSA Jobs Using Crontab..........................................................................................................................108
Scheduling Jobs Using NetBackup.......................................................................................................................... 109
NetBackup Policy Configuration.................................................................................................................109
NetBackup Schedule Policy Configuration................................................................................................ 109
DSA Job Migration to a Different Domain............................................................................................................ 110
Restoring to a Different DSA Domain........................................................................................................110
Exporting Job Metadata................................................................................................................................ 112
Querying NetBackup Backup IDs................................................................................................................112
Querying Backup IDs.................................................................................................................................... 113
Importing Job Metadata................................................................................................................................113
Updating a Target Group Map.................................................................................................................... 114
Validating Job Metadata [Deprecated]....................................................................................................... 115
Restoring a Migrated Job.............................................................................................................................. 116
Appendix B:
Rules for Restoring and Copying Objects........................................................119
Selective Backup and Restore Rules........................................................................................................................ 119
Appendix C:
XML Values.................................................................................................................................................. 121
Values for XML Elements.........................................................................................................................................121
Appendix D:
Teradata DSA Commands....................................................................................................... 125
About Using DSA Commands.................................................................................................................................125
abort_job..................................................................................................................................................................... 125
activate_job................................................................................................................................................................. 127
config_aws...................................................................................................................................................................127
config_azure............................................................................................................................................................... 129
config_dd_boost.........................................................................................................................................................130
config_disk_file_system............................................................................................................................................ 131
config_general............................................................................................................................................................ 133
config_media_servers................................................................................................................................................134
config_nbu.................................................................................................................................................................. 135
config_repository_backup........................................................................................................................................ 137
config_systems........................................................................................................................................................... 138
config_target_groups.................................................................................................................................................140
config_target_group_map........................................................................................................................................ 145
consolidate_job_logs................................................................................................................................................. 147
create_job.................................................................................................................................................................... 147
ddls............................................................................................................................................................................... 154
ddrm............................................................................................................................................................................ 155
delete_component......................................................................................................................................................156
delete_job.................................................................................................................................................................... 157
delete_target_group_map.........................................................................................................................................158
disable_component....................................................................................................................................................159
enable_component.................................................................................................................................................... 159
export_config..............................................................................................................................................................160
export_job................................................................................................................................................................... 161
export_job_metadata.................................................................................................................................................162
export_repository_backup_config.......................................................................................................................... 163
export_target_group_map........................................................................................................................................164
import_job_metadata................................................................................................................................................165
import_repository_backup_config..........................................................................................................................166
job_status.................................................................................................................................................................... 166
job_status_log.............................................................................................................................................................167
list_access_module.....................................................................................................................................................168
list_components......................................................................................................................................................... 169
list_consumers............................................................................................................................................................170
list_general_settings.................................................................................................................................................. 171
list_job_history...........................................................................................................................................................171
list_jobs........................................................................................................................................................................172
list_query_backupids.................................................................................................................................................173
list_query_nbu_backupids........................................................................................................................................174
list_recover_backup_metadata................................................................................................................................ 175
list_repository_backup_settings.............................................................................................................................. 175
list_save_sets...............................................................................................................................................................175
list_target_group_map.............................................................................................................................................. 177
list_validate_job_metadata....................................................................................................................................... 177
object_release..............................................................................................................................................................178
purge_jobs...................................................................................................................................................................179
query_backupids........................................................................................................................................................ 180
query_nbu_backupids............................................................................................................................................... 180
recover_backup_metadata........................................................................................................................................181
retire_job..................................................................................................................................................................... 182
run_job........................................................................................................................................................................ 182
run_repository_job.................................................................................................................................................... 184
set_status_rate............................................................................................................................................................ 185
system_health............................................................................................................................................................. 186
sync_save_sets............................................................................................................................................................ 187
update_job.................................................................................................................................................................. 188
validate_job_metadata.............................................................................................................................................. 189
List of Tables
Table 1: DSA Configuration Commands................................................................................................................. 67
Table 2: DSA Operating Commands.........................................................................................................................69
Preface
Revision History
December 2016 (Release 15.11.03)
• Added config_azure
• Updated config_target_groups
• Updated list_components
• Updated XML Values

August 2016 (Release 15.11.02)
• Updated database versions
• Added config_aws
• Updated config_target_groups
• Removed Aster

February 2016 (Release 15.11)
Updated for the DSA 15.11 release, including information on:
• How to back up, restore, edit, and clone Aster logical and physical jobs
• Alert messaging, including how to use the BAR Setup portlet to configure alerts
• Log consolidation
• config_aster_systems command
• consolidate_job_logs command
• list_system_status command for Aster systems only
• pause_job command for Aster physical backup jobs
• purge_jobs command
• resume_job command for Aster physical backup jobs
• Configuring Repository Backup Job Alerts
Audience
This guide is intended for use by:
• Database administrators
• System administrators
• Software developers, production users, and testers
The following prerequisite knowledge is required for this product:
• Dual-active systems
• Teradata Database
• Teradata system hardware
Supported Releases
This document supports the following versions of Teradata products.
• Teradata Database:
∘ 15.10
∘ 15.00.02
∘ 14.10.04
• Teradata DSA:
∘ 15.11
• Teradata Viewpoint:
∘ 15.11
∘ 15.10
Related Documentation
Documents are located at http://www.info.teradata.com.
Teradata BAR Backup Application Software Release Definition, B035-3114
Summarizes new features and fixed issues associated with the release.

Data Stream Extensions Installation, Configuration, and Upgrade Guide for Customers, B035-3151
Describes how to configure Data Stream Extensions software and devices.

Teradata Viewpoint User Guide, B035-2206
Describes the Teradata Viewpoint portal, portlets, and system administration features.

Teradata Viewpoint Installation, Configuration, and Upgrade Guide for Customers, B035-2207
Describes how to install Viewpoint software, configure settings, and upgrade a Teradata Viewpoint server.
Product Safety Information
This document may contain information addressing product safety practices related to data or property
damage, identified by the word Notice. A notice indicates a situation which, if not avoided, could result in
damage to property, such as equipment or data, but not related to personal injury.
Example
Notice:
Improper use of the Reconfiguration utility can result in data loss.
CHAPTER 1
Overview
Introduction to Data Stream Architecture
Teradata Data Stream Architecture (DSA) enables you to back up and restore data from your Teradata
database using Teradata Viewpoint portlets: BAR Setup and BAR Operations. The portlets provide user
interfaces to Teradata DSA that are similar to other Teradata ecosystem components. This integration
leverages Viewpoint account management features and enhances usability. Teradata DSA also provides a
command-line utility that you can use to configure, initiate, and monitor backup and restore jobs.
Teradata DSA is an alternative to the ARC-based BAR architecture that uses the Teradata Tiered Archive/Restore Architecture (TARA) user interface. It offers potentially significant improvements in performance and usability. Teradata DSA can co-exist with ARC-based BAR applications on the same BAR hardware,
although the resulting backup files are not compatible with both tools. ARC cannot restore a DSA backup
job. However, you can migrate the object list from an existing ARC script into a DSA backup job.
Data Stream Extensions and Data Stream Utility
Beginning with DSA 15.10, the product has been rebundled into two components: Data Stream Utility
(DSU) and Data Stream Extensions (DSE).
DSE offers BAR portlet and command-line functionality, plus support for third-party backup applications
such as Veritas NetBackup. DSE is equivalent to the Teradata DSA product prior to 15.10.
DSU offers BAR portlet and command-line functionality, but does not offer third-party backup application
support. DSU is a solution offered for sites without a need for the extended footprint offered by Teradata
DSE. DSU is for use only with Teradata databases. DSU uses disk file systems, EMC Data Domain, AWS S3,
or Azure Blob as backup storage devices, offering a simple and affordable way to implement DSA software.
In a typical use case, the DSA Network Client is installed on the Teradata nodes, the DSC server is provided
in a VM format, and a simple NFS environment is set up for use as a storage location for the backup files. A
managed storage server can also act as a host server to the NFS environment if needed. When a Data
Domain unit is used, EMC Data Domain Boost™ for Data Stream Utility (DD Boost) allows a direct
connection to the unit without using NetBackup.
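As a sketch of that typical DSU setup, the NFS environment on the managed storage server might be prepared as follows. The paths, host names, and export options here are placeholders and assumptions, not Teradata-mandated values; follow your site's storage standards.

```shell
# On the managed storage server: export a directory to hold backup files.
# Export options shown are illustrative only.
echo '/var/opt/dsu_backups media-server-1(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# On each media server: mount the share that will serve as the backup target.
mkdir -p /mnt/dsu_backups
mount -t nfs storage-server:/var/opt/dsu_backups /mnt/dsu_backups
```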
Server Functionality
Server functionality includes the following servers:
• DSC server, which controls all BAR operations. A DSC server must have the Data Stream Controller
(DSC) installed.
• Media server, which writes to the target storage device. A media server must have the ClientHandler
component installed.
A machine in a DSA configuration can include one or more different types of server functionality. For
example, the managed storage server in a DSU configuration functions as disk storage, the DSC server, and a
media server. In another configuration, the DSC server could be a standalone server.
Backup Solutions
DSA backup solutions can include any of the following:
• Data Domain
• Disk file system
• Third-party backup application software, such as NetBackup
• AWS S3
• Azure Blob
BAR Integration
Teradata Data Stream Architecture (DSA) features a Data Stream Controller (DSC) that controls BAR
operations and enables communication between DSMain, the BAR portlets, and the DSA Network Client.
Teradata DSA records system setup information and DSA job metadata in the DSC Repository.
Data Stream Controller (DSC)
The Data Stream Controller (DSC) controls all BAR operations throughout an enterprise environment.
The DSC is notified of all requested BAR operations and manages resources to ensure optimal system
backup and restore job performance.
DSC Repository
The DSC repository is the storage database for job definitions, logs, archive metadata, and hardware
configuration data. The DSC manages the repository using JDBC and is the only client component that
can update the repository metadata.
DSMain
DSMain runs on the Teradata nodes and receives job plans from the DSC. Job plans include stream lists,
object lists, and job details. DSMain tracks the stream and object progress through the backup and
restore process and communicates with the DSA Network Client.
DSA Network Client
The DSA Network Client controls the data path from DSMain to the storage device and verifies
authentication from the database. The DSA Network Client then opens the connection to the
appropriate device or API.
JMS Broker
Communication between the DSA components is performed using a JMS broker.
BAR Portlets
The BAR Setup and BAR Operations Viewpoint portlets manage the DSA configuration and job
operations.
DSA Command-Line Interface (CLI)
The DSA command-line interface provides an alternative to the BAR portlets. The DSA command-line
interface allows job launch, monitoring, and scheduling capabilities. It also provides commands to
define DSA configuration.
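For illustration, a few of the commands documented later in this guide might be invoked as follows. The launcher name and flags are assumptions based on the command list in this guide; check the command reference for the exact syntax your release uses.

```shell
# List the jobs defined in the DSC repository (flag names are assumptions).
dsc list_jobs

# Launch a backup job by name, then poll its progress.
dsc run_job -n daily_sales_backup
dsc job_status -n daily_sales_backup
```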
Figure 1: DSA Components
Notice:
A CAM daemon resides on the Viewpoint server shown in the diagram. The CAM daemon is part of the
alert messaging system. See Alerts for more information.
BAR Job Workflow
The BAR Operations portlet or DSA command-line interface communicates with the DSC when a backup,
restore, or analyze job is created or run. The DSC controls the job flow by sending the job processing
instructions to the appropriate DSA component. The DSC receives job status information from the DSA
component and also notifies the database and other client applications of any action taken on a specific job.
The job definition is stored in the DSC repository.
Figure 2: DSA Data Path
Initial BAR Setup and BAR Job Creation Using the BAR Portlets
Setting up your BAR environment is a prerequisite for backing up your Teradata Database. The systems
configured and enabled in the BAR Setup portlet are available in the BAR Operations portlet. BAR setup
configurations include systems and nodes, media servers, backup applications, target groups, and alerts.
These setup configurations are stored in the DSC repository, which manages BAR operations.
After configuring your BAR environment, you can use the BAR Operations portlet to create jobs, manage
job settings, and monitor job progress.
Related Information
Configuring BAR Setup
Managing Jobs
About Permissions
Users in a Viewpoint role that has been granted access to the BAR Setup portlet can use the portlet to add,
remove, or edit the following resources in a BAR system configuration:
• Teradata Database systems
• Media Servers
• Backup Solutions
• Target Groups
Viewpoint administrators can grant the BAR administrators privilege to any role. A BAR administrator has
permissions to run all BAR Operations.
Users who are not BAR administrators are only able to perform those actions on their own jobs, unless the
job owner gives permission to that user.
About Restrictions
Teradata DSA currently has the following limitations:
• There can only be one target device (tape drive or disk) per NetBackup policy.
• Multi-byte characters are not supported in the DSA command-line interface.
• Backup and restore jobs cannot be run when an AMP is down.
• Backup and restore jobs are subject to a database lock limit of up to 5,000 database objects for DBS releases 15.0 and below. This limitation no longer applies in DBS release 15.10.
• The same user ID can run only one restore job at a time. If the user is already logged on and is running a BAR operation (including legacy BAR jobs), a DSA restore job will be aborted.
• If the Teradata Database system restarts during an archive or restore operation, the host utility lock will remain on the remaining unprocessed objects. The user must release the lock manually.
• The number of AMP Worker Tasks (AWT) dictates the number of DSA jobs that can run in parallel to allow for parallelism during restore.
• A maximum of three concurrent restore jobs can be run on a system.
• Up to 20 backup jobs can be run concurrently (based on 80 AWT).
• DSA does not support any Teradata Database version that is higher than the DSA version.
Component Deletion
A BAR component is an entity or defined relationship, such as a media server configuration, that is
associated with a Teradata DSA job. A BAR component cannot be deleted from the Teradata DSA
configuration if it is specified in a job, regardless of whether the job is in an active or retired state. Generally,
in order for a component to be deleted, any job that references the component must be deleted first. There is
one exception: If the only reference to the BAR component is in a new job that has never been run, the
component can be deleted.
Copy and Restore Definitions
The difference between copy and restore depends on the operation being performed.
A restore operation moves data to one of the following locations:
• From archived files back to the same Teradata Database from which it was archived
• To a different Teradata Database if the DBC database from the source system is already restored to the
destination system
A copy operation moves data from an archived file to any existing Teradata Database and creates a new
object if the object does not already exist on that target database.
You can copy an object to create a new object with a different name, or keep the same name as the source
object. In database-level copy operations, you can copy to a database with a different name or maintain the
same name as the source.
Copy Restrictions
To use the copy operation, the following conditions must be met:
• Restore access privileges on the target database or table are required.
• A target database must exist to copy a database or an individual object.
• When copying a table that does not exist on the target system, you must have CREATE TABLE and
RESTORE database access privileges for the target database.
No Support for Copying DBC Database
Teradata DSA does not support copying the DBC database, which must be restored after a successful
Teradata Database system initialization. Refer to Restoring DBC and User Data for the process to restore the
DBC database.
No Support for Copying SYSUDTLIB Database
Teradata DSA does not support copying the SYSUDTLIB database, which is restored when the DBC
database is restored.
Restriction on Copying TD_SERVER_DB Database
In Teradata Database 15.10 and higher, the TD_SERVER_DB database cannot be copied to a different name.
Copy Operations for Objects and Data Table Archives
Copy Operations for Large Objects (LOBs)
You can copy tables that contain large object columns. You can also copy large object columns to a system
that uses a hash function that is different from the hash function used for the copy operation.
Copy Operations for Join Indexes and System Join Indexes (SJIs)
You can restore and copy join indexes, hash indexes, and system join indexes to the same name. You can
also copy system join indexes to a different database name.
Copy Operations for Data Table Archives
When you copy a data table to a new environment, Teradata DSA creates a new table or replaces an existing
table on the target Teradata Database.
If the target database:
• Does not have a table with the same name as the table you are archiving, a new table is created.
• Has a table with the same name, the table is replaced. The existing table data and table definition on the
target database are replaced by the data from the archive.
When table data is copied, the following changes are allowed:
• Changing a fallback table to a non-fallback table
• Changing the data temperature
• Changing the data compression
HUT Locks in Copy and Restore Operations
Restore and copy operations place exclusive host utility (HUT) locks on all objects to be restored or copied.
SQL User-Defined Function
When you copy an SQL User-Defined Function, the corresponding DBC.DBASE row is locked with a write hash lock. This prevents the user from issuing any Data Definition Language statements against that database during the dictionary phase of the copy operation.
About Backup Types
The Teradata incremental backup and restore feature is available when running the combination of Teradata
DSA 14.11 (or later) and Teradata Database 14.10 (or later). Teradata implements incremental database
backup using the Changed Block Backup (CBB) feature. With CBB, a Teradata Database System will only
back up data blocks that have changed since a prior backup operation. This can greatly reduce the time and
storage required to perform backups, at the cost of an increase in overall restore time. Overall restore time is
increased because DSA has to read multiple datasets from disk or tape media and construct the complete
dataset to restore. Incremental backup is applicable to both standard backup and online archive.
Incremental backup is appropriate for:
• Databases and tables that have a very low change rate compared to table size
• Partitioned Primary Index (PPI) tables for which changes are limited to one or a few partitions
The incremental backup feature allows three types of backups: full, delta, and cumulative.
Backup Types
Full
A full backup archives all data from the specified objects. This backup takes the longest time to complete,
and uses the most backup storage space. However, a full backup has the shortest restore time, since all
data required to restore the objects will be contained within a single backup image.
Note:
A full backup must be initially performed prior to any other type of backup. The full backup will be
used as a baseline for further incremental steps.
Delta
A delta backup archives only the data which has changed since the last backup operation. This backup
will complete in the shortest time and use the least storage space. However, a delta backup will increase
the time to restore the database, as it potentially adds many backup images that must be processed before
a set of objects can be fully restored.
Cumulative
A cumulative backup archives the data which has changed since the last full backup was run. This
backup type consolidates changes from multiple delta backups or cumulative backups before a full
backup is run. A cumulative backup has a shorter database restore time than a series of delta or
cumulative backups, and it takes less time and space than a full backup.
Guidelines for Incremental Backups
If any save set is removed from the DSC repository, any subsequent run dependent on the removed save set
is invalid. A save set would be removed, for example, if it was expired on the Backup Solution prior to the
sync_save_sets command being run.
Regardless of the type of incremental backup performed, the dictionary information for all objects is fully
backed up. This ensures that all non-data objects and object definitions are fully recovered to the point in
time in the event of a restore from any increment.
In the event of a restore or analyze_validate, you select the backup image corresponding to the point in time
to which the objects should be restored. This can be a full, delta, or cumulative backup image. For a given
restore scenario (point in time), the following images are processed, relative to the selected backup image:
• The most recent full backup
• The most recent cumulative backup, if any, provided it is newer than the full backup
• Any delta backups between the most recent full or cumulative backup and the selected restore point in time
In the event of an analyze_read, only the selected save set is analyzed.
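The image-selection rule can be sketched as a small script. This is an illustrative model, not a DSA utility: given save sets listed oldest first as "sequence type" pairs (F for full, C for cumulative, D for delta) on stdin, it prints the sequence numbers of the images a restore to the given point would process.

```shell
# Illustrative model of the restore-chain rule (not part of the DSA CLI).
# stdin: one "sequence type" pair per line, oldest first.
# $1: sequence number of the selected restore point.
restore_chain() {
    awk -v target="$1" '
        $1 + 0 > target + 0 { exit }                   # ignore images after the restore point
        $2 == "F" { full = $1; cum = ""; deltas = "" } # newest full resets the chain
        $2 == "C" { cum = $1; deltas = "" }            # newest cumulative supersedes older deltas
        $2 == "D" { deltas = deltas " " $1 }           # deltas since the last full/cumulative
        END { print full (cum != "" ? " " cum : "") deltas }
    '
}

# The Sunday-to-Saturday strategy below, restoring Friday's delta (save set 6):
printf '1 F\n2 D\n3 D\n4 C\n5 D\n6 D\n' | restore_chain 6   # prints: 1 4 5 6
```

The output matches the worked example later in this chapter: the prior full, the prior cumulative, and the deltas after it.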
Note:
Running a cumulative or delta incremental backup of a DBC ALL backup job does not include the DBC
system tables. The DBC database is used when you need to restore the whole system after a system
initialization (sysinit). Therefore, run a separate FULL backup of the DBC database for every incremental
backup job cycle run.
Allowing Incremental Jobs Based on Full or Cumulative Backup Jobs
Completed with Errors
See the Usage Notes in config_systems.
Example of a Backup Strategy
Consider a site that performs a full backup every Sunday, a cumulative backup every Wednesday, and delta
backups on the other days:
Day          Backup Type
Sunday       F
Monday       D
Tuesday      D
Wednesday    C
Thursday     D
Friday       D
Saturday     D
The full backup every Sunday contains all of the data for all of the tables. The delta backups on Monday and
Tuesday contain only changed data blocks for those particular days. The Wednesday cumulative backup
contains all changes from the Monday and Tuesday delta backups, plus any new changes. The Thursday,
Friday, and Saturday delta backups contain only changes on each of those days.
If the site were to perform a restore of the delta backup image produced on a Friday, the following images
would be restored:
• The full backup from the prior Sunday
• The cumulative backup from the prior Wednesday
• The delta backup from Thursday
• The delta backup from Friday
Incomplete Backups
The integrity of the database data is compromised if any part of the incremental backup is lost or corrupted.
If any delta, cumulative, or full image required for a restore is missing or corrupt, a restore from any
dependent backup image fails.
Incomplete backups are not subject to this limitation. An incomplete backup occurs if any incremental
backup completed with errors, or was aborted and not re-run. In the event of a failed backup, prior and
subsequent incremental backups are not affected. Similarly, when a backup completes with non-fatal errors,
prior and future incremental backups do not use the backup image that received an error. Instead,
subsequent incremental backup jobs use the most recent successful backup as the base. Therefore, it is
important to fix the underlying cause of any error that occurs during incremental backups, and to re-run the
incremental backup at the next available opportunity.
The following situations may also require that a new full backup be generated before any further delta or cumulative backups are run:
• The system has gone through SYSINIT and/or a full DBC database restore since the most recent full backup
• The system has had an access module processor (AMP) reconfiguration or rebuild since the most recent
full backup
• The object list in the backup has been changed
• The dictionary or data phase in the backup job has been changed
• The target group in the backup job has been changed
• Check retention invalidates or removes the last full save set in the DSC repository
• The backup job is in the NEW state
The following backup jobs are always run as a full backup:
• Any DBC only backup job
• Any backup job in dictionary phase
Active and Retired Jobs
An active job refers to any job that is not scheduled to be deleted. Active jobs can be run from the BAR
Operations portlet or the DSA command-line interface. When the job has a deletion date, it is considered
retired and cannot be run. You can still access the job history, which is the log of each specified job run, until
the deletion date of the job. The job and job history are deleted from the DSC repository at the deletion date
of the job.
A DSC repository job is specifically designed to back up or restore the DSC repository. A DSC repository job
does not have an active or retired designation, so DSC repository jobs are always considered active and
cannot be retired.
Command-Line Interface and BAR Portlets
Teradata DSA has two user interfaces. The first, the BAR Setup and BAR Operations portlets, consists of Viewpoint portlets for DSA configuration and management. The second is a standard command-line interface that provides all of the functionality of the portlets, including DSA component configuration.
Notice:
Enabling General > Security Management in the BAR Setup portlet increases security for the DSA
environment. This setting requires users to provide Viewpoint credentials to execute some commands
from the DSA command-line interface.
A DSA administrator might consider using the DSA command-line interface rather than the BAR portlets
for specific situations, such as the following:
• Job scheduling, because this cannot be administered in the BAR Operations portlet.
• Exporting an XML file associated with a job created in the portlet.
• Streamlining the updates of multiple job definitions.
• Using scripts to automate DSA commands.
Because the BAR portlets use caching to minimize the impact on the DSC, allow a 30-second interval between DSC command requests when automating jobs through the DSA command-line interface.
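A scripted run of several jobs honoring that interval might look like the following. The job names are placeholders, and the `dsc` invocation and flags are assumptions based on the commands listed in this guide.

```shell
#!/bin/sh
# Run a series of DSA backup jobs, spacing DSC requests 30 seconds apart
# because the BAR portlets rely on cached DSC state.
for job in sales_full hr_full finance_full; do
    dsc run_job -n "$job"
    sleep 30   # interval between DSC command requests
done
```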
Multiple DSA Domains
If your site has multiple DSA domains, the BAR admin user can export metadata from DSA domain A and
import it later to DSA domain B using the DSA command-line interface. This migration is performed for
each job and is necessary before any restore operation can be done on the target DSA domain B. As part of
the migration, the administrator is responsible for transferring the related information for the NetBackup
catalog for DSE operations. For DSU, the administrator is responsible for the management of related files on
the disk file system.
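For example, migrating one job's metadata from domain A to domain B might look like the following sketch. The file path and flag names are assumptions; verify them against the export_job_metadata and import_job_metadata command descriptions in this guide.

```shell
# On a machine connected to DSA domain A: export the job metadata.
dsc export_job_metadata -n sales_full -f /tmp/sales_full_meta.xml

# Transfer the file (and, for DSE, the related NetBackup catalog data;
# for DSU, the backup files on the disk file system) to domain B, then:
dsc import_job_metadata -f /tmp/sales_full_meta.xml
```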
Related Information
DSA Job Migration to a Different Domain
Exporting Job Metadata
Importing Job Metadata
Validating Job Metadata [Deprecated]
CHAPTER 2
Teradata BAR Portlets
BAR Setup
You can use the BAR Setup portlet to designate the hardware and software to use when backing up your
database. Use this portlet to configure the following:
• Systems and nodes
• Media servers
• Third-party backup software, such as NetBackup
• Hardware and software groups to use as targets for backup operations
• Logical mappings between different target groups for restoring to different client configurations
• Custom alerts (the Alert Setup portlet is used in addition to the BAR Setup portlet to configure the alerts)
After the configuration is complete, the BAR Setup portlet employs the DSC Repository to save all of your
configuration settings. These configured systems, media servers, backup solutions, and target groups are
available for use in backup, restore, and analyze jobs. You can view the alerts that you configure in the BAR
Setup and Alert Setup portlets through text, email, and the Alert Viewer portlet.
Configuring BAR Setup
This task outlines configuration tasks involved in the BAR Setup portlet to make systems, media servers,
backup applications, and target groups available in the BAR Operations portlet.
1. Adding or Editing a System and Node Configuration
2.
3.
4.
5.
6.
Notice:
Add and enable systems in the Monitored Systems portlet to make them available in the BAR Setup
portlet.
Adding a Media Server
A media server must be defined so it can be made available for target groups.
Adding a Disk File System
To back up and restore data using a Disk File system, you must add and configure the disk file system
using the BAR Setup portlet.
Adding a NetBackup Server
To back up and restore data using the NetBackup third-party backup application, you must add and
configure a NetBackup server using the BAR Setup portlet.
Adding a DD Boost Server
To back up and restore data using the DD Boost third-party backup application, you must add and
configure a DD Boost server using the BAR Setup portlet.
Adding or Copying a Target Group
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
23
Chapter 2: Teradata BAR Portlets
BAR Setup
In order for data to be backed up to a device, a target group must be created to configure media servers to
the backup application.
7. General DSA Settings
Allows you to set general DSA settings.
8. Scheduling a Repository Backup
Describes how to schedule a backup of the DSC repository.
General DSA Settings
The BAR Setup portlet General category allows you to set the following DSA settings: threshold, security
management, log levels, and how long to keep retired jobs.
Setting
Description
DSC Repository Warning
Thresholds
Specifies the maximum amount of data to store in your DSC repository. A
repository size below 85 percent of the threshold is a normal state for BAR
operations. After 85 percent of the size threshold is met, warning messages
are generated. After 95 percent of the size threshold is met, all BAR jobs that
create more data on the repository receive an error message and are not
permitted to run. The repository database perm space needs to be increased
or jobs will have to be deleted in order to continue using DSA at that point.
Notice:
After increasing perm space, restart the DSC service so that the change
takes effect immediately.
Security Management
Enables Teradata Viewpoint authentication on the DSA command-line
interface. If checked, a user submitting certain commands from the
command-line interface is required to enter a valid Teradata Viewpoint user
name and password.
BAR Logging
Specifies the level of BAR log information to display for the Data Stream
Controller and the DSA Network Client. Extensive logging information is
typically only useful for support personnel when gathering information
about a reported problem.
Error
Enables minimal logging. Provides only error messages. This is the
default setting.
Warning
Adds warning messages to error message logging.
Info
Provides informational messages with warning and error messages to
the job log.
Debug
Enables full logging. All messages, including debug, are sent to the job
log.
24
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
Chapter 2: Teradata BAR Portlets
BAR Setup
Setting
Description
Notice:
This setting can affect performance.
Delete Retired Jobs
Specifies the deletion settings used when backup jobs are retired in the BAR
Operations portlet.
After
Sets the number of days from the date a job is retired to wait before
deleting the job.
Never
Prevents deletion of retired jobs.
Systems and Nodes
You can add, configure, and set stream limits for systems and nodes in the BAR Setup portlet and by using
DSA setup commands from the DSA command-line interface. After you enable configured systems, they are
available for backup and restore jobs in the BAR Operations portlet and for DSA operation commands.
Adding or Editing a Teradata System and Node
Prerequisite
Add and enable Teradata Database systems in the Monitored Systems portlet to make them available in
the BAR Setup portlet.
You must configure the systems, nodes, backup solutions, and target groups in the BAR Setup portlet to
make them available in the BAR Operations portlet.
1. From the Categories list, click Systems and Nodes.
2. Click next to Systems.
3. Select Add Teradata System.
The System Details screen allows you to enter the information for the system.
4. For System Details, enter the following:
Option
Description
System Name
Choose the system from the drop-down list.
Note:
You can add a system from the Monitored Systems portlet.
System Selector
[Optional] When editing a system, to change the system selector, click Update.
The credentials to the system are verified before the update can occur.
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
25
Chapter 2: Teradata BAR Portlets
BAR Setup
Option
Description
Note:
You must stop and start DSMain in Teradata Database after changing the
system selector.
SSL Communication
[Optional] Select the Enable SSL over JMS Communication checkbox to
enable SSL communication.
Note:
You must add the TrustStore password created during SSL setup. You must
stop and start DSMain in Teradata Database after enabling SSL
communication.
Database Query
Method
Choose BASE_VIEW or EXTENDED_VIEW. EXTENDED_VIEW allows for
extra database security, but may affect system performance.
Default Stream Limits Set the default limits for each node configured with the system. For each node
For Nodes
is the maximum amount of concurrent steams allowed per node. For each job
on a node is the maximum amount of concurrent streams allowed for each job
on the node.
5. Click Apply.
6. Add a Node to the system configuration.
Notice:
All nodes, including channel and standby nodes, must be included in system configuration.
7. Click Activate System in the System Details view.
Note:
Do not click Apply. Clicking Apply will de-activate the system. The exception to this is if 30 minutes
has elapsed since you have added the system or node.
Note:
After enabling SSL communication, stop and then restart DSMain in the Teradata Database.
8. Using the following commands, restart DSMain on the target system from the Database Window (DBW)
consoles supervisor screen:
a) Enter start bardsmain –s (this stops DSMain).
b) Enter start bardsmain (this starts DSMain).
9. After bardsmain has started, click Activate System in the System Details view to enable the system and
node configuration in the BAR Operations portlet.
Note:
The repository backup system is preconfigured on the portlet, but you must run the Update on the
System Selector, restart bardsmain on the DSC repository, and then click Apply before you activate
the system for use.
26
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
Chapter 2: Teradata BAR Portlets
BAR Setup
Deleting a System
Use the following steps to delete a system from the BAR Setup portlet, which removes it as a source for
restore or backup jobs from the BAR Operations portlet.
Note:
You cannot delete a system if it is in use by a job or the system is marked for repository backup.
1. From the Categories list, click Systems and Nodes.
next to the system to be deleted.
2. From the Systems list, click
A confirmation message appears.
3. Click OK.
Adding a Node
1. From the Categories list, click Systems and Nodes.
2. From the Systems list, click a system.
3. From the Setup list, click Nodes.
4. Click next to Nodes.
5. Enter a Node Name.
6. Enter a node address in the IP Address box.
7. [Optional] Add and remove addresses by clicking the and
buttons.
8. [Optional] Enter a stream limit for each node and for each job on the node.
Note:
A default limit is set by the system configured with the node. Setting the stream limits to 0 for channel
and standby nodes is not required.
9. Click Apply.
Deleting a Node
1. From the Categories list, click Systems and Nodes.
2. From the Systems list, click the system to which the node is attached.
3. From the Setup list, click Nodes.
next to the node you want to delete.
4. From the Nodes list, click
A confirmation message appears.
5. Click OK.
Media Servers
Media servers manage data during system backups and restores. Media servers are made available to your
BAR environment as soon as the DSA software is installed and running. DSA administrators can then add or
delete media servers to their BAR configuration, and assign media servers to target group configurations, in
the BAR Setup portlet or through the command-line interface by using DSA setup commands
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
27
Chapter 2: Teradata BAR Portlets
BAR Setup
Adding a Media Server
Media servers manage data during backup jobs.
1. From the Categories list, click Media Servers.
2. Click next to Media Servers.
3. Enter a Media Server Name.
4. Verify that the BAR NC Port number of the BAR network server matches the server port setting in the
DSA client handler property file.
The default port is 15401.
Note:
If you change the port number in the clienthandler.properties file, you must restart the DSA
Network Client with the restart-hwupgrade option.
5. Enter an address in the IP Address box.
This is the address of the media server. Additional addresses can be entered for network cards that are
attached to the server. If there are multiple instances of DSA Network Client, specify separate IP
addresses. For example, configure the first DSA Network Client instance with the first IP address and the
second DSA Network Client instance with the second IP address.
Note:
IP addresses are not validated.
6. Enter an address in the Network Mask box.
Refer to Network Masks for more information.
7. [Optional] Add and remove addresses by clicking the
8. Click Apply.
and
buttons.
Deleting a Media Server
You can delete a media server from the BAR Setup portlet so that it is unavailable for target groups in the
BAR Operations portlet.
Note:
You cannot delete a media server that is currently configured to a target group.
1. From the Categories list, click Media Servers.
2. Click
next to the media server you want to delete.
3. Click OK.
Network Masks
The subnet mask used by DSA is a logical mask, that is, it is treated as a mask to determine what connection
paths are allowed between Teradata nodes and media servers. You can use a DSA network mask to create a
data path between a Teradata node and a media server if the Teradata node and media server are physically
connected. The DSA network mask setting does not override any physical subnet mask.
Use the default Network mask populated by DSA that is based on the data path between Teradata Nodes &
Media Servers.
28
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
Chapter 2: Teradata BAR Portlets
BAR Setup
Note:
Do not change the default Network mask. If there is no matching physical network connection, the job
will be able to submit but still fail in the end.
Note:
Remove the network interfaces not used in the data path in the DSC Setup portlet for media server.
Guidelines for DSA Network Mask
• Teradata nodes and media servers should be on the same physical subnet.
Different Physical Subnets
• If Teradata nodes and media servers are on different physical subnets, but can communicate with each
other, open up the network mask as relevant.
Backup Solutions
Backup solutions are third-party software options that transfer data between a storage device and a database
system. You can configure the third-party server software to:
• Indicate the media server on which the third-party backup software is located
• Customize setup options for each server
Adding a Disk File System
When using a disk file system to back up and restore data, you must add and configure the disk file system
using the BAR Setup portlet.
Note:
System names and open file limits are tied to media servers during the target group configuration.
1. From the Categories list, click Backup Solutions.
2. From the Solutions list, click Disk File System.
3. From the Disk File System Details screen, click to add a disk file system.
4. Configure the following:
a) Enter a File system name and path with no spaces.
Note:
File system names must be unique, fully qualified path names that begin with a forward slash, for
example, /dev/mnt1.
Note:
The disk file system used by the repository target group cannot be used by the operational target
group, and vice versa.
Note:
File system names cannot differ by case alone. For example, both /dev/mnt1 and /dev/Mnt1
cannot be configured.
b) Enter the maximum number of open files allowed.
c) Click
to add additional disk file system names and open file limits.
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
29
Chapter 2: Teradata BAR Portlets
BAR Setup
5. Click Apply.
Adding a NetBackup Server
When using a NetBackup server to back up and restore data, you must add and configure the NetBackup
server using the BAR Setup portlet.
Note:
NetBackup policies are tied to media servers during the target group configuration. It is important that
the policies entered for your NetBackup configuration coincide with the policies intended for the media
server configuration mapped as a target.
1. From the Categories list, click Backup Solutions.
2. From the Solutions list, click NetBackup.
3. Click next to Servers.
4. From the NetBackup Details screen, configure the following
Option
Description
General • Enter a Nickname using alphanumeric characters and underscores, but no spaces.
• Enter the server IP address or DNS in Server Name (IP/DNS).
Policies • Enter a Policy Name.
Note:
Policy names are case-sensitive.
• Enter the number of Storage Devices associated with the policy.
You can use alphanumeric characters and underscores, but no spaces.
5. Click Apply.
Adding a DD Boost Server
When using a DD Boost server to back up and restore data, you must add and configure the DD Boost
server configuration information using the BAR Setup portlet.
Note:
DD Boost storage units are tied to media servers during the target group configuration. It is important
that the storage units entered for your configuration coincide with the storage units intended for the
media server and device configuration mapped as a target.
1. From the Categories list, click Backup Solutions.
2. From the Solutions list, click DD Boost.
3. Click next to Servers.
4. From the DD Boost Details screen, configure the following:
30
Option
Description
General
• Enter a Nickname using alphanumeric characters and underscores, but no spaces.
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
Chapter 2: Teradata BAR Portlets
BAR Setup
Option
Description
• Enter the server IP address or DNS in Server Name (IP/DNS).
• Enter the data domain DD Boost credentials in User and Password.
Storage
Units
• Enter a Storage unit name that matches a data domain storage unit intended for the
media server and device configuration mapped as a target.
Note:
DSC does not support the same storage unit name across different DD Boost
servers.
• Enter the maximum number of open files allowed.
• Click
to add additional storage unit names and open file limits.
5. Click Apply.
Deleting a Disk File System
Deleting a disk file system disassociates the server and its settings from the BAR Setup portlet. The server
can no longer be used as a target for backups in the BAR Operations portlet.
Note:
If a disk file system is currently configured to a target group, it cannot be deleted.
1. From the Categories list, click Backup Solutions.
2. From the Solutions list, click Disk File System.
3. Under Disk File System Details, click
A confirmation message appears.
4. Click OK.
next to the server you want to delete.
Deleting a NetBackup Server
Deleting a NetBackup server disassociates the server and its settings from the BAR Setup portlet. The server
can no longer be used as a target for backups in the BAR Operations portlet.
Note:
If a NetBackup server is currently configured to a target group, it cannot be deleted.
1. From the Categories list, click Backup Solutions.
2. Under Solutions, click NetBackup.
3. Under Servers, click
next to the server you want to delete.
A confirmation message appears.
4. Click OK.
Deleting a DD Boost Server
Deleting a DD Boost server disassociates the server and its settings from the BAR Setup portlet. The server
can no longer be used as a target for backups in the BAR Operations portlet.
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
31
Chapter 2: Teradata BAR Portlets
BAR Setup
Note:
If a DD Boost server is currently configured to a target group, it cannot be deleted.
1. From the Categories list, click Backup Solutions.
2. From the Solutions list, click DD Boost.
next to the server you want to delete.
3. Under Servers, click
A confirmation message appears.
4. Click OK.
Target Groups
Target groups are composed of media servers and devices used for storing backup data. DSA administrators
create target groups, and assign media servers and devices. Target groups are then accessible to BAR backup
jobs.
After a backup job has run to completion, you can create a BAR restore job to restore data using the same
target group as the backup job. You can also create a target group map, which allows a BAR restore job to
restore data from a different target group.
Adding or Copying a Target Group
The data from Teradata Database systems is sent through media servers to be backed up by backup
solutions. These relationships are defined in target groups, which you can create and copy.
1. From the Categories list, click Target Groups.
2. From the Target Groups list, click Remote Groups.
3. Do one of the following:
Option
Description
Add
Click
to add a remote group.
Copy
Click
in the row of the remote group you want to copy.
4. [Optional] Select the Use this target group for repository backups only checkbox to enable this
restriction.
5.
6.
7.
8.
Note:
This target group cannot be deleted or used for other jobs.
Enter a Target Group Name for the new target group.
You can use alphanumeric characters, dashes, and underscores, but no spaces.
[Optional] Select the Enable target group checkbox to enable the remote group.
Select a Solution Type.
In the Targets and the Remote Group Details section, do the following:
Option Description
Add
32
Click to add; policies and devices, storage units and open files limit, or disk file systems
and open files limit.
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
Chapter 2: Teradata BAR Portlets
BAR Setup
Option Description
Remove Click to remove; policies and devices, storage units and open files limit, or disk file systems
and open files limit.
9. Click Apply.
Deleting a Remote Group
Any target group except a repository target group can be deleted if it is not being used by a job in the BAR
Operations portlet. Repository target groups cannot be deleted. The target in a target group cannot be
deleted if a job has used the target group.
1. From the Categories list, click Target Groups.
2. From the Target Groups list, click Remote Groups.
3. From the Remote Groups list, click
A confirmation message appears.
4. Click OK.
next to the name of the remote group you want to delete.
Adding or Editing a Restore Group
The device and media servers relationships defined in target groups can be selected to create target group
maps called restore groups.
1. From the Categories list, click Target Groups.
2. From the Target Groups list, click Restore Groups.
3. Next to Restore Groups, do one of the following:
Option
Description
Add
Click
to add a restore group.
Edit
Click
in the row of the restore group you want to edit.
4. Select the Solution type from the list.
5. Select the Backup Target Group from the list.
a) [Optional] Click next to the BAR media server associated with the backup target group to view
policy and device details.
6. Select the Restore Target Group from the list.
a) [Optional] Click next to the BAR media server associated with the restore target group to view
policy and device details.
7. Click OK.
8. Click Apply.
Deleting a Restore Group
You can delete a restore group if it is not used by a job in the BAR Operations portlet.
1. From the Categories list, click Target Groups.
2. From the Target Groups list, click Restore Groups.
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
33
Chapter 2: Teradata BAR Portlets
BAR Setup
3. From the Restore Groups list, click
A confirmation message appears.
4. Click OK.
next to the name of the remote group you want to delete.
Managing the DSC Repository
DSA configuration settings and job metadata are stored in the Data Stream Controller (DSC) Repository.
You can automate a repository backup or initiate the backup manually.
A repository backup job backs up your DSC data to a target group. Any running DSC repository job
(backup, restore, or analyze) prevents jobs from being submitted and DSA configuration settings from being
changed.
Configuration settings and DSC metadata can be restored to the DSC repository from a storage device.
Notice:
If you abort a DSC repository restore job while the job is in progress or if the restore job fails, DSC
triggers a process to restore all repository tables to their initial state, which is an empty table. The current
data in the DSC repository would be lost.
Notice:
Before you can recover the DSC Repository, a DSC repository backup job and an export of the repository
backup configuration must have been completed successfully at least once. The export of the repository
backup configuration can only be performed using the DSA command line.
Notice:
Failure to perform a successful repository backup and an export of the repository backup configuration
results in an unrecoverable DSC repository in the case of a complete disaster.
Scheduling a Repository Backup
You can schedule a periodic automatic backup of the Data Stream Controller (DSC) repository data through
the BAR Setup portlet.
1.
2.
3.
4.
5.
6.
From the Categories list, click Repository Backup.
In the Frequency box, enter a number in Weeks to specify how often the backup job will run.
Select the checkboxes for the days of the week on which the backup will run.
Enter a Start Time for the backup.
Select a Target Group.
Click Apply.
Backing Up the Repository
Prerequisite
You must configure a target group before you can back up the repository.
You can back up the DSC repository immediately or by scheduling the backup.
Note:
Jobs cannot be submitted and DSA configuration settings cannot be changed during a repository backup.
34
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
Chapter 2: Teradata BAR Portlets
BAR Setup
1. From the Categories list, click Repository Backup.
2. Click Back Up DSC Now.
A confirmation message appears.
3. Click Continue.
4. [Optional] Click Abort to end the backup.
Restoring the Repository
This task describes how to restore a backup of DSC repository metadata.
Notice:
If you abort a DSC repository restore job while the job is in progress or if the restore job fails, DSC
repository metadata will be corrupted. DSC triggers a command to restore all repository tables to their
initial state, which is an empty table.
1. From the CATEGORIES list, click Repository Backup.
2. Click Restore DSC Now.
During the restore job, the BAR Setup and BAR Operations portlets are unavailable.
3. Select a save set to restore.
4. Click Continue.
[Optional] Click Abort to end the restore operation.
5. [Optional] If the restore job ends with a warning:
a) Check the job status.
In the following example, repo1_restore_job is the job name.
dsc.sh job_status -n repo1_restore_job -B
b) If the status indicates the foreign keys were not restored, run the foreign key repair script at /opt/
teradata/client/<version>/dsa/dsc/recreateFk_<version>.sh.
When the restore job is complete, the BAR Setup and BAR Operations portlets become available.
6. After the repository restore job is complete, perform a tpareset of the DSC repository database.
Alerts
Using the BAR Setup portlet, you can configure custom alerts that are triggered for specific events. Refer to
the following topics to configure alerts:
•
•
•
•
•
Configuring Repository Backup Job Alerts
Configuring Job Status Alerts
Configuring Repository Threshold Alerts
Configuring Media Server Alerts
Configuring System Alerts
In the Alert Setup portlet, you can add alert actions so that the custom alerts send a notification, or take
some other type of action, when a metric exceeds a threshold. After you add alert actions in the Alert Setup
portlet, they appear in the BAR Setup portlet.
The types of alert actions you can choose include the following:
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
35
Chapter 2: Teradata BAR Portlets
BAR Setup
• Send alerts to the Alert Viewer portlet, which provides a consolidated view for all alerts configured at the
enterprise level
• Send alerts through email or text notification
• Run a SQL query
• Notify SNMP system
• Enable configuring of Repository Backup Job Alerts
Configuring Repository Backup Job Alerts
1. In Alert Setup > Alert Presets > Action Sets, create an Action set named configRepository.
Note:
This action set must be named configRepository.
For more information on the Alert Setup portlet or Alert Viewer portlet, see Teradata Viewpoint User
Guide or refer to Alert Setup or Alert Viewer in Teradata Viewpoint help.
Configuring Job Status Alerts
You can configure multiple job status alerts using the BAR Setup portlet.
1. In the BAR Setup portlet, click Alerts.
2. Under Alert Types, click Job Status.
3. In the Alerts list, do one of the following:
a) To add an alert, click
next to Alerts.
b) To configure an existing alert, select the alert in the list.
When disabled,
4.
5.
6.
7.
8.
36
appears next to the alert.
next to the alert.
c) To delete an alert, click
In the Alert Details list:
a) If you are creating an alert, add a name to Alert Name.
b) Check or clear the Enable alert box to enable or disable the alert.
c) From the Severity list, select an alert severity.
In the Alert Rules list:
a) From the Job name list, select an equation such as "is not equal to" and enter the job name.
b) From the Job status is list select a job status such as "paused".
From the Alert Action list:
a) From the Action list, select an action.
For an action to appear in the Action list, you must activate the action using the Alert Setup portlet.
b) Specify a time in the Do not run twice in __ minutes box.
c) In the Message box, type the message that should be sent when the alert criteria that you have
configured are met.
Click Apply.
To reset fields to their default settings, click Reset.
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
Chapter 2: Teradata BAR Portlets
BAR Setup
Configuring Repository Threshold Alerts
You can configure a repository threshold alert using the BAR Setup portlet.
1. In the BAR Setup portlet, click Alerts.
2. Under Alert Types, click Repository Threshold.
3.
4.
5.
6.
7.
When disabled,
appears next to the alert.
In the Alert Details list:
a) Check or clear the Enable alert box to enable or disable the alert.
b) From the Severity list, select an alert severity.
Under Alert Rules, select a status from the Repository Threshold Status list.
In the Alert Action list:
a) From the Action list, select an action.
For an action to appear in the Action list, you must activate the action using the Alert Setup portlet.
b) Specify a time in the Do not run twice in __ minutes box.
c) In the Message box, type the message that should be sent when the alert criteria that you have
configured are met.
Click Apply.
To reset fields to their default settings, click Reset.
Configuring Media Server Alerts
You can configure a media server alert using the BAR Setup portlet.
1. In the BAR Setup portlet, click Alerts.
2. In the Alert Types list, click Media Server.
3. In the Media Servers list, select a media server.
When disabled,
appears next to the alert.
4. [Optional] Click
to copy alert settings from one or more media servers:
a) In the Copy Alerts Settings box, select All media servers or specific media servers from the Copy
Setting To list.
b) Click OK.
5. In the Alert Details list:
a) Check or clear the Enable alert box to enable or disable the alert.
b) From the Severity list, select an alert severity.
6. In the Alert Rules list, specify media server consumer availability by selecting Available or Unavailable.
7. From the Alert Action list:
a) From the Action list, select an action.
For an action to appear in the Action list, you must activate the action using the Alert Setup portlet.
b) Specify a time in the Do not run twice in __ minutes box.
c) In the Message box, type the message that should be sent when the alert criteria that you have
configured are met.
8. Click Apply.
9. To reset fields to their default settings, click Reset.
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
37
Chapter 2: Teradata BAR Portlets
BAR Operations
Configuring System Alerts
You can configure a system alert using the BAR Setup portlet.
1. In the BAR Setup portlet, click Alerts.
2. Under Alert Types, click System.
3. In the Systems list, select a system.
4.
5.
6.
7.
8.
When disabled,
appears next to the system.
The Alert Details list displays details for the system.
In the Alert Details list:
a) Check or clear the Enable alert box to enable or disable the alert.
b) From the Severity list, select an alert severity.
In the Alert Rules list:
a) Select all conditions or any condition from the Alert when matching list.
b) If you want to delete a Condition list, click to the right of the list.
c) If you want to add a Condition list, click to the right of the list.
d) In the Condition list, select a system status and whether or not system consumers are available.
From the Alert Action list:
a) From the Action list, select an action.
For an action to appear in the Action list, you must activate the action using the Alert Setup portlet.
b) Specify a time in the Do not run twice in __ minutes box.
c) In the Message box, type the message that should be sent when the alert criteria that you have
configured are met.
Click Apply.
To reset fields to their default settings, click Reset.
BAR Operations
The BAR Operations portlet allows you to manage the following functions:
• Creating, managing, and submitting jobs
• Viewing job status and history
Job types include backup, restore, and analyze.
About the Saved Jobs View
The Saved Jobs view displays a table of Active, Retired, or Repository jobs, allows you to view the job status
and job actions available for each job, and enables you to create a new job.
Notice:
Repository jobs are only visible to users with BAR administrator privileges.
Show Jobs Menu
38
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
Chapter 2: Teradata BAR Portlets
BAR Operations
Filters the Saved Jobs view for Active, Retired, or Repository jobs. A job state of active means the job is
ready to be run for a backup, restore, or analyze. A job state of retired means the job cannot be run. A
repository job is specific to a DSC repository backup, restore, or analyze job.
New Job Button
Creates a backup, restore, or analyze job. Can only be used when the Show Jobs Menu is showing Active
Jobs.
Job Status Filter Bar
Provides a count of the jobs by status and allows you to filter the Job Table. The filter bar is only in use
when the Show Jobs Menu is showing Active Jobs.
Overflow Menu
Shows a list of job statuses. You can select another job status to replace a status on the Job Status Filter
Bar.
Filters
Displays data by showing only rows that match your filter criteria. Click the column headers to sort data
in ascending or descending order.
Saved Jobs Table
Lists the job name, type, status, start time, end time, size and elapsed time of the job.
Table Actions
Configure Columns allows you to select, lock, and order the displayed columns.
Export creates a .csv file containing all available data.
About the Job Status Filter Bar
The job status filter bar allows you to filter on a specific job status in the Saved Jobs view.
The job status filter bar buttons provide a count of job runs for each status category. Click on any button to
filter for the selected job status or select a job status from the list. For example, click Complete to display all
jobs that have run to completion.
You can select a job status from the Overflow Menu to replace a job status currently showing on the Job
Status filter bar.
All
All jobs currently saved in the BAR repository
Complete
Jobs which have run to completion
Running
Jobs that are in progress
Failed
Jobs which have failed to run to completion
Queued
A job that is waiting for resources to become available before it can begin running
Aborted
A job run that has been stopped by a user prior to completion
Aborting
A job run that is in the process of being stopped by a user prior to completion
Warning
Jobs which run to completion, but received warning messages regarding possible issues during the run
Not Responding
A job for which DSC has not received any status for 15 minutes
New
A job that has never been run, or a job in which existing save sets were deleted because of deletion
guidelines in the data retention policy
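The counts shown on the filter bar buttons are simple per-status tallies over the saved jobs. A minimal sketch of that tallying, using the status categories documented above (the job list itself is a hypothetical example):

```python
from collections import Counter

def status_counts(jobs):
    """Tally jobs per status, mirroring the counts on the filter bar buttons.

    `jobs` is a list of (name, status) pairs; the statuses are the
    categories described above (Complete, Running, Failed, and so on).
    """
    counts = Counter(status for _, status in jobs)
    counts["All"] = len(jobs)  # the All button counts every saved job
    return counts

# Hypothetical saved jobs
jobs = [("j1", "Complete"), ("j2", "Running"),
        ("j3", "Complete"), ("j4", "Failed")]
```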
About Backup Jobs
Backup jobs archive objects from a Teradata source system to a target group. Target groups are defined by a
BAR administrator in the BAR Setup portlet or command-line interface.
The BAR Operations portlet allows you to migrate the object list from an existing ARC script into a backup
job. Objects in that list that exist in the specified source system are automatically selected in the object
browser when a new job is created from the migrated ARC script.
When you run a backup job for the first time or when you change the target group for a backup job, all data
from the specified objects is archived. After this initial full backup, you can choose the backup type:
• Full: Archives all data from the specified objects
• Delta: Archives only the data that has changed since the last backup operation
• Cumulative: Archives the data that has changed since the last full backup was run
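The difference between the three backup types can be sketched as a selection over object modification times. This is a simplified illustrative model, not the actual DSA change-tracking mechanism: full archives everything, delta archives changes since the most recent backup of any type, and cumulative archives changes since the last full.

```python
def objects_to_archive(backup_type, changes, last_full, last_backup):
    """Return the change timestamps a backup run would archive.

    Simplified model of the three backup types described above.
    `changes` is a list of modification times (any comparable values);
    `last_full` / `last_backup` are the times of the last full backup
    and the most recent backup of any type.
    """
    if backup_type == "full":
        return list(changes)            # archive all data
    if backup_type == "delta":
        return [t for t in changes if t > last_backup]
    if backup_type == "cumulative":
        return [t for t in changes if t > last_full]
    raise ValueError(backup_type)

# Objects changed at times 1, 3, 5, 8; last full at t=2,
# most recent backup (a delta) at t=6 -- hypothetical values.
changes = [1, 3, 5, 8]
```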
Creating a Teradata Backup Job
1. From the Saved Jobs view:
a) Click New Job.
b) Select Teradata as the system type.
c) Select Backup as the job type.
d) [Optional] If you are migrating objects from an existing ARC or TARA script, click Browse by
Migrate ARC script and select the script.
e) Click OK.
2. From the New Backup Job view:
a) Enter a unique Job Name.
b) Select a Source System.
c) In Enter System Credentials, enter a user name and password for the system.
Account String information is not required.
d) Click OK.
3. From the New Backup Job view:
a) Select a Target Group.
b) [Optional] Enter a job description.
c) Select objects from the source system in the Objects tab.
4. [Optional] To verify the parent and objects selected, click the Selection Summary tab.
Note:
Size information is not available for DBC only backup jobs. N/A displays as the size value for DBC
only backup jobs.
5. [Optional] To adjust job settings for the job, click the Job Settings tab.
Settings can include specifying whether a job continues or aborts if an access rights violation is
encountered on an object.
6. Click Save.
The newly created backup job is listed in the Saved Jobs view.
7. To run the backup job:
a) Click the icon next to a job.
b) Select Run.
Related Information
Job Settings
ARC Script Migration
Changing Job Permissions
ARC Script Migration
The Migrate ARC script allows users to import an existing ARC or TARA script into the BAR Operations
portlet. Only the set of objects that define the backup job are migrated into the portlet. Information about
target media, number of streams, and connection parameters will not migrate into the portlet from the ARC
scripts.
The ARC script syntax EXCLUDE is supported at the object level. EXCLUDE is also supported at the database
level, but a database range is not allowed for exclusion. If any objects in the script do not exist in the
selected source system, they are not included in the new job.
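The exclusion rules above can be sketched in a few lines. This is an illustrative model of the migration filtering, not DSC's actual implementation: single databases and individual objects can be excluded, and objects use hypothetical "database.object" names.

```python
def apply_excludes(objects, exclude_dbs=(), exclude_objects=()):
    """Drop excluded items from a migrated object list.

    Sketch of the EXCLUDE behavior described above: exclusion is
    supported for single databases and individual objects, but not
    for database ranges. `objects` holds "database.object" names.
    """
    kept = []
    for name in objects:
        db, _, obj = name.partition(".")
        if db in exclude_dbs or name in exclude_objects:
            continue  # excluded at the database or object level
        kept.append(name)
    return kept

# Hypothetical migrated object list
objs = ["sales.orders", "sales.items", "hr.staff"]
```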
About Restore Jobs
Restore jobs are based on successful executions of Teradata backup jobs, and can only be created for a
backup job that has successfully run to completion.
You can define a restore job to always restore the latest version of a backup save set or you can specify a save
set version. A target Teradata system must be selected in order to define the restore job. By default, all
objects from the save set are included in the restore job but the selections can be modified.
Creating a Teradata Restore Job
1. From the Saved Jobs view, do one of the following:
Option
Description
Create a new job
a. Click New Job.
b. Select Teradata as the system type.
c. Select the Restore job type and click OK.
Create a job from a backup job save set
a. Click the icon for a backup job that has completed.
b. Select Create Restore Job to create a restore job from the selected save set.
Create a job from migrated job metadata
Note:
Migrated job metadata results when tapes and metadata information that pointed to a specific backup job were migrated from one DSA environment to a different one.
a. Click the icon for a migrated job.
b. Select Create Restore Job to create a restore job from the selected migrated job.
2. Enter a unique Job Name.
3. If the source set you want to use is not already displayed or you want to change it, click Edit, select
Specify a version, and select the save set to use.
Note:
If the selected job is retired, the Save Set Version information is not selectable.
4. Select the Destination System and enter the Credentials associated with it.
5. Select the Target Group.
6. [Optional] Add a job description.
7. [Optional] To change the objects selected, clear the checkboxes and select others in the Objects tab.
8. [Optional] If you have created a backup job on the TD_SERVER_DB database, and the job contains a
SQL-H object, you can map the restore job to a different database:
a) Select the SQL-H object in the Objects tab.
b) Click the icon next to the SQL-H object.
c) In the Settings box, map the restore job to a different database.
9. [Optional] To verify the parent and objects selected, click the Selection Summary tab.
Note:
Size information is not available for DBC only backup jobs. N/A displays as the size value for DBC
only backup jobs.
10. [Optional] To adjust job settings for the job, click the Job Settings tab.
Settings can include specifying whether a job continues or aborts if an access rights violation is
encountered on an object.
Note:
The Disable fallback option is not available for restore jobs when Run as copy is not set. The icon
appears when the mouse pointer is hovered over the checkbox.
11. Click Save.
12. To run the newly created restore job, in the Saved Jobs view:
a) Click the icon next to a job.
b) Select Run.
Related Information
Job Settings
Changing Job Permissions
About Analyze Jobs
An analyze job uses either a read-only or validate analysis method for each job. An analyze read-only job
reads the data from the media device to verify that reads are successful. An analyze validate job sends the
data to the AMPs, where it is interpreted and examined but not restored.
Creating a Teradata Analyze Job
1. From the Saved Jobs view, do one of the following:
Option
Description
Create a new job
a. Click New Job.
b. Select the Analyze job type and click OK.
Create a job from a backup job save set
a. Click the icon next to a backup job that has completed.
b. Select Create analyze job to use the selected save set for the analyze job.
2. From the New Analyze Job view:
a) Enter a unique Job Name.
b) [Optional] Select an analysis method, if not already specified.
Note:
If you change the analysis method to Read and validate, you must provide the Destination
system and its credentials.
c) Select the Job to analyze.
d) Select the Target Group.
e) [Optional] Provide a job description.
3. Specify a save set version from the Save Set Version tab.
Note:
If the selected job is retired, the Save Set Version information is not selectable.
4. [Optional] To adjust job settings for the job, click the Job Settings tab.
5. Click Save.
6. To run the newly created analyze job, in the Saved Jobs view:
a) Click the icon next to a job.
b) Select Run.
Related Information
Job Settings
About the Job Settings Tab
The Job Settings tab allows you to change the default job settings that are applied to backup,
restore, and analyze jobs during job creation.
Field
Description
Job Type
Automatically retire
Determines whether a job is retired automatically.
Backup
Restore
Analyze
Never
Default. The job is not retired automatically and must be retired manually.
After
Specifies the time, in days or weeks, that a job is
automatically retired.
Backup Method
Determines the type of backup to perform.
Backup
Offline
Default. Backs up everything associated with each
specified object while the database is offline. No
updates can be made to the objects during the
backup job run.
Online
Backs up everything associated with each specified
object and initiates an online archive for all objects
being archived. The online archive creates a log that
contains all changes to the objects while the archive
is prepared.
Dictionary Only
Backs up only the dictionary and table header
information for each object.
Note:
You cannot use incremental backup on a
Dictionary Only backup job.
No sync checkbox
Determines where synchronization is done for the job.
Only available for online backup jobs.
Backup
• Default is unchecked. If unchecked, synchronization
occurs across all tables simultaneously. If you try to
run a job that includes objects that are already being
logged, the job aborts.
• If the checkbox is selected, there can be different
synchronization points. If you try to run a job that
includes objects that are already being logged, the job
runs to completion and a warning is returned.
Logging Level
Determines the types of messages that the database job
logs.
Backup
Restore
Analyze_validate
Error
Default. Enables minimal logging. Provides only
error messages.
Warning
Adds warning messages to error message logging.
Info
Provides informational messages with warning and
error messages to the job log.
Debug
Enables full logging. All messages, including Debug,
are sent to the job log.
Job Permissions
• If the job permissions have not been defined, the
permissions show as not shared.
• If job permissions have been defined, the cumulative
number of users and roles with shared permissions
are shown.
Backup
Restore
Analyze
Click Edit to open the Change Permissions dialog box if
job permissions need to be changed.
Abort On Access
Rights Violation
If the box is not checked, allows the Teradata backup or
restore job to proceed even when a DUMP access rights
violation is encountered on an object.
If the box is checked, the job aborts when the access
rights violation is encountered.
Backup
Restore
Note:
This checkbox appears only when the following are true:
• The source system is Teradata Database version 15.10.01.00 or later
• Source system and target group credentials have been validated
• A Teradata backup or restore job is being created or edited
Query Band
Allows tagging of sessions or transactions with a set of
user-defined name-value pairs to identify where a query
originated. These identifiers are in addition to the
current set of session identification fields, such as user
ID, account string, client ID, and application name.
Backup
Restore
Note:
Valid query band values are defined on the database.
Note:
DSA creates query bands for restore jobs when an
override temperature or block level compression
option has a value other than DEFAULT. You can
enter different query bands in the bottom text box.
Override temperature
Determines the temperature at which data is restored.
Restore
DEFAULT
Default. This data is restored at the default
temperature setting for the system.
HOT
This data is accessed frequently.
WARM
This data is accessed less frequently.
COLD
This data is accessed least frequently.
Block Level
Compression (BLC)
Defines data compression used.
Restore
DEFAULT
Default. Applies same data compression as the
backup job if allowed on the target system.
ON
Compress data at the block level if allowed on the
target system.
OFF
Restore the data blocks uncompressed.
Disable Fallback
checkbox
Fallback protection means that a copy of every table row
is maintained on a different AMP in the configuration.
Fallback-protected tables are always fully accessible and
are automatically recovered by the system.
Restore
Note:
The Disable fallback option is not available for restore
jobs when Run as copy is not set. The icon
appears when the mouse pointer is hovered over the
checkbox.
• Default is unchecked. If unchecked, restored tables
are recreated with fallback automatically enabled.
• Checked is not available.
Run as copy checkbox
Allows the restore to run as a copy job. A copy job
assumes the destination database is not the original
database system on the backup job and the database id is
different. A copy job is used when restoring to a database
with a different internal database id than the one in the
backup save set. A copy is not used when the database
system is the original database and the database id
matches the one found in the backup save set.
Restore
• Default is unchecked.
• If checked, the restore runs as a copy.
DBC Credentials
Click Set Credentials to open the Enter Credentials
dialog box if DBC credentials need to be established.
Restore
About the Objects Tab
The Objects tab displays the object browser. The object browser provides controls to view a list of objects
on a source Teradata Database system, archive objects to a target group, and restore those objects to a
target system. The object browser simplifies the process of viewing and selecting Teradata Database
objects for backup and restore jobs.
Teradata Database objects display as a hierarchically-organized tree. You can use filtering to limit the
number of objects displayed in the tree. Expand a branch of objects in the tree by clicking the icon next to
the object type.
The following table lists general controls in the object browser.
Control
Action
Object Icon
Identifies the database object type. Hovering over the object icon will show the
object type and full object name.
Object Type Filter
Enables you to select the type of object to display.
About the Save Set Version Tab
The Save Set Version tab allows you to select a save set version against which you can run your analyze job.
You can select the latest version or you can specify a version if more than one save set exists.
When you specify a version, selecting Configure Columns from the Table Actions menu allows you to
select, lock, and designate the order of columns.
The following columns are available:
Column Header
Description
BACKUP DATE
Start date and time job began
OBJECTS
A count of database objects copied during the job
SIZE
Aggregate size for the objects processed
BACKUP TYPE
Full, delta, or cumulative backup associated with the save set
TYPE
Backup
TARGET GROUP
Target group of the backup job
COMPLETION DATE
Date and time the backup job was completed
LOCATION
Location where the objects or 3rd-party target group were backed up
JOB PHASE
Job phase associated with the save set
About the Selection Summary Tab
The Selection Summary tab is a tabular view of the objects explicitly selected in the Select Objects tab. Only
selected objects and object settings are displayed. You can select, lock, and designate the order of columns
from the Table Actions menu.
The following columns are available:
Column Header
Description
PARENT
Parent object of the selected object
OBJECT
Name of the selected object
TYPE
Object type of the selected object
SIZE
Object size of the selected object
Note:
Size information is not available for DBC only backup jobs. N/A displays as
the size value for DBC only backup jobs.
RENAME
Name to which the selected object will be renamed
REMAP
Database to which the selected object will be remapped
Managing Jobs
This section outlines the tasks available for managing BAR Operations jobs.
1. Choose the type of job you need to create:
Job Type
Description
Backup
To create a new backup job, refer to:
Creating a Teradata Backup Job
Restore
To create a new restore job or a restore job from a backup job save set, refer to:
Creating a Teradata Restore Job
Analyze
To create a new analyze job or an analyze job from a backup job save set, refer to:
Creating a Teradata Analyze Job
2. Select and define job settings.
3. Change job permissions if anything about the job changed.
4. Monitor a running job's status and view job phase log updates.
5. Abort a job from the Saved Jobs view or the Job Status view.
6. Retire a job from the Saved Jobs view.
7. Activate a job from the Retired Jobs view.
8. Delete an active, saved, or retired job.
Running a Job
1. Click the icon next to a job.
2. Select Run.
If you selected a backup job that can only be run as a full backup (for example, a new backup job, or a job
with a new target group), the FULL backup runs automatically.
3. If you did not choose a backup job that can only be run as a full backup, select one of the following and
click Run.
Backup Type
Description
Full
Archives all data from the specified objects
Delta
Archives only data that has changed since the last backup
Cumulative
Archives the data that has changed since the last full backup, consolidating multiple
delta or cumulative backups
4. If you are running a repository restore job, a dialog confirms that you want to restore the repository from
the latest repository backup save set. Click OK.
The BAR Setup and BAR Operations portlets are unavailable while the repository restore job is
running.
a) After the repository restore job successfully completes, perform a tpareset on the repository BAR
server from the Linux command prompt.
About Editing Jobs
You can edit Teradata backup, restore, and analyze jobs. The fields that you can edit depend on the type of
job you are editing.
Editing a Teradata Backup Job
You can edit any backup job, whether or not the job has been previously run.
1. From the Saved Jobs view:
a) Click the icon next to the backup job you want to edit.
b) Select Edit.
2. From the Edit Backup Job view:
a) [Optional for a backup job that has not been run] Change the Source System and Credentials.
Note:
After a backup job has been run successfully and has a save set, the source system for the job
cannot be modified. For jobs that have not been run, changing the system or credentials can result
in a mismatch between the selected objects and the available database hierarchy, which could
cause the job to fail.
b) [Optional] Change the Target Group.
3. [Optional] In the Objects tab, change the objects from the source system.
4. [Optional] To verify the parent and objects selected, click the Selection Summary tab.
Note:
Size information is not available for DBC only backup jobs. N/A displays as the size value for DBC
only backup jobs.
5. [Optional] To adjust job settings for the job, click the Job Settings tab.
Settings can include specifying whether a job continues or aborts if an access rights violation is
encountered on an object.
6. Click Save.
Related Information
Job Settings
Changing Job Permissions
Editing a Teradata Restore Job
1. From the Saved Jobs view:
a) Click the icon next to the restore job you want to edit.
b) Select Edit.
2. In the Edit Restore Job view:
a) [Optional] Change the Destination System and Credentials.
b) [Optional] Change the Target Group.
c) [Optional] Add a job description.
3. [Optional] To change the objects selected, clear the checkboxes and select others in the Objects tab.
4. [Optional] To verify the parent and objects selected, click the Selection Summary tab.
Note:
Size information is not available for DBC only backup jobs. N/A displays as the size value for DBC
only backup jobs.
5. [Optional] To adjust job settings for the job, click the Job Settings tab.
Settings can include specifying whether a job continues or aborts if an access rights violation is
encountered on an object.
Note:
The Disable fallback option is not available for restore jobs when Run as copy is not set. The icon
appears when the mouse pointer is hovered over the checkbox.
6. Click Save.
Related Information
Job Settings
Changing Job Permissions
Editing a Teradata Analyze Job
1. From the Saved Jobs view:
a) Click the icon next to the analyze job you want to edit.
b) Select Edit.
2. From the Edit Analyze Job view:
a) [Optional] Change the analysis method.
b) [Optional] Change the Destination System and Credentials.
c) [Optional] Change the Target Group.
d) [Optional] Add a job description.
3. [Optional] Change the save set version from the Save Set Version tab.
4. [Optional] To adjust job settings for the job, click the Job Settings tab.
5. Click Save.
Related Information
Job Settings
Changing Job Permissions
About Cloning Jobs
A cloned job copies the parameters of an existing job; however, the cloned job requires a different name.
Any type of job can be cloned.
Cloning a Teradata Backup Job
1. From the Saved Jobs view:
a) Click the icon next to the backup job you want to clone.
b) Select Clone.
2. In the Clone Backup Job view:
a) Enter a unique Job Name.
b) [Optional] Change the Source System and Credentials.
Note:
Changing the system or credentials can result in a mismatch between the selected objects and the
available database hierarchy, which could cause the job to fail.
c) [Optional] Change the Target Group.
d) [Optional] Add a job description.
3. [Optional] To change the objects selected, clear the checkboxes and select others in the Objects tab.
4. [Optional] To verify the parent and objects selected, click the Selection Summary tab.
Note:
Size information is not available for DBC only backup jobs. N/A displays as the size value for DBC
only backup jobs.
5. [Optional] To adjust job settings for the job, click the Job Settings tab.
6. Click Save.
Related Information
Job Settings
Cloning a Teradata Restore Job
1. From the Saved Jobs view:
a) Click the icon next to a job.
b) Select Clone.
2. From the Clone Restore Job view:
a) Enter a unique Job Name.
b) [Optional] To change the Source Save Set, click Edit, select Specify a version, and select the save set
to use.
Note:
If the selected job is retired, the Save Set Version information is not selectable.
c) [Optional] Change the Destination System and the Credentials associated with it.
d) [Optional] Change the Target Group.
e) [Optional] Change the job description.
3. [Optional] To change the objects selected, clear the checkboxes and select others in the Objects tab.
4. [Optional] To verify the parent and objects selected, click the Selection Summary tab.
Note:
Size information is not available for DBC only backup jobs. N/A displays as the size value for DBC
only backup jobs.
5. [Optional] To adjust job settings for the job, click the Job Settings tab.
Note:
The Disable fallback option is not available for restore jobs when Run as copy is not set. The icon
appears when the mouse pointer is hovered over the checkbox.
6. Click Save.
7. To run the cloned restore job, in the Saved Jobs view:
a) Click the icon next to a job.
b) Select Run.
Related Information
Job Settings
Cloning a Teradata Analyze Job
1. From the Saved Jobs view:
a) Click the icon next to a job.
b) Select Clone.
2. Enter a unique Job Name.
3. [Optional] Change the Analysis Method.
4. [Optional] Change the Target Group.
5. [Optional] Change the job description.
6. [Optional] From the Save Set Version tab, change the save set version.
7. [Optional] To adjust job settings for the job, click the Job Settings tab.
8. Click Save.
9. To run the cloned analyze job, in the Saved Jobs view:
a) Click the icon next to a job.
b) Select Run.
Related Information
Job Settings
Retiring a Job
You can retire a job from the Saved Jobs view if the job is not in Running, New, Aborting, Not Responding,
or Queued status.
When you retire a job, the job moves from the Active Jobs view to the Retired Jobs view.
Note:
A retired job is automatically deleted if this setting is configured through the BAR Setup portlet or
DSA command-line interface. A warning message reporting the deletion date appears before the job is
retired.
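The reported deletion date is the retirement date plus the configured retention window. A minimal sketch of that arithmetic, assuming the retention window is expressed in days (a hypothetical parameter; the actual BAR Setup setting may be expressed differently):

```python
from datetime import date, timedelta

def deletion_date(retire_date, retention_days):
    """Compute when a retired job would be auto-deleted.

    Assumes a retention window configured in days (hypothetical
    parameter for illustration).
    """
    return retire_date + timedelta(days=retention_days)

# Example: a job retired on 2016-12-01 with a 30-day window
when = deletion_date(date(2016, 12, 1), 30)
```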
1. Click the icon next to a job.
2. Select Retire.
3. Click OK to confirm the job retirement.
Activating a Job
You can activate a job from the Retired Jobs view.
When you activate a job, the job is moved from the Retired Jobs view to the Active Jobs view.
1. Click the icon next to a job.
2. Select Activate.
3. Click Yes to confirm the job activation.
Deleting a Job
You can immediately delete a job from the Retired Jobs view. You can also delete a job from the Saved Jobs
view, if the job has a status of New.
1. Click the icon next to a job.
2. Select Delete.
Note:
If you are attempting to delete a backup job with dependent restore or analyze jobs, a message
displays with the dependent job names that must be deleted before you can delete the backup job.
3. Click Yes to confirm the job deletion.
The job and job history will be deleted immediately and cannot be restored.
Aborting a Job
You can abort a running job from either the Saved Jobs or the Job Status view.
1. Choose the view to select the job you need to abort:
View
Description
Saved Jobs
a. Click the icon next to a job.
b. Select Abort.
Job Status
a. Click Abort.
2. Click OK to confirm you want to abort the job run.
Changing Job Permissions
When you create a job, you can set permissions that allow some users or roles to run the job and some users
or roles to edit the job. After a job is created, you can change permissions for users or roles.
To designate job permissions, you must be the owner of the job or the DSA administrator.
1. From the Saved Jobs view, create or edit a job.
2. Click the Job Settings tab and then click Edit.
3. Select users and roles to grant access.
Option
Description
Users
a. In the Available Users box, select one or more users and click the arrow to move them to the
Selected Users box.
b. Select a user and grant access to Run or Edit.
Roles
a. In the Available Roles box, select one or more roles and click the arrow to move them to the
Selected Roles box.
b. Select a role and grant access to Run or Edit.
4. Click OK.
Viewing Job Status
Depending on the type of the job and the run status, you can view the details from the Saved Jobs or Job
History view.
1. Do one of the following, depending on the run status of the job:
View Option
Description
Status of running job
or the most recent job
a. Click the Saved Jobs tab.
b. Click the icon next to a job.
c. Select Job status.
If the job is currently running, you will see the Streams tab and a progress bar
indicating the percentage of the job completed. For running or completed
jobs, the Log tab displays details about the objects included in the job.
Status logs of
previously run jobs
a. Click the Job History tab.
b. Click the row for the job.
The Log tab displays details about the objects included in the job.
2. [Optional] To view phase details, click View Phase Log.
• The dictionary and data phase details are available for backup and analyze_validate jobs.
• The dictionary, data, build, and postscript phase details are available for restore jobs.
• Click OK to return to the Job Status screen.
3. [Optional] To view job history, including start and end times, duration, and objects included, click View
History.
4. [Optional] To view save set details for backup jobs, including backup date, objects, size, and backup type,
click View Save Sets.
5. [Optional] To view error log details for failed analyze read jobs, including error code, error message,
warning code, and warning message, click View Error Log.
About the Log Tab
The Log tab displays details about database objects for running and completed backup, restore, and
analyze_validate jobs. The tab is available when viewing a job status.
You can filter the results in the Log tab by column. For example, to display only object names beginning
with "order", type "order" in the Object Name box and press Enter.
You can select, lock, and designate the order of columns from the Table Actions menu.
Field
Description
Job Type
Start Time
Start date and time job began
Analyze_Read
End Time
Date and time job ended
Backup
Restore
Analyze_Read
Analyze_Validate
File Name
The backup files that contain the save set
Analyze_Read
Object Name
Name of the object being backed up, restored, or validated
Backup
Restore
Analyze_Validate
Object Type
Type of object being backed up, restored, or validated
Backup
Restore
Analyze_Validate
Phase
The job phase can be dictionary, data, build, or postscript
Backup
Restore
Analyze_Read
Analyze_Validate
Status
The job status of the object
Backup
Restore
Analyze_Read
Analyze_Validate
Parent Name
Specifies the name of the parent of the object being backed up,
restored, or validated
Backup
Restore
Analyze_Validate
Byte Count
Total number of bytes copied
Backup
Restore
Analyze_Read
Analyze_Validate
Row Count
Total number of rows copied
Backup
Restore
Error Code
Specifies the error code encountered
Backup
Restore
Analyze_Read
Analyze_Validate
Warning Code
Specifies the warning code encountered
Backup
Restore
Analyze_Read
Analyze_Validate
About the Streams Tab
The Streams tab displays details about the job streams during a backup, restore, or analyze job. The tab is
available when viewing the job status of a running job. You can select, lock, and designate the order of
columns from the Table Actions menu.
Field
Description
Node
Specifies the node where the job stream is running
Stream
Numerically identifies a job stream
Object
Name of object being backed up, restored, or analyzed
Average Stream Rate
(Data phase)
For a backup job, the number of bytes reported by DS Main since the stream
started.
For a restore and analyze_validate job, the number of bytes reported by the DSA
Network Client (ClientHandler) since the stream started.
About the Phase Log
The Phase Log displays details about database objects in running and completed backup, restore, and
analyze jobs. This information is read-only. The Phase Log is available when viewing the status of a job or a
repository job.
• Job Phase: The job phase to which the information pertains. Backup jobs have two phases: Dictionary and Data. In addition to Dictionary and Data, restore jobs have Build and Postscript phases. Job types: Backup, Restore, Analyze_Validate.
• Objects: The number of objects processed during the phase. Job types: Backup, Restore, Analyze_Validate.
• Start: Date and time the phase began. Job types: Backup, Restore, Analyze_Validate.
• End: Date and time the phase ended. Job types: Backup, Restore, Analyze_Validate.
• Average speed (Data phase): For backup jobs, average speed = sum of bytes reported by DSMain for all objects / time interval from the time the first byte of data is received from DSMain to the time the last object is backed up, plus the refresh rate (30 seconds by default). The average backup rate includes tape mount, positioning, and close time. For restore jobs, average speed = sum of bytes reported by DSMain for all objects / time interval from first receipt of data for the first object from the DSA Network Client through data transfer for the last object, plus the refresh rate (30 seconds by default). The average restore rate includes tape mount, positioning, and close time, and the time for the concurrent table index build process while the data is being restored. The time for any remaining table index builds after the restore data transfer of the last object completes is not included. Job types: Backup, Restore, Analyze_Validate.
• Size (Data phase): Size of the data processed during the phase. Job types: Backup, Restore, Analyze_Validate.
Viewing Save Sets
You can view all of the save sets associated with a given backup job.
1. Do one of the following, depending on the tab you are currently viewing:
   • From the Saved Jobs tab, click the icon next to a job and select Job status.
   • From the Job History tab, click the row of the job.
2. Click View Save Sets.
The Save Sets view lists the save sets for the selected job. You can select, lock, and designate the order of columns from the Table Actions menu.
• BACKUP DATE: Date and time the backup job started.
• OBJECTS: Number of objects processed.
• SIZE: Aggregate size of the objects processed.
• BACKUP TYPE: Full, delta, or cumulative backup.
• TYPE: Job type associated with the save set.
• TARGET GROUP: Target group associated with the save set.
• COMPLETION DATE: Date and time the backup job finished.
• LOCATION: Location where the objects or third-party target group were backed up.
• JOB PHASE: Job phase associated with the save set.
Viewing Backup IDs
You can view the backup IDs for a given job name and save set.
1. Do one of the following, depending on the run status of the job:
   • For the status of a running job or the most recent job, click the Saved Jobs tab, click the icon next to a job, and select Job status.
   • For the status logs of previously run jobs, click the Job History tab and click the row for the job.
2. Click View Save Sets.
3. Click the icon next to a save set and select Backup IDs.
The backup IDs for the save set are listed. You can select, lock, and designate the order of columns from the Table Actions menu.
• BACKUP ID: Backup ID for the given job name and save set.
• FILE NAME: Name of the file associated with the backup ID.
• FILE SIZE: Size of the file associated with the backup ID.
• DATE: Date and time the file was created.
Note:
In the portlet, you can view backup IDs only for NetBackup jobs. To query and display save sets generated by a disk file system or DD Boost, use the query_backupids and list_query_backupids commands.
About the Job History View
The Job History view displays a table of BAR jobs that have been run, and allows you to view the details of
the last job run.
Filters
Displays only rows that match your filter criteria. Click the column headers to sort data in ascending or descending order.
Job Table
Lists the job name, type, status, start time, end time, size, and duration of the job.
Table Actions
Configure Columns allows you to select, lock, and order the displayed columns. Export creates a .csv file containing all available data.
Viewing Job History
The Job History tab of the BAR Operations portlet displays a list of all job executions. You can view more
detailed information about a single job execution from either the Job History or Saved Jobs view.
1. Click the job row to display job status.
2. Click View History.
The Job History for the job appears. You can select, lock, and designate the order of columns from the Table Actions menu.
• START: Start date and time the job began.
• END: Date and time the job ended.
• DURATION: Total time the job ran.
• STATUS: The job status of the job run.
• OBJECTS: A count of database objects copied during the job.
• SOURCE: Source system (backup) of the job.
• DESTINATION: For a backup job, the target group to which the data is backed up. For a restore job, the Teradata Database system to which the data is restored.
CHAPTER 3
Teradata DSA Command-Line Interface
Command-Line Interface Overview
DSA provides a command-line interface that supports the same actions as the BAR Setup and BAR Operations portlets, plus additional commands that are not available in the portlets. The DSA command-line interface includes setup commands for configuring, updating, and deleting targets, sources, and the DSC repository, as well as operations commands for creating, running, and monitoring jobs.
You can run the commands on an ad hoc basis, as an alternative to using the portlets. You can switch
between the command-line and portlet interfaces. For example, you can create jobs in the portlet and then
view them using the list_jobs command. Or you can export the XML file associated with a job you
created in the portlet, and update the job definition using the command line. You can also develop scripts to
automate DSA commands and to use with UNIX cron or other job-scheduling applications. For example,
you might want to have a backup job run automatically every night at 1 a.m.
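For example, a crontab entry along these lines could run a backup job nightly at 1 a.m. The job name, dsc location, and log path are illustrative assumptions, not values from this guide:

```shell
# Hypothetical crontab entry: run the DSA job "nightly_backup" at 1:00 a.m.
# daily. Adjust the job name, dsc location, and log path for your site.
0 1 * * * dsc run_job -n nightly_backup >> /var/log/dsa/nightly_backup.log 2>&1
```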
Each command has a number of parameters that can be specified directly in the command line to run the
command. In addition, many of the commands require that additional information necessary to carry out
the commands be specified in an XML file. Sample XML files for system and component configuration and
for job definition are located at $BARCMDLINE_ROOT/samples. The sample files include helpful comments
and show the available settings for the commands to which they correspond.
Accessing the DSA Command-Line Interface
After the DSA command-line interface package is successfully installed, the command line can be accessed
from the Linux console. You can run DSA commands from any file system location without navigating to
the root directory.
Note:
If you are not the root user, you must be in the users group to run the DSA command-line interface.
Notice:
Enabling General > Security Management in the BAR Setup portlet increases security for the DSA
environment. This setting requires users to provide Viewpoint credentials to execute some commands
from the DSA command-line interface.
1. Type dsc from any directory and press Enter.
Accessing DSA Command Help
You can view a list and brief description of all of the commands available in the DSA command-line
interface. The basic syntax and usage of commands is shown for each command.
You can also specify a command name to view more detailed information about the command, including a
syntax example and a list and description of each of the parameters associated with the command.
1. Type dsc help in the command line, and do one of the following:
   • To view a list of all of the command names, press Enter.
   • To view information about a particular command, add the command name and press Enter.
For example, for information about the create_job command, type dsc help create_job and press Enter. The following is displayed:
NAME:
create_job - Create Job Command
DESCRIPTION:
Creates a DSA job based on the file, with user modifications from parameters
below.
User will be required to authenticate username and password to the source/
target system on the console.
EXAMPLE:
create_job -n|-name job1 -f|-file parameters.xml
PARAMETERS:
• n|name (example: job1): (Optional) Name for the job; must be unique.
• d|description (example: backup web apps): (Optional) Description of the job. A multi-word description must be surrounded by quotation marks escaped with a backslash (\").
• t|type (example: restore): (Optional) Type of the job: BACKUP, RESTORE, ANALYZE_READ, or ANALYZE_VALIDATE.
• b|backup_name (example: backupWeb1): (Optional) Backup job name (only for RESTORE or ANALYZE jobs).
• v|backup_version (example: 60): (Optional) Backup version number (only for RESTORE or ANALYZE jobs). Type LATEST or 0 for the latest save set.
• f|file (example: parameters.xml): XML file to upload as the basis for the new job.
• u|user_authentication (example: user): (Required when security management is enabled) Supplies the command with the Viewpoint user.
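Combining these parameters, invocations might look as follows. The job names, file names, and version shown are illustrative, and a real restore job still requires a matching XML definition file:

```shell
# Create a backup job from an XML definition; the multi-word description
# is wrapped in backslash-escaped quotes as the parameter list requires.
dsc create_job -n job1 -f parameters.xml -t BACKUP -d \"backup web apps\"

# Create a restore job based on the latest save set of backup job "job1".
dsc create_job -n restore1 -f restore_parameters.xml -t RESTORE -b job1 -v LATEST
```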
DSA Command Types
DSA commands can be categorized into the following two major types:
• DSA setup commands for administration and configuration that allow you to create, update, and delete
targets, sources, and the DSC repository. This functionality corresponds to that provided by the BAR
Setup portlet.
• DSA operation commands for management and reporting that allow you to create, execute, monitor,
update, and delete jobs. This functionality corresponds to that provided by the BAR Operations portlet.
The following table lists and briefly describes the administration and configuration commands:
Table 1: DSA Configuration Commands
Command Name
Description
config_aws
The config_aws command configures the Amazon S3
Server in the DSA repository based on parameter files.
config_azure
The config_azure command configures the Microsoft
Azure Server in the DSA repository based on parameter
files.
config_dd_boost
The config_dd_boost command configures the DD
Boost server for the DSA repository.
config_disk_file_system
The config_disk_file_system command configures
the disk file system in the DSA repository.
config_general
The config_general command configures the general
settings, based on the information contained in the
parameters XML file.
config_media_servers
The config_media_servers command configures the
BAR media servers.
config_nbu
The config_nbu command configures a DSA system to
use Symantec NetBackup third-party software to back up
and restore data.
config_repository_backup
The config_repository_backup command provides
the configuration information to back up the DSC
repository.
config_systems
The config_systems command configures the DSC
settings for the Teradata system and nodes used for backup
and restore jobs. The command also sets the selector in the
targeted system for ActiveMQ.
config_target_groups
The config_target_groups command configures the
target groups based on the target type and the information
from the parameters file.
config_target_group_map
The config_target_group_map command configures
the map between target groups when restoring to a different
client configuration.
delete_component
The delete_component command deletes an existing
component based on the information in the parameters.
delete_target_group_map
The delete_target_group_map command deletes a
target group map for restoring to a different client
configuration.
disable_component
The disable_component command disables an existing
BAR component based on the component name and type.
enable_component
The enable_component command enables an existing
BAR component based on the component name and type.
export_config
The export_config command exports the current XML
definition for the requested BAR component.
export_job_metadata
The export_job_metadata command exports metadata
of a job (job definition, save sets, and targets) based on the
requested backup version. In the case of a disaster to the
DSC repository, exporting and then importing job metadata
enables job migration and restoration to a different DSA
environment.
export_repository_backup_config
The export_repository_backup_config command
exports all configurations associated with setting up a
repository backup job. This includes the system, NetBackup,
media servers, and target group associated with the target
selected in config_repository_backup.
export_target_group_map
The export_target_group_map command exports a
map between target groups for restoring to a different client
configuration.
import_repository_backup_config
The import_repository_backup_config command
imports all configurations associated with setting up a
repository backup job. This includes system, NetBackup,
media servers, and target group configurations. This
command is used to recover the DSC backup repository
after a disaster.
list_access_module
The list_access_module command lists available access
module types for a named media server.
list_components
The list_components command lists components
defined and stored in the DSC repository. If a specific
component is requested, that component definition is
displayed. Otherwise, a list of the components matching any
provided filters is displayed. Any partial component name
returns all components matching the partial input.
Note:
The Type parameter is required.
list_general_settings
The list_general_settings command lists all current
general settings.
list_repository_backup_settings
The list_repository_backup_settings command
lists all current repository backup settings.
list_system_status
The list_system_status command lists the status of a
system.
list_target_group_map
The list_target_group_map command lists the maps
between target groups for restoring to a different client
configuration.
The following table lists and briefly describes the management and reporting commands:
Table 2: DSA Operating Commands
Command Name
Description
abort_job
The abort_job command aborts an actively running job, a job in the
queue, or a job that is not responding.
activate_job
The activate_job command activates a retired job, so that it is
available.
consolidate_job_logs
The consolidate_job_logs command uploads all logs for a
completed job to a centralized location. If the job is running, the
command is rejected.
create_job
The create_job command creates a job based on the values you
specify for parameters in the command line or in the XML file.
Parameter values you enter in the command line supersede any value
you enter for those parameters in the parameters XML file.
delete_job
The delete_job command deletes a job and any data associated with
it from the DSC repository. Any logs and job history are deleted and
cannot be restored. Any backup save sets created for the job that exist
on devices managed by third-party solutions must be deleted manually
using the interface for that solution.
Note:
This command only deletes new or retired jobs.
export_job
The export_job command exports the current XML definition for
the requested job.
export_job_metadata
The export_job_metadata command exports metadata of a job
(job definition, save sets, and targets) based on the requested backup
version. In the case of a disaster to the DSC repository, exporting and
then importing job metadata enables job migration and restoration to
a different DSA environment.
import_job_metadata
The import_job_metadata command imports metadata of a job
(job definition, save sets, and targets) to the specified directory. In the
case of a disaster to the DSC repository, exporting and then importing
job metadata enables job migration and restoration to a different DSA
environment.
job_status
The job_status command gets the latest status for a job with the
given name and displays it on the screen. If the job is running, a
detailed status message is displayed. If the job is not running, the status
of the last run for that job is displayed.
job_status_log
The job_status_log command displays the latest status log for a
job with the given name if the job is running. If the job is not running,
the status log for the last run job is displayed.
list_consumers
The list_consumers command sends a request to ActiveMQ to
provide information about all of the consumers of DSA Network
Client and DSMain processes. It checks whether the selector values
match the names of the DSA Network Client and DSMain systems and
that the processes are running.
list_jobs
The list_jobs command lists jobs defined and stored in the DSC
repository. If a specific job is requested, that job definition is displayed.
Otherwise, a list of job names matching any provided filters is
displayed. If no parameters are provided, a list of jobs (excluding
backup repository jobs) is displayed.
list_query_backupids
The list_query_backupids command lists the results of the query
returned from the query_backupids command.
list_query_nbu_backupids
The list_query_nbu_backupids command lists the results of the
query returned from the query_nbu_backupids command.
list_recover_backup_metadata
The list_recover_backup_metadata command lists the overall status and individual media server status of the recover_backup_metadata command.
list_save_sets
The list_save_sets command lists all valid save sets for a given
job name.
list_validate_job_metadata
The list_validate_job_metadata command lists the information returned from a successful validate_job_metadata command.
object_release
The object_release command releases all objects that are currently
locked by a job. It does not release objects for new, running, or queued
jobs.
purge_jobs
The purge_jobs command can be used to clean up the DSC repository when resources are not released after aborting jobs. The command aborts any jobs and purges the resources used by any incomplete jobs. When no job name is provided, it purges resources used by all jobs in BAR and BARBACKUP.
query_backupids
The query_backupids command queries third-party software for
information needed for duplication.
query_nbu_backupids
The query_nbu_backupids command queries NetBackup for
information needed for a NetBackup duplicate.
recover_backup_metadata
The recover_backup_metadata command queries the third party
media to recover backup metadata and rebuild the backup job plan in
the case of a disaster to the DSC repository. The command can only
run on repository backup jobs with no save sets.
retire_job
The retire_job command retires an active job. It does not retire a
running or queued status job, or one that is already in the retired state.
run_job
The run_job command runs a job as soon as all necessary resources
are available. The DSC system limit is set at 20 concurrent running
jobs, and up to 20 jobs can queue above that limit. The DSC also
queues jobs if the defined target media is not available before the job
starts.
run_repository_job
The run_repository_job command runs a job in the DSC
repository.
set_status_rate
The set_status_rate command configures the status update rate
between DSC and the media servers or Teradata systems.
system_health
The system_health command lists ActiveMQ system health
information such as the memory limit and memory usage for the main
DSA queues.
sync_save_sets
The sync_save_sets command sends a sync request to all
NetBackup clients that have save sets older than the
dataset.retention.days value configured in dsc.properties. If the
save sets are expired on the NetBackup side, DSC deletes them from
the DSC repository. No scheduled or ad hoc jobs can run until the
deletion completes.
update_job
The update_job command updates an existing DSA job based on the
information from the command line parameters or the XML file if
provided. Parameter values specified in the command line supersede
values entered for the same parameters in the XML file.
validate_job_metadata
The validate_job_metadata command queries NetBackup for
information needed to validate the save set.
DSA Configuration
DSA configuration commands enable you to set up the DSC repository and system components. These
commands can be used as an alternative to using the BAR Setup portlet. DSA configuration commands
enable you to perform a wide range of configuration activities, including the following:
• Setting general DSC repository settings
• Backing up the DSC repository, and exporting and importing repository backup information
• Configuring systems, nodes, media servers, third-party applications, and target groups
• Enabling and disabling systems and target groups
• Viewing DSC repository and component information
The procedures for using the DSA configuration commands are described in detail in the following sections.
Systems and Nodes
You can add, configure, and set stream limits for systems and nodes in the BAR Setup portlet and by using
DSA setup commands from the DSA command-line interface. After you enable configured systems, they are
available for backup and restore jobs in the BAR Operations portlet and for DSA operation commands.
Notice:
All nodes, including channel and standby nodes, must be included in system configuration.
Configuring a System and Node
Prerequisite
Before using this command, you must prepare an XML file that contains the necessary system and node
configuration information, including the system name, node IP addresses, and limits on the number of
streams per node.
Note:
Setting the stream limits to 0 for channel and standby nodes is not required.
The config_systems command configures the DSC settings for the Teradata system and nodes used for
backup and restore jobs. The command also sets the selector in the targeted system for ActiveMQ.
Even though the repository backup system is already pre-configured, you must rerun config_systems to
set up the system selector and AMP discovery before the system can be enabled for use.
1. Type dsc config_systems followed by the parameters, and press Enter.
For more information, see config_systems
2. Type the source user name and password, and press Enter.
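As a sketch only, a system definition file carries the information listed above: the system name, node IP addresses, and per-node stream limits. The element names and values here are assumptions for illustration; base real files on the samples in $BARCMDLINE_ROOT/samples:

```xml
<!-- Illustrative sketch; element names are assumptions, not the
     documented schema. See the shipped sample XML files. -->
<systems>
  <system>
    <system_name>prod_td</system_name>
    <nodes>
      <node>
        <node_ip>10.25.1.10</node_ip>
        <streams>4</streams>
      </node>
      <node>
        <node_ip>10.25.1.11</node_ip>
        <streams>4</streams>
      </node>
    </nodes>
  </system>
</systems>
```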
Media Servers
Media servers manage data during system backups and restores. Media servers are made available to your
BAR environment as soon as the DSA software is installed and running. DSA administrators can then add or
delete media servers in their BAR configuration, and assign media servers to target group configurations, in the BAR Setup portlet or through the command-line interface by using DSA setup commands.
Adding or Updating a Media Server
The config_media_servers command configures the BAR media servers.
To use the config_media_servers command, you must specify the XML file that contains the necessary configuration information, including the media server name, third-party client, port, and IP address.
1. Type dsc config_media_servers -f followed by the full file path, and press Enter.
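A media server definition of this general shape supplies those values. The element names and values are illustrative assumptions; use the shipped sample XML files for the documented schema:

```xml
<!-- Illustrative sketch only; element names are assumptions. -->
<media_servers>
  <media_server>
    <media_server_name>barms01</media_server_name>
    <third_party_client>netbackup</third_party_client>
    <ip_address>10.25.2.20</ip_address>
    <port>15401</port>
  </media_server>
</media_servers>
```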
Backup Solutions
Backup solutions are third-party software options that transfer data between a storage device and a database
system. You can configure the third-party server software to:
• Indicate the media server on which the third-party backup software is located
• Customize setup options for each server
Adding or Updating a Disk File System
The config_disk_file_system command configures the disk file system in the DSA repository.
To use the config_disk_file_system command, you must specify the XML file that contains the
necessary configuration information, including the file system path and limit for the maximum number of
open files.
When adding or updating a file system from the command line, first export the existing disk file configuration, which includes all current file systems, to an XML file. Add a new file system, or update an existing one and its parameters, in the exported XML file. After saving the file, run the config_disk_file_system command to configure all of the file systems using that file.
1. Type dsc config_disk_file_system -f followed by the full file path, and press Enter.
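The export-edit-apply sequence described above might look like the following. The export_config parameters shown are assumptions (only the -f convention is documented here); check dsc help export_config for the actual syntax:

```shell
# 1. Export the current disk file system configuration to an XML file
#    (parameter names for export_config are assumptions):
dsc export_config -f disk_config.xml
# 2. Edit disk_config.xml to add or update a file system entry.
# 3. Apply the full configuration from the edited file:
dsc config_disk_file_system -f disk_config.xml
```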
Adding or Updating a DD Boost Server
The config_dd_boost command configures the DD Boost server for the DSA repository.
To use the config_dd_boost command, you must specify the XML file that contains the necessary configuration information, including the server name, IP address, media type, storage unit names, and the limit for the maximum number of open files.
1. Type dsc config_dd_boost -f followed by the full file path, and press Enter.
Adding or Updating a NetBackup Server
The config_nbu command configures a DSA system to use Symantec NetBackup third-party software to
back up and restore data.
To use the config_nbu command, you must specify the XML file that contains the necessary configuration information, including the NetBackup server name and IP address, and policy class information. The policies you define must match those defined in NetBackup.
1. Type dsc config_nbu -f followed by the full file path, and press Enter.
Target Groups
Target groups are composed of media servers and devices used for storing backup data. DSA administrators
create target groups, and assign media servers and devices. Target groups are then accessible to BAR backup
jobs.
After a backup job has run to completion, you can create a BAR restore job to restore data using the same
target group as the backup job. You can also create a target group map, which allows a BAR restore job to
restore data from a different target group.
Adding or Updating a Target Group
Prerequisite
Configure a third-party application and media server before adding a target group.
The config_target_groups command configures the target groups based on the target type and the
information from the parameters file.
To use the config_target_groups command, you must specify the XML file that contains the necessary
configuration information, including the target group name, entity, BAR media server, and policy class
information.
Note:
Without installing Data Stream Extensions (DSE), you get access only to the default disk file system
target group.
1. Type dsc config_target_groups followed by the parameters, and press Enter.
For more information, see config_target_groups.
Enabling a System or Target Group
Prerequisite
A system or target group must be configured in your BAR environment before it can be enabled. Use the
BAR Setup portlet or a configuration file to configure the component.
The enable_component command enables an existing BAR component based on the component name
and type.
1. Type dsc enable_component followed by the parameters, and press Enter.
For more information, see enable_component.
Disabling a System or Target Group
Prerequisite
The system or target group cannot be in use when you plan to disable it.
The disable_component command disables an existing BAR system or target group.
1. Type dsc disable_component followed by the parameters, and press Enter.
For more information, see disable_component.
Adding a Target Group Map
Prerequisite
Configure a target group before adding a target group map.
The target group mapping is maintained in an XML file to designate the new configuration for the restore.
The config_target_group_map command configures the map between target groups when restoring to
a different client configuration.
1. Open the XML file containing the target mapping configuration information.
A sample map, sample_target_map.xml, is supplied in the DSC sample library.
2. Specify target group map tags:
Target Group Map Tag          Description
master_source_target_name     The backup target group name.
master_source_dsc_id          The ID of the source DSC repository.
master_dest_target_name       The restore target group name.
target_group_maps             A sub-target grouping consisting of a backup and restore configuration pair.
source_mediaserver_name       The backup media server.
source_policy_class_name      The third-party backup application policy associated with the backup media server.
dest_mediaserver_name         The restore media server.
dest_policy_class_name        The third-party backup application policy associated with the restore media server.
3. Type dsc config_target_group_map followed by the parameters, and press Enter.
For more information, see config_target_group_map.
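As an illustration only, a map file assembled from the tags above might contain an excerpt like the following. The element nesting and all values shown here are assumptions; see sample_target_map.xml in the DSC sample library for the authoritative layout.

```xml
<master_source_target_name>TG_NBU_BACKUP</master_source_target_name>
<master_source_dsc_id>dsc1</master_source_dsc_id>
<master_dest_target_name>TG_NBU_RESTORE</master_dest_target_name>
<target_group_maps>
  <!-- One backup/restore configuration pair (hypothetical names) -->
  <source_mediaserver_name>ms_backup1</source_mediaserver_name>
  <source_policy_class_name>backup_policy</source_policy_class_name>
  <dest_mediaserver_name>ms_restore1</dest_mediaserver_name>
  <dest_policy_class_name>restore_policy</dest_policy_class_name>
</target_group_maps>
```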
Exporting a Target Group Map
The export_target_group_map command exports a map between target groups for restoring to a
different client configuration.
1. Type dsc export_target_group_map followed by the parameters, and press Enter.
For more information, see export_target_group_map.
Deleting a Target Group Map
The delete_target_group_map command deletes a target group map for restoring to a different client
configuration.
1. Type dsc delete_target_group_map followed by the parameters, and press Enter.
For more information, see delete_target_group_map.
Deleting a Component
The delete_component command deletes an existing component based on the information in the
parameters.
To use the delete_component command, you must specify the component name and type, as well as the
system if you are deleting a node. You do not need to specify the component name for disk_file_system
types.
You can delete a system, node, media server, NetBackup server, disk file system, or a target group, except
under the following conditions:
• The system marked for repository backup
• A system in use by a job
• A media server in use by a target group
• A NetBackup server in use by a target group
• A target group in use by a job
• A target group in use by a target group map
• A target group in use for repository backups
• A policy used by a target group
1. Type dsc delete_component followed by the parameters, and press Enter.
For more information, see delete_component.
Viewing Configuration Information
The list commands allow you to view component configurations, DSA general settings, active jobs, retired
jobs, DSC jobs, job history, repository backup settings, valid save sets, and target group maps.
1. Type the list command followed by the parameters, if applicable, and press Enter.
List Command
Description
list_access_module
The list_access_module command lists available
access module types for a named media server.
For more information, see list_access_module.
list_components
The list_components command lists components
defined and stored in the DSC repository. If a specific
component is requested, that component definition is
displayed. Otherwise, a list of the components matching
any provided filters is displayed. Any partial component
name returns all components matching the partial input.
Note:
The Type parameter is required.
For more information, see list_components.
list_general_settings
The list_general_settings command lists all current
general settings.
There are no parameters associated with this command.
list_jobs
The list_jobs command lists jobs defined and stored in
the DSC repository. If a specific job is requested, that job
definition is displayed. Otherwise, a list of job names
matching any provided filters is displayed. If no parameters
are provided, a list of jobs (excluding backup repository
jobs) is displayed.
For more information, see list_jobs.
list_job_history
The list_job_history command lists the complete
history of the job you specify, including the type of system
used during the job execution. If you do not specify a job,
the history of all jobs in the DSC repository are listed.
For more information, see list_job_history.
list_repository_backup_settings
The list_repository_backup_settings command lists all current repository backup settings.
There are no parameters associated with this command.
list_save_sets
The list_save_sets command lists all valid save sets for
a given job name.
For more information, see list_save_sets.
list_target_group_map
The list_target_group_map command lists the maps
between target groups for restoring to a different client
configuration.
For more information, see list_target_group_map.
Exporting DSA Component Configuration
The export_config command exports the current XML definition for the requested BAR component.
1. Type dsc export_config followed by the parameters, and press Enter.
For more information, see export_config.
Managing the DSC Repository
DSA configuration settings and job metadata are stored in the Data Stream Controller (DSC) Repository.
You can automate a repository backup or initiate the backup manually. A repository backup job backs up
your DSC metadata to a target group. Any running DSC repository job (backup, restore, or analyze)
prevents jobs from being submitted and DSA configuration settings from being changed.
Configuration settings and DSC metadata can be restored to the DSC repository from a storage device. If
you abort a DSC repository restore job while the job is in progress, or if the restore job fails, DSC triggers a
command to restore all repository tables to their initial state, which is an empty table. The current data in the
DSC repository would be lost.
Note:
Before you can recover the DSC Repository, a DSC repository backup job and an export of the repository
backup configuration must have been completed successfully at least once. The export of the repository
backup configuration can only be performed using the DSA command line. Failure to perform a
successful repository backup and an export of the repository backup configuration results in an
unrecoverable DSC repository in the case of a complete disaster.
Scheduling and Configuring a Repository Backup
Prerequisite
Before scheduling a DSC repository backup job, configure a remote target group for the job. Use the
sample_config_repository_backup.xml file to identify the repository backup target group and
have scheduling parameters ready.
The config_repository_backup command allows you to schedule a DSC repository backup job.
Note:
Before you can recover the DSC Repository, a DSC repository backup job and an export of the repository
backup configuration must have completed successfully at least once.
1. Type dsc config_repository_backup followed by the parameter, and press Enter.
For more information, see config_repository_backup.
Backing Up the DSC Repository
Prerequisite
Before you back up the DSC repository, run the configure_repository_backup command.
The run_repository_job command allows you to back up the DSC repository.
Note:
Before you can recover the DSC Repository, a DSC repository backup job and an export of the repository
backup configuration must have completed successfully at least once.
1. Type dsc run_repository_job -t backup, and press Enter.
Note:
The run_repository_job command can only be initiated if no operational jobs are running.
Exporting the Repository Backup Configuration
The export_repository_backup_config command exports all configurations associated with setting
up a repository backup job. This includes the system, NetBackup, media servers, and target group associated
with the target selected in config_repository_backup.
Note:
Before you can recover the DSC Repository, a DSC repository backup job and an export of the repository
backup configuration must have completed successfully at least once.
Every time the repository target group is changed, you must run export_repository_backup_config.
1. Type dsc export_repository_backup_config followed by the parameters, and press Enter.
For more information, see export_repository_backup_config.
Restoring the DSC Repository
Prerequisite
A valid DSC repository backup job save set must exist before a DSC repository restore job can be run.
The run_repository_job command allows you to restore the DSC repository.
1. Type dsc run_repository_job followed by the parameters, and press Enter.
For more information, see run_repository_job.
Postrequisite
After the repository restore job successfully completes, perform a tpareset on the repository BAR server
from the Linux command prompt.
About Aborting a DSC Repository Job
Before aborting a DSC repository job, you must have a repository backup job configuration file exported to a
remote location. If you abort a DSC repository restore job while the job is in progress, DSC metadata will be
corrupted. DSC triggers a command to restore all repository tables to their initial state, which is an empty
table. This results in loss of data unless the user follows the disaster recovery procedure to recover the DSC
repository. For more information, see Recovering the DSC Repository.
Planning a Job
Creating a job includes choosing several options. There are different considerations before creating each job
type. The following is an overview of some of the considerations.
Backup Jobs
Backup jobs archive objects from a source system to a target group. Target groups are defined by a BAR
administrator in the BAR Setup portlet or command-line interface.
The BAR Operations portlet allows you to migrate the object list from an existing ARC script into a backup
job. Objects in that list that exist in the specified source system are automatically selected in the object
browser when a new job is created from the migrated ARC script.
When you run a backup job for the first time or when you change the target group for a backup job, all
data from the specified objects is archived. After this initial full backup, you may choose the backup type:
• Full: Archives all data from the specified objects.
• Delta: Archives only the data that has changed since the last backup operation.
• Cumulative: Archives the data that has changed since the last full backup was run.
Restore Jobs
Restore jobs are based on successful executions of backup jobs and can only be created for a backup job
that has successfully run to completion.
You can define a restore job to always restore the latest version of a backup save set or you can specify a
save set version. A target Teradata system must be selected in order to define the restore job. By default,
all objects from the save set are included in the restore job but the selections can be modified.
Analyze Jobs
An analyze job can employ either a read-only or validate analysis method for each job. An analyze read-only
job reads the data from the media device to verify that reads are successful. An analyze validate job
sends the data to the AMPs, where it is interpreted and examined but not restored.
You also need to specify a save set version from a successful backup job run.
Creating or Updating a Job
The create_job command creates a job based on the values you specify for parameters in the command
line or in the XML file. Parameter values you enter in the command line supersede any value you enter for
those parameters in the parameters XML file.
You can also edit an existing job by using the update_job command.
Note:
Before you create a restore or analyze type job, a backup job must have been completed successfully or
run with a warning.
1. Type dsc create_job or dsc update_job followed by the parameters, and press Enter. See
create_job or update_job.
2. Type the user name and password, and press Enter.
Automatically Retiring a Job
You can set jobs to automatically retire by enabling the auto_retire parameter in the XML file containing
the job definition.
1. Open the XML file containing the job definition and set the following:
Parameter     Description
auto_retire   Specify true.
              Note: False is the default value and means the job does not retire automatically.
retire_value  Specify an integer to represent the number of days or weeks until the job is retired.
              Note: The integer can be no longer than three digits.
retire_units  Specify days or weeks.
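For example, the following excerpt (with illustrative values) would set a job to retire automatically after 30 days; the exact placement of these tags within the job definition follows the sample files in the DSC sample library:

```xml
<auto_retire>true</auto_retire>
<retire_value>30</retire_value>
<retire_units>days</retire_units>
```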
Including Objects
You can include database objects for any backup, restore, or analyze_validate job by specifying an object
under objectlist in the XML file containing the job definition.
1. Open the XML file containing the job definition information.
2. Under objectlist, specify:
Option
Description
objectinfo
Contains the information on all objects for the job.
object_name The name of the object. The name can have a maximum of 128 characters.
object_type The type of object. The accepted values are:
AGGREGATE_FUNCTION
AUTHORIZATION
COMBINED_AGGREGATE_FUNCTIONS
CONTRACT_FUNCTION
DATABASE
EXTERNAL_PROCEDURE
GLOP_SET
HASH_INDEX
INSTANCE_OR_CONSTRUCTOR_METHOD
JAR
JOIN_INDEX
JOURNAL
MACRO
NO_PI_TABLE
ORDERED_ANALYTICAL_FUNCTION
QUEUE_TABLE
SERVER_OBJECT
STANDARD_FUNCTION
STATISTICAL_FUNCTION
STORED_PROCEDURE
TABLE
TABLE_FUNCTION
TABLE_OPERATOR
TRIGGER
USER
USER_DEFINED_METHOD
USER_DEFINED_DATA_TYPE
USER_INSTALLED_FILE
VIEW
parent_name The name of the object parent. The name can have a maximum of 128 characters.
Note:
All objects other than databases and users are required to have a parent name.
parent_type [Optional] The type of parent. The accepted values are:
DATABASE
USER
The following XML sample file excerpt shows how to include the VIEW1 object in the DB1 database.
<objectlist>
<objectinfo>
<object_name>VIEW1</object_name>
<object_type>VIEW</object_type>
<parent_name>DB1</parent_name>
<parent_type>DATABASE</parent_type>
</objectinfo>
</objectlist>
Excluding Objects
You can exclude objects from any backup or restore job by amending the excludeobjectinfo in the
XML file containing the job definition.
1. Open the XML file containing the job definition information.
2. Under excludeobjectinfo, specify the object name and object type that you want to exclude.
For example, to exclude tables T1 and T2 from a job that restores database ABC:
<objectinfo>
<object_name>ABC</object_name>
<object_type>DATABASE</object_type>
<parent_name>SYSTEMFE</parent_name>
<parent_type>DATABASE</parent_type>
<!-- Optional -->
<exclude>
<excludeobjectinfo>
<object_name>T1</object_name>
<object_type>TABLE</object_type>
</excludeobjectinfo>
<excludeobjectinfo>
<object_name>T2</object_name>
<object_type>TABLE</object_type>
</excludeobjectinfo>
</exclude>
</objectinfo>
Renaming Tables
You can rename tables in a restore job by adding rename_to to the table in the XML file containing the job
definition.
1. Open the XML file containing the job definition information.
2. Under objectinfo, identify the table to rename and add the new table name inside rename_to.
In the following example, table T1 in the ABC database is restored and renamed as T2:
<objectinfo>
<object_name>T1</object_name>
<object_type>TABLE</object_type>
<parent_name>ABC</parent_name>
<parent_type>DATABASE</parent_type>
<object_attribute_list>
<rename_to>T2</rename_to>
</object_attribute_list>
</objectinfo>
Renaming a Database
You can rename a database in a restore job by adding map_to to the database entry in the XML file
containing the job definition. This option restores the database to the target system using a new name for the database.
1. Open the XML file containing the job definition information.
2. Under objectinfo, identify the database to rename and add the new database name inside map_to.
In the following example, the objects under database ABC are copied to database NEWBAR:
<objectinfo>
<object_name>ABC</object_name>
<object_type>DATABASE</object_type>
<parent_name>SYSTEMFE</parent_name>
<parent_type>DATABASE</parent_type>
<object_attribute_list>
<map_to>NEWBAR</map_to>
</object_attribute_list>
</objectinfo>
Mapping to a Different Database
You can map an object to a different database so that it is restored to the target system under a different user
or database than was originally designated for the backup job. You do this using the map_to tag in the XML
file containing the job definition.
1. Open the XML file containing the job definition information.
2. Under objectinfo, identify the object, and add the new database name inside map_to.
The following example restores table T1, originally backed up from database ABC, to database
NEWBAR:
<objectinfo>
<object_name>T1</object_name>
<object_type>TABLE</object_type>
<parent_name>ABC</parent_name>
<parent_type>DATABASE</parent_type>
<object_attribute_list>
<map_to>NEWBAR</map_to>
</object_attribute_list>
</objectinfo>
Changing Job Options
You can change job options for any backup, restore, or analyze job by amending the job options in the XML
file containing the job definition.
1. Open the XML file containing the job definition information.
2. Under <job_options>, list the necessary job options and place the appropriate value within the job tags.
Note:
True is not an available value for the disable_fallback XML element. Enter the value as follows:
<disable_fallback>false</disable_fallback>.
Note:
If the XML file contains both <run_as_copy>false</run_as_copy> and
<disable_fallback>true</disable_fallback>, the system displays an error when the job is run.
The following example shows the job options section in the XML file for a restore job:
<job_options>
<!-- Optional, DATA/DICTIONARY -->
<data_phase>DATA</data_phase>
<!-- Optional, true/false -->
<enable_temperature_override>true</enable_temperature_override>
<!-- Optional, enum type, COLD/WARM/HOT -->
<temperature_override>HOT</temperature_override>
<!-- Optional, enum type, DEFAULT/ON/OFF -->
<block_level_compression>ON</block_level_compression>
<!-- Optional, true/false -->
<disable_fallback>false</disable_fallback>
<!-- Optional, max 2048 characters -->
<query_band>queryband</query_band>
<!-- Optional, starts with upper case (Error/Info/Debug/Warning) -->
<dsmain_logging_level>Debug</dsmain_logging_level>
<!-- Optional, true/false -->
<nowait>true</nowait>
<!-- Optional, true/false. Note: Only useful for RESTORE jobs. -->
<reblock>true</reblock>
<!-- Optional, true/false -->
<run_as_copy>true</run_as_copy>
<!-- Optional, max 32 characters -->
<saveset_user>barem</saveset_user>
<!-- Optional, max 32 characters -->
<saveset_password>barem</saveset_password>
</job_options>
DSA Command-Line Interface Job Options
You can select and define job options for backup, restore, and analyze jobs by specifying the values for the
options in the XML file containing the job definition.
Note:
The XML files in the sample library and the files produced by the BAR Operations portlet have the XML
in a specific order. For the jobs to run correctly, do not change the order of the XML within the files.
Note:
The following options are for Teradata only.
online (Job Type: Backup)
Determines the type of backup to perform.
• False: Default. Backs up everything associated with each specified object while the database is offline. No updates can be made to the objects during the backup job run.
• True: Backs up everything associated with each specified object and initiates an online archive for all objects being archived. The online archive creates a log that contains all changes to the objects while the archive is prepared.

dataphase (Job Type: Backup)
Determines the type of backup to perform.
• DATA: Default. Performs a full backup.
• DICTIONARY: Performs a dictionary-only backup.

enable_temperature_override (Job Type: Restore)
Optional setting that pertains to restore jobs only.
• True: The temperature_override value is applied when restoring the data.
• False: The temperature from the backup is applied.

nosync (Job Type: Backup)
Determines where synchronization is done for the job. Only available for online backup jobs.
• Default value is false. If false, synchronization occurs across all tables simultaneously. If you try to run a job that includes objects that are already being logged, the job aborts.
• True allows different synchronization points. If you try to run a job that includes objects that are already being logged, the job runs to completion and a warning is returned.

dsmain_logging_level (Job Type: Backup, Restore)
Determines the types of messages that the database job logs.
• Error: Default. Enables minimal logging. Provides only error messages.
• Warning: Adds warning messages to error message logging.
• Info: Adds informational messages to the warning and error messages in the job log.
• Debug: Enables full logging. All messages, including Debug, are sent to the job log.

nowait (Job Type: Backup, Restore)
Generates warning messages to the job status log for objects that the Teradata Database fails to get locks on immediately.
Note: In either case, the job waits until the lock is released.
• Default is true. If true, a warning message is generated when a job is stopped because of a lock.
• If false, no warning message is generated when a job is stopped because of a lock.

reblock (Job Type: Restore)
Determines the level of inserts, either row or block level. This option is valid only for a same-configuration restore. A different-configuration restore always uses row-level inserts.
• Default is false. The restore uses block-level inserts.
• If true, the restore uses row-level inserts, forcing reblocking when tables must be restored matching the target system's default block size. Required when the backup is restored to a different AMP configuration, hash function, hash bucket, row format, or block alignment. Optional when the backup is restored from a system with a different block size than the target system and the block size of the restored data objects must match the target system's block size.

query_band (Job Type: Backup, Restore)
Allows tagging of sessions or transactions with a set of user-defined name-value pairs to identify where a query originated. These identifiers are in addition to the current set of session identification fields, such as user ID, account string, client ID, and application name.
Note: Valid query band values are defined on the database.

temperature_override (Job Type: Restore)
Determines the temperature at which data is restored.
• DEFAULT: Default. The data is restored at the default temperature setting for the system.
• HOT: This data is accessed frequently.
• WARM: This data is accessed less frequently.
• COLD: This data is accessed least frequently.

block_level_compression (Job Type: Restore)
Defines the data compression used.
• DEFAULT: Default. Applies the same data compression as the backup job if allowed on the target system.
• ON: Compresses data at the block level if allowed on the target system.
• OFF: Restores the data blocks uncompressed.

disable_fallback (Job Type: Restore)
Fallback protection means that a copy of every table row is maintained on a different AMP in the configuration. Fallback-protected tables are always fully accessible and are automatically recovered by the system.
Default is false. Must be entered in lowercase letters.
Note: The disable_fallback option only takes effect when the run_as_copy value is true.

run_as_copy (Job Type: Restore)
Allows the restore to run as a copy.
• Default value is false.
• If true, the restore runs as a copy.
Running a Job
1. Type dsc run_job followed by the parameters, and press Enter. See run_job.
Viewing Job Status
The job_status command gets the latest status for a job with the given name and displays it on the screen.
If the job is running, a detailed status message is displayed. If the job is not running, the status of the last run
for that job is displayed.
1. Type dsc job_status followed by the parameters, and press Enter. See job_status.
Considerations for Aborting a Job
You might want to abort a job after it has been submitted to run. For example, you might have
forgotten to specify an object for backup in the job definition XML file. Or you might see, while a backup
job is running, that the data will fill the media before the job completes.
You can use the abort_job command to abort an actively running job or a job in the queue. An abort
command creates two states for the job. While the job state is aborting, devices are being released for other
jobs to use. After the job has reached an aborted state, the job releases DSA stream resources and the DSC
job slot.
If you abort an actively running job, DSA does not create a save set. Any backed-up files are rolled back to the
state they were in following the last complete, successful run. Third-party backup management software
does not keep records of jobs that are not run successfully.
Notice:
If you try to abort a restore job of a DSC repository backup while the job is in progress, the DSC metadata
is corrupted. DSC triggers a command to restore all repository tables to their initial state, which is an
empty table. You would therefore lose your DSC repository backup data.
Aborting a Job
The abort_job command aborts an actively running job, a job in the queue, or a job that is not
responding.
If the abort_job command is given for a job in queue to run, the job is removed from the queue. If the job
is running when the request is sent to abort the job, it can take several minutes for the job to completely
stop.
1. Type dsc abort_job followed by the parameters, and press Enter. See abort_job.
Exporting a Job
The export_job command exports the current XML definition for the requested job.
1. Type dsc export_job followed by the parameters, and press Enter. See export_job.
Retiring a Job
The retire_job command retires an active job. It does not retire a running or queued status job, or one
that is already in the retired state.
1. Type dsc retire_job followed by the parameters, and press Enter. See retire_job.
Activating a Job
The activate_job command activates a retired job, so that it is available.
1. Type dsc activate_job followed by the parameters, and press Enter. See activate_job.
Deleting a Job
The delete_job command deletes a job and any data associated with it from the DSC repository. Any logs
and job history are deleted and cannot be restored. Any backup save sets created for the job that exist on
devices managed by third-party solutions must be deleted manually using the interface for that solution.
Note:
This command only deletes new or retired jobs.
1. Type dsc delete_job followed by the parameters, and press Enter. See delete_job.
CHAPTER 4
Troubleshooting
Log Files
DSA Logs
Log File: dsc.log
Location: Resides on the DSC server in one of the following locations:
• $DSA_LOG_DIR
• /var/opt/teradata/dsa/logs
Description: Logs DSA job metadata processing.
Defaults:
• Log level: Info
• Log file size: 50 MB
• Log file count: 10

Log File: clienthandler.log
Location: Resides on the DSC server or media servers in one of the following locations:
• $DSA_LOG_DIR
• /var/opt/teradata/dsa/logs
Description: Logs DSA Network Client (ClientHandler) processing.

Log File: dscCommandline.log
Location: Resides on the DSC server or media servers in one of the following locations:
• $DSA_LOG_DIR
• /var/opt/teradata/dsa/logs
Description: Logs the command-line process.

Log File: dsarest.log
Location: Resides on the DSC server in one of the following locations:
• $DSA_LOG_DIR
• /var/opt/teradata/dsa/logs
Description: Logs the REST server status.
Viewpoint Logs
Log File: Viewpoint logs
Location: /var/opt/teradata/viewpoint/portal/logs
Description: Logs Viewpoint portal information.
Third-Party Backup Application Logs
Log File: dsa_tdbackex.log
Location: /var/opt/teradata/dsa/logs/dsa_tdbackex.log
Description: NetBackup job log that contains error messages.
Teradata Database Logs
Log File: BARLog_rsgno_partitionno*.txt
Location: In one of the following locations:
• <PDE-temp-path>
• /var/opt/teradata/tdtemp/bar
Description: Logs DSMain information, which can be useful in determining why a job failed.
When DSMain logs are set to the DEBUG logging level, more than one log file is created in
/var/opt/teradata/tdtemp/bar. For troubleshooting purposes, it is best to:
1. Gather all the log files in the directory.
2. Zip the directory on all nodes involved in the job.
3. Send the files through email to DSA Frontline.

Location: /var/opt/teradata/tdtemp/post_restore_<version>
Description: Logs post_data_restore and post_dbc_restore script information. This information is useful
in determining what happened during the post-restore scripts.
TVI Logging
TVI allows critical failures to be reported to Teradata immediately. Logging by TVI is enabled by default, but
can be disabled by setting the value of the logger.useTviLogger property in the dsc.properties and
clienthandler.properties files to false.
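For example, TVI logging could be disabled by setting the property named in the paragraph above in both dsc.properties and clienthandler.properties (a sketch; a restart of the corresponding service is presumably required for the change to take effect):

```properties
logger.useTviLogger=false
```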
TVI errors that are reported by Teradata DSA are listed below by message ID number. All of the errors above
4000000 are critical. Event codes are supported in CMIC release 11.01.02 or later unless otherwise noted.
1751001
Synopsis: DSC Server was started.
Meaning: DSC Server was started.
Probable Cause: DSC Server was started.
Recommendations: n/a
1751002
Synopsis: DSC was stopped.
Meaning: DSC Server was stopped.
Probable Cause: The service could be stopped manually or during a shutdown or reboot, or the DSC
repository is down.
Recommendations: Check the log file for DSC at $DSA_LOG_DIR/dsc.log.
1752001
Synopsis: Alert for ad-hoc collection of support bundle.
Meaning: DSA support information collected.
Probable Cause: DSA support script was executed to collect support information.
Recommendations: The collected support information can be used for troubleshooting.
17530001
Synopsis: BARNC.
Meaning: BARNC (the DSA Network Client) was started.
Probable Cause: The DSA Network Client was started.
Recommendations: n/a
17530002
Synopsis: BARNC.
Meaning: BARNC (the DSA Network Client) was stopped.
Probable Cause: A service could be stopped manually or during a shut down or reboot. DSA Network
Client should start automatically on reboot.
Recommendations: Check the log file for BARNC (DSA Network Client) at $DSA_LOG_DIR/
clienthandler.log.
4751001
Synopsis: DSC and Commons version mismatch
Meaning: DSC version does not match Commons version, DSC failed to start.
Probable Cause: Installed package is using the wrong libraries and is not valid.
Recommendations: Request a new build.
4751002
Synopsis: DSC cannot connect to JMS broker.
Meaning: The JMS broker is unreachable from the DSC.
Probable Cause: ActiveMQ service may be down or DSC may have incorrect JMS broker port/url
configuration.
Recommendations: Check ActiveMQ service is running. Verify JMS broker port/url on DSC.
4751003
Synopsis: JDBC connection to DSC repository failed.
Meaning: Failed to connect to the DSC repository.
Probable Cause: The DSC repository is unavailable or there is a network error.
Recommendations: Check that the database is running. Verify the logon user/password.
4751005
Synopsis: DSC and DBS version mismatch.
Meaning: DSC version does not match DBS version, DSC version must be equal or higher than DBS
version.
Probable Cause: The wrong package was installed.
Recommendations: Upgrade DSC package or downgrade DBS version.
4751006
Synopsis: Memory usage of the DSA communication queue has reached the 75% threshold that triggers
TVI logging.
Meaning: DSA hangs.
Probable Cause: DSC consumers are not available when status messages are sent from DSA Network
Client and DSMAIN producers.
Recommendations: Set deleteAllMessagesOnStartup="true" in /opt/teradata/tdactivemq/
config/td-broker.xml. Then restart the ActiveMQ broker.
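For reference, the attribute belongs on the broker element of td-broker.xml. Everything below except deleteAllMessagesOnStartup is illustrative; leave your existing broker attributes and child elements unchanged:

```xml
<!-- Sketch of /opt/teradata/tdactivemq/config/td-broker.xml: only the
     deleteAllMessagesOnStartup attribute comes from the recommendation above;
     the other attributes shown are placeholders. -->
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="td-broker"
        deleteAllMessagesOnStartup="true">
    <!-- existing transport connectors and policies unchanged -->
</broker>
```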
4753003
Synopsis: BARNC crash.
Meaning: BARNC (DSA Network Client) crashed.
Probable Cause: DSA Network Client crashed due to segmentation violation, broken pipe, or unknown
reasons.
Recommendations: Find related core files in the Teradata core directory.
4753004
Synopsis: NFS full
Meaning: The network file system has run out of space.
Probable Cause: There is a problem writing to file due to lack of disk space.
Recommendations: Free up NFS mounted disk space.
4753005
Synopsis: NFS mount failure
Meaning: There is a network file system mount failure.
Probable Cause: There is a problem writing to file due to an NFS mount error.
Recommendations: Verify that NFS is properly mounted and running.
BARNC (DSA Network Client) Error Codes
BARNC (DSA Network Client) errors are reported in the clienthandler.log and $DSA_LOG_DIR/dsc.log files.
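When triaging, a quick scan of the ClientHandler log for the error strings listed below is often the first step. This sketch creates a demo log file so it is runnable anywhere; on a media server, point DSA_LOG_DIR at the real log directory instead:

```shell
# Demo log directory; on a media server $DSA_LOG_DIR is set by the DSA installation.
DSA_LOG_DIR=/tmp/dsa_logs_demo
mkdir -p "$DSA_LOG_DIR"
printf 'INFO ClientHandler started\nERROR BSASendData failed: media write error\n' \
  > "$DSA_LOG_DIR/clienthandler.log"

# Show each matching line with its line number.
grep -n 'failed' "$DSA_LOG_DIR/clienthandler.log"
```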
BSASendData failed
BSAGetData failed
Possible causes: An error occurred in the media management software that prevents the backup or
restore operation from completing successfully. The message includes additional text indicating the
reason for the failure.
Remedy: Check the media management software and backup devices to ensure that they are operating
correctly.
CBB Filter write open failed
CBB Filter write seek failed
CBB Filter write failed
Possible Causes (for any of the three errors above): The shared storage area used for Teradata
incremental restore is full or is not properly configured.
Remedy: Check that the allocated path is available, has the proper permissions set, and is not full.
Failed to bind any listen sockets.
Possible causes: An attempt to start the DSA Network Client service was made while it was already
running, or another process is using the network port assigned to the DSA Network Client service.
Remedy: Use the /etc/init.d/clienthandler status command to determine if the DSA Network Client
service is running. Also, check that the port number specified in the clienthandler.properties file
is correct.
JMSSessionFactory could not be created
Possible causes: The ActiveMQ message broker service is not active or cannot be reached.
Remedy: Check that the ActiveMQ message broker service for DSA is running, and that network access
between media servers is working properly. The message may include additional information that is
helpful in determining the cause of the failure.
No matching file: "Filename"
Possible causes: An attempt was made to restore an expired or failed backup.
Remedy: Check that the backup of the job being restored was successfully completed and was not deleted
or expired by the media management software.
Read passed end of file
Possible causes: An attempt was made to restore a corrupted or incomplete backup file.
Remedy: Check that the backup being restored was successfully completed. Also, try to run a
read_analyze job on the backup to determine whether the file is readable.
Locked Object Restrictions
The Teradata Database system places locks on objects during the time that transactions take place. For
example, if a transaction is taking place on a table, you cannot run a read operation until the previous
transaction is complete.
If there are locks on objects included in a job, DSA waits indefinitely until the locks are released. When this
happens, the object status in the Log tab of the Job Status view displays "LOCKED" to indicate that the
object is locked. In this situation, you might want to abort the job and run the object_release command
to attempt to release the locks, so that the job can be run when all of the objects can be accessed.
Releasing Locked Objects
The object_release command releases all objects that are currently locked by a job. It does not release
objects for new, running, or queued jobs.
1. Type dsc object_release followed by the parameters, and press Enter.
Note:
The locked objects are released by the object_release command only if the user has access privileges
on the job objects.
Parameter
Description
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
S|Skip_prompt SkipPrompt
[Optional] Skips displaying a confirmation message before performing the command action.
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint user, and triggers a password prompt for authentication.
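As an illustration, a typical invocation might look like the following. The job name is hypothetical, and -S (skip prompt) is assumed to be the short form of the parameter above. The command is echoed here because dsc is only available on DSA servers; on the DSC server you would run it directly:

```shell
# Hypothetical job name; run the echoed command directly on the DSC server.
JOB="daily_sales_backup"
echo dsc object_release -n "$JOB" -S
```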
APPENDIX A
Administrative Tasks
Creating a Diagnostic Bundle for Support
Teradata DSA provides a command-line, interactive script to collect:
• Current time on the server
• Component versions, for the Teradata Database, DSC, DSA Network Client, NetBackup, Viewpoint,
BAR Command Line, ActiveMQ
• General system configurations, such as the number of media servers, DSC server status
• Property files and logs for the DSC component and the DSA Network Client
This information is useful for troubleshooting by Teradata Customer Support.
The dsa_support_info.sh script is installed in the root directory associated with a DSA installation
package:
• $DSA_DSC_ROOT on the DSC server
• $CLIENTHANDLER_ROOT on a media server
• $BARCMDLINE_ROOT on the command line server
1. [Optional] To set up an SSH trust relationship between the DSC server and BAR servers or remote servers, run dsatrust.sh with the following syntax:
dsatrust.sh [-a] [-l <hosts>] [-u user] [-v] [-h]
Parameter
Description
-a
Sets trusted relations between the DSC server and other BAR servers.
-l hosts
Sets trusted relations to the listed hosts.
Host names must be separated by commas.
-u user
Specify login user name.
You must use this parameter with the -a or -l parameter.
-v
Displays the script version.
-h
Displays help.
2. Run dsa_support_info.sh on a server to collect local information, using the following syntax:
dsa_support_info.sh [-b <nbu jid>] [-c] [-d <path>] [-g] [-h] [-i] [-j <job>] [-s <timestamp>] [-t] [-v] [-x]
Parameter
Description
-b nbu jobid
Collects NetBackup job information. Enter the NetBackup job ID as nbu jobid.
-c
Collects core dumps for the DSA Network Client.
-d path
Specifies the dump directory path.
Default: /var/opt/teradata/dsa/support/hostname_current timestamp
-g
Collects general support information.
-h
Displays help.
-i
Collects installation logs.
-j job
Collects job information. Enter the job name as job.
-s timestamp
Collects logs for the date and time specified in the timestamp.
Specify the timestamp in the following format, for example: 2015-05-15 09:50.
-t
Triggers a TVI event.
If the -t parameter has been used to trigger a TVI event, the data is zipped. If
the 50MB limit per TVI event is exceeded, the support bundle is split into
multiple zip files, and a separate TVI event is triggered for each zip file.
The zip files are copied to /var/opt/teradata/SupportBundle/. You
can access the unzipped files in the dump directory specified by the -d
parameter.
-v
Displays the script version.
-x
Bypasses all user prompts.
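Combining several of the parameters above, a typical non-interactive run might look like this. The job name and dump directory are hypothetical, and the command is echoed rather than executed because the script is only present on DSA servers:

```shell
# Hypothetical job name and dump directory; run the echoed command directly on a
# DSC, media, or command-line server.
JOB="daily_sales_backup"
DUMP_DIR=/tmp/dsa_support_demo
echo dsa_support_info.sh -g -j "$JOB" -d "$DUMP_DIR" -x
```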
After the script runs, files specific to the local server are output and can include:
• Component information in .out files
• Job output xml and .out files
• Log files
• Property files
• DSA Network Client core dumps
The output directory is /var/opt/teradata/dsa/support/hostname_current timestamp or a
user-specified directory.
Time Zone Setting Management
You must disable Time Zone settings on the destination system before restoring the DBC database. Otherwise, the restoration fails. After restoring the DBC database and user data, and then running the necessary DIP script, you can re-enable Time Zone settings on the destination system, as preferred.
As a first step in the restoration process, check the Time Zone setting status on the destination system.
Checking Time Zone Setting Status
Before restoring the DBC database, use the dbscontrol utility to check whether the Time Zone is set on the
destination system.
1. From a command prompt on the destination system, run the dbscontrol utility to display general
settings:
#dbscontrol
> display general
18. System TimeZoneString = American Eastern
If the return value for System TimeZoneString is anything except Not Set, as in the above example, disable the Time Zone setting before restoring the DBC database.
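The decision can be sketched as a simple check on the captured dbscontrol output. The sample line mirrors the example above; in practice you would capture the line from the dbscontrol display:

```shell
# Captured dbscontrol output line (sample mirrors the example above).
LINE='18. System TimeZoneString = American Eastern'
case "$LINE" in
  *'= Not Set'*) echo 'Time Zone not set: safe to restore DBC' ;;
  *)             echo 'Time Zone set: disable it before restoring DBC' ;;
esac
```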
Enabling the Time Zone Setting
If you disabled the Time Zone setting on the destination system before restoring the DBC database, enable the setting after the post_data_restore script finishes running.
1. On the destination system, access the directory where you saved the copy of tdlocaledef.txt. For
example:
# cd /opt/teradata/tdat/tdbms/15.10.00.02/etc/
# cd /opt/teradata/tdat/tdbms/15.00.00.00/etc/
# cd /opt/teradata/tdat/tdbms/14.10.05.04/etc/
2. Run the tdlocaledef utility on the saved copy of tdlocaledef.txt. For example:
# /usr/tdbms/bin/tdlocaledef -input tdlocaledef.txt.orig -output new
3. Issue the tpareset command to enable the Time Zone setting:
# tpareset -f set TimeZoneString
4. Run the dbscontrol utility to confirm that the Time Zone setting is enabled. For example:
# dbscontrol
> display general
18. System TimeZoneString = American Eastern
Disabling the Time Zone Setting
If the Time Zone setting is enabled on the destination system, disable the setting before restoring the DBC database.
1. On the destination system, locate the tdlocaledef.txt file that was used to enable the Time Zone
setting, and access that directory. For example:
# locate tdlocaledef.txt
/opt/teradata/tdat/tdbms/15.10.00.02/etc/tdlocaledef.txt
# cd /opt/teradata/tdat/tdbms/15.10.00.02/etc/
# locate tdlocaledef.txt
/opt/teradata/tdat/tdbms/15.00.00.00/etc/tdlocaledef.txt
# cd /opt/teradata/tdat/tdbms/15.00.00.00/etc/
# locate tdlocaledef.txt
/opt/teradata/tdat/tdbms/14.10.05.04/etc/tdlocaledef.txt
# cd /opt/teradata/tdat/tdbms/14.10.05.04/etc/
2. Save a copy of the current tdlocaledef.txt file. For example:
# cp tdlocaledef.txt tdlocaledef.txt.orig
3. In tdlocaledef.txt, remove the value for the TimeZoneString property, leaving only the quotation
marks. For example:
# vi tdlocaledef.txt
// System Time Zone string
TimeZoneString {""}
:wq!
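The manual vi edit above can also be scripted with sed. This sketch works on a temporary copy with a sample value; on the real system, run the substitution against the tdlocaledef.txt in the version-specific etc directory after saving the backup copy:

```shell
# Work on a copy; on the real system edit the saved tdlocaledef.txt in place.
TDL=/tmp/tdlocaledef.txt
printf 'TimeZoneString {"American Eastern"}\n' > "$TDL"

# Blank the value, leaving only the quotation marks, as in the manual edit above.
sed -i 's/TimeZoneString {"[^"]*"}/TimeZoneString {""}/' "$TDL"
cat "$TDL"
```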
4. Run the tdlocaledef utility on tdlocaledef.txt:
# /usr/tdbms/bin/tdlocaledef -input tdlocaledef.txt -output new
5. Issue the tpareset command to disable the Time Zone setting:
# tpareset -f remove TimeZoneString
6. Run the dbscontrol utility to confirm that the Time Zone setting is disabled:
# dbscontrol
> display general
18. System TimeZoneString = Not Set
Managing DBC and User Data in the BAR Portlets
The DBC database contains critical system tables that define the user databases in the Teradata Database.
The system tables, views, and macros in the DBC database cannot be selectively backed up and must be
backed up in their entirety. The BAR Operations portlet does not allow any of the DBC system objects to be
selected for backup.
Note:
Full user access privileges are required for archiving or restoring DBC.
To back up and restore the DBC database and all user databases, you must create a separate backup job for
the DBC database, then a separate backup job for the other databases. After the backup jobs have
successfully completed, you must create a separate restore job for each of the backup jobs.
Note:
When restoring the DBC backup, the DBC password of the system being backed up is required.
1. Back up the DBC and user data in BAR Operations.
2. Restore the DBC and user data in BAR Operations.
Backing Up DBC and User Data
You must create two backup jobs. One includes only the DBC database and one includes all the databases
under DBC and excludes the DBC database automatically.
1. From the BAR Operations Saved Jobs view, create a backup job that saves only the DBC database:
a) Click New Job.
b) Select the Backup job type, then click OK.
c) In the New Backup Job view, enter a job name, such as Backup-DBC-Only.
d) Select a Source System from the list.
e) Enter the user credentials.
f) Select a Target Group from the list.
g) [Optional] Add a Description.
h) Select the DBC database in the Objects tab.
i) [Optional] To verify the parent and objects selected, click the Selection Summary tab.
Note:
Size information is not available for DBC backup jobs. N/A displays as the size value for DBC
backup jobs.
j) [Optional] To adjust job settings for the job, click the Job Settings tab.
k) Click Save.
l) Click the actions menu on Backup-DBC-Only and select Run.
2. From the BAR Operations Saved Jobs view, create a backup job that saves the databases under DBC:
a) Click New Job.
b) Select the Backup job type, then click OK.
c) In the New Backup Job view, enter a job name, such as Backup-DBC-All.
d) Select a Source System from the list.
e) Enter the user credentials.
f) Select a Target Group from the list.
g) Select the DBC database in the Objects tab.
h) Click the settings icon next to the DBC database.
The Settings dialog box appears.
i) Check the Include all children databases and users box.
j) Click OK.
k) Click Save.
l) Click the actions menu on Backup-DBC-All and select Run.
3. Save the name of both backup job save sets.
The backup jobs must complete with a COMPLETED_SUCCESSFULLY or WARNING status before
you can create a restore job.
Restoring DBC and User Data
You must create two restore jobs from the two backup job save sets. One includes only the DBC database
and one includes all the databases under DBC and excludes DBC.
The backup jobs must have successfully completed to create restore jobs from the save sets.
Note:
Before restoring the DBC database, the target system must be reinitialized by running SYSINIT.
1. On the target system, run the DBS Database Initializing Program (DIP) and execute the following:
• For DBS 15.0 and higher, run the DIPMIG script, which runs DIPERRS, DIPDB, DIPVIEWS, DIPVIEWSV, and DIPBAR.
• For DBS 14.10, run the DIPERRS, DIPVIEWS, and DIPBAR scripts (1, 4, and 21).
2. Check the Time Zone setting status on the target system, and disable the setting if it is enabled.
3. From the BAR Setup portlet, check the activation status of the system that has been reinitialized by SYSINIT and perform one of the following:
• If the target system is configured and enabled in the BAR Setup portlet, click Deactivate System to deactivate the target system, and then click Update system selector for JMS Messages.
• If the target system is not configured in the BAR Setup portlet, add the system and click Activate System before the restore can proceed.
4. On the target system, start DSMain from the Database Window (DBW) console supervisor screen by entering:
start bardsmain
5. After DSMain starts, activate the system in the BAR Setup portlet.
6. On the target system, from the Database Window (DBW) console supervisor screen, type enable dbc logons to enable logons for the DBC user only.
7. From the BAR Operations Saved Jobs view, create a restore job from the backup job save set that saved only the DBC database:
a) Click the actions menu on the backup job created for DBC only, and select Create Restore Job.
b) Enter a Job Name, such as Restore-DBC-Only.
c) Select a Destination System from the list.
d) When prompted, enter login credentials for the current DBC user and password for the target DBS.
e) Select a Target Group from the list.
f) Click Job Settings > Set Credentials to enter the credentials for the DBC user and password of the source system at the time the backup save set was generated.
g) Click Save.
h) Click the actions menu on Restore-DBC-Only and select Run.
i) If there are any errors, follow the instructions in the log file to correct the problem and run the post dbc script again.
The post restore script output log files are saved in /var/opt/teradata/tdtemp/post_restore_dbs version.
Note:
After the DBC restore is complete, the DBC password is set to the source system's DBC password.
8. From the BAR Operations Saved Jobs view, create a SYSLIB database restore job from the backup job save set of the databases under DBC, excluding DBC:
a) Click the actions menu on the backup job created for the databases under DBC, and select Create Restore Job.
b) Enter a Job Name, such as Restore-SYSLIB.
c) Select a Destination System from the list.
d) When prompted, enter login credentials for the current DBC user and password for the target DBS.
e) Select a Target Group from the list.
f) On the Objects tab, clear the top checkbox, then expand the tree and select the checkbox for only SYSLIB.
g) Click Save.
h) Click the actions menu on Restore-SYSLIB and select Run.
9. From the BAR Operations Saved Jobs view, create a restore job from the backup job save set of the databases under DBC, excluding DBC:
a) Click the actions menu on the backup job for the databases under DBC, and select Create Restore Job.
b) Enter a Job Name, such as Restore-DBC-All.
c) Select a Destination System from the list.
d) When prompted, enter login credentials for the current DBC user and password for the target DBS.
e) Select a Target Group from the list.
f) For DBS 15.00 or later, clear the checkbox for TD_SERVER_DB in the Objects tab. TD_SERVER_DB has some dependencies that must be met before it can be restored.
g) Click Save.
h) Click the actions menu on Restore-DBC-All and select Run.
i) If there are any errors, follow the instructions in the log file to correct the problem and run the post data script again.
The post restore script output log files are saved in /var/opt/teradata/tdtemp/post_restore_dbs version.
10. For DBS 15.00 or later, from the BAR Operations Saved Jobs view, create a TD_SERVER_DB restore job from the backup job save set of the databases under DBC, excluding DBC:
a) Click the actions menu on the backup job for the databases under DBC, and select Create Restore Job.
b) Enter a Job Name, such as Restore-TD_SERVER_DB.
c) Select a Destination System from the list.
d) When prompted, enter login credentials for the current DBC user and password for the target DBS.
e) Select a Target Group from the list.
f) On the Objects tab, clear the top checkbox, then expand the tree and select the checkbox for only TD_SERVER_DB.
g) Click Save.
h) Click the actions menu on Restore-TD_SERVER_DB and select Run.
11. On the target system, run the DBS Database Initializing Program (DIP) and execute the DIPALL script.
12. If you disabled the Time Zone setting on the target system, enable it.
13. On the target system, from the Database Window (DBW) console supervisor screen, enable logons for all users.
Managing DBC and User Data in the Command Line
The DBC database contains critical system tables that define the user databases in the Teradata Database.
The system tables, views, and macros in the DBC database cannot be selectively backed up and must be
backed up in their entirety. The BAR Operations portlet does not allow any of the DBC system objects to be
selected for backup.
Note:
Full user access privileges are required for archiving or restoring DBC.
To back up and restore the DBC database and all user databases, you must create a separate backup job for
the DBC database, then a separate backup job for the other databases. After the backup jobs have
successfully completed, you must create a separate restore job for each of the backup jobs.
Note:
When restoring the DBC backup, the DBC password of the system being backed up is required.
1. Back up the DBC and user data using the command line.
2. Restore DBC and user data using the command line.
Backing Up DBC and User Data
You must create two backup jobs, one for DBC only and one for all DBC user data.
Note:
Before you create a restore job, a backup job must have completed successfully or with a warning.
1. Create a backup job that includes only the DBC database.
Example: dsc create_job -n DBC-Only -f DBCOnlyJob.xml
The following code example excludes all children of DBC by setting the <includeAll> attribute to
false.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dscCreateJob xmlns="http://schemas.teradata.com/v2012/DSC">
<job_instance>
<job_name>DBC-Only</job_name>
<job_description>Backup DBC only and exclude all child objects</job_description>
<job_type>BACKUP</job_type>
<job_state>ACTIVE</job_state>
<auto_retire>false</auto_retire>
<objectlist>
<objectinfo>
<object_name>DBC</object_name>
<object_type>DATABASE</object_type>
<parent_name></parent_name>
<parent_type>BACKUP_JOB</parent_type>
<object_attribute_list>
<includeAll>false</includeAll>
</object_attribute_list>
</objectinfo>
</objectlist>
</job_instance>
<source_tdpid>systemname</source_tdpid>
<target_media>1_5_drives</target_media>
<job_options>
<online>false</online>
<data_phase>DATA</data_phase>
<query_band></query_band>
<dsmain_logging_level>Error</dsmain_logging_level>
<nowait>true</nowait>
</job_options>
</dscCreateJob>
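Before passing a job definition file to dsc create_job, it can help to confirm the file is well-formed XML. This sketch writes a minimal, hypothetical definition and parses it with Python's standard-library XML parser (assumed available); dsc itself reports schema-level problems, so this only catches malformed markup early:

```shell
# Write a minimal (hypothetical) job definition, then check that it parses as XML.
cat > /tmp/DBCOnlyJob.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dscCreateJob xmlns="http://schemas.teradata.com/v2012/DSC">
  <job_instance>
    <job_name>DBC-Only</job_name>
  </job_instance>
</dscCreateJob>
EOF
python3 -c "import xml.dom.minidom; xml.dom.minidom.parse('/tmp/DBCOnlyJob.xml')" \
  && echo 'XML OK'
```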
2. When prompted, enter login credentials.
3. Run the job.
Example: dsc run_job -n DBC-Only
4. Create a backup job for user data that includes all children of DBC but excludes the DBC database.
Example: dsc create_job -n DBC-All -f DBCAllJob.xml
The following code example includes all children of DBC by setting the <includeAll> attribute to
true.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dscCreateJob xmlns="http://schemas.teradata.com/v2012/DSC">
<job_instance>
<job_name>DBC-All</job_name>
<job_description>Backup user data and all objects in DBC</job_description>
<job_type>BACKUP</job_type>
<job_state>ACTIVE</job_state>
<auto_retire>false</auto_retire>
<objectlist>
<objectinfo>
<object_name>DBC</object_name>
<object_type>DATABASE</object_type>
<parent_name></parent_name>
<parent_type>BACKUP_JOB</parent_type>
<object_attribute_list>
<includeAll>true</includeAll>
</object_attribute_list>
</objectinfo>
</objectlist>
</job_instance>
<source_tdpid>systemname</source_tdpid>
<target_media>1_5_drives</target_media>
<job_options>
<online>false</online>
<data_phase>DATA</data_phase>
<query_band></query_band>
<dsmain_logging_level>Error</dsmain_logging_level>
<nowait>true</nowait>
</job_options>
</dscCreateJob>
5. If prompted, enter login credentials.
6. Run the job.
Example: dsc run_job -n DBC-All
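The complete two-job backup sequence from the steps above, in order, is shown below. The commands are echoed rather than executed (dsc is only available on DSA servers); the job and file names are the examples used in this section:

```shell
# The four commands from the procedure above, in order.
for CMD in \
  'dsc create_job -n DBC-Only -f DBCOnlyJob.xml' \
  'dsc run_job -n DBC-Only' \
  'dsc create_job -n DBC-All -f DBCAllJob.xml' \
  'dsc run_job -n DBC-All'
do
  echo "$CMD"
done
```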
Restoring DBC and User Data
You must create two restore jobs, one for DBC only and one for all DBC user data.
Note:
Before you create a restore job, a backup job must have been completed successfully or run with a
warning.
1. On the target system, run the DBS Database Initializing Program (DIP) and execute the following:
• For DBS 15.0 and higher, run the DIPMIG script, which runs DIPERRS, DIPDB, DIPVIEWS, DIPVIEWSV, and DIPBAR.
• For DBS 14.10, run the DIPERRS, DIPVIEWS, and DIPBAR scripts (1, 4, and 21).
2. Check the activation status of the system running SYSINIT and perform one of the following:
• If the system is already configured and activated in DSA, deactivate the system, add the system again, and reactivate the system. Re-adding the system will repopulate the connectinfo macro in the SYSBAR database with the JMS connection information. See Disabling a System or Target Group, Configuring a System and Node, and Enabling a System or Target Group.
• If the target system is not configured in DSA, the system must be added and activated before the restore can proceed. See Configuring a System and Node and Enabling a System or Target Group.
3. Start DSMain on the target system from the Database Window (DBW) console supervisor screen by entering start bardsmain.
4. After bardsmain has started, activate the system by entering dsc enable_component -n Tivtera -t SYSTEM.
5. On the target system, from the Database Window (DBW) console supervisor screen, type enable dbc logons to enable logons for the DBC user only.
6. Create a DBC-Only restore job, excluding all children of DBC.
Example: dsc create_job -n Restore-DBC-Only -f RestoreDBCOnlyJob.xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dscCreateJob xmlns="http://schemas.teradata.com/v2012/DSC">
<job_instance>
<job_name>Restore-DBC-Only</job_name>
<job_description>Restore DBC only and exclude all child objects</job_description>
<job_type>RESTORE</job_type>
<job_state>ACTIVE</job_state>
<auto_retire>false</auto_retire>
<backup_name>DBC-Only</backup_name>
<backup_version>0</backup_version>
<all_backup_objects>true</all_backup_objects>
</job_instance>
<source_media>1_5_drives</source_media>
<target_tdpid>systemname</target_tdpid>
<job_options>
<enable_temperature_override>false</enable_temperature_override>
<temperature_override>DEFAULT</temperature_override>
<block_level_compression>DEFAULT</block_level_compression>
<disable_fallback>false</disable_fallback>
<query_band></query_band>
<dsmain_logging_level>Debug</dsmain_logging_level>
<nowait>true</nowait>
<reblock>false</reblock>
<run_as_copy>false</run_as_copy>
</job_options>
</dscCreateJob>
7. When prompted, enter login credentials:
Option
Description
Target username
Enter the current DBC user for the target DBS.
Target password
Enter the current DBC password for the target DBS.
Is this Restore job a DBC restore?
Enter y.
Username for the backup
Enter dbc.
Password for the backup
Enter the DBC password of the source system at the time the backup save set was generated.
8. Run the job.
Example: dsc run_job -n Restore-DBC-Only
After the DBC restore job is complete, the DBC password is set to the DBC password of the source system. The DBC database must restore successfully before you can restore user data.
9. Create a DBC-All restore job for user data, including all children of DBC.
Example: dsc create_job -n Restore-DBC-All -f RestoreDBCAllJob.xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dscCreateJob xmlns="http://schemas.teradata.com/v2012/DSC">
<job_instance>
<job_name>Restore-DBC-All</job_name>
<job_description>Restore user data and all objects in DBC</job_description>
<job_type>RESTORE</job_type>
<job_state>ACTIVE</job_state>
<auto_retire>false</auto_retire>
<backup_name>DBC-All</backup_name>
<backup_version>0</backup_version>
<all_backup_objects>true</all_backup_objects>
</job_instance>
<source_media>1_5_drives</source_media>
<target_tdpid>systemname</target_tdpid>
<job_options>
<enable_temperature_override>false</enable_temperature_override>
<temperature_override>DEFAULT</temperature_override>
<block_level_compression>DEFAULT</block_level_compression>
<disable_fallback>false</disable_fallback>
<query_band></query_band>
<dsmain_logging_level>Debug</dsmain_logging_level>
<nowait>true</nowait>
<reblock>false</reblock>
<run_as_copy>false</run_as_copy>
</job_options>
</dscCreateJob>
10. When prompted, enter login credentials and run the job.
Example: dsc run_job -n Restore-DBC-All
11. On the target system, run the DBS Database Initializing Program (DIP) and execute the DIPALL script.
12. On the target system, from the Database Window (DBW) console supervisor screen, enable logons for all users.
Protecting the DSC Repository
The DSC repository stores all DSA data, including configuration definitions and settings, job definitions, job
status, and job history. Therefore it is critical to protect the data in the DSC repository. Without the DSC
repository, database backup data sets cannot be restored.
The DSC repository should be backed up every day after all of the daily database backup jobs have
completed. The DSC backup data set must also be included in the disaster recovery and data protection
policies of the organization. This includes any vaulting or off-site storage of backup datasets.
1. Configure a target group for your DSC repository backup and schedule a DSC repository backup using
one of the following methods:
• BAR Setup portlet
• Command line
2. Type dsc export_config -t SYSTEM -n dscnode_Repository_name -f
repository_config_system.xml on the DSC server.
Notice:
Keep repository_config_system.xml in a safe, known location to be used in case of a disaster.
3. Type dsc export_repository_backup_config -f
export_repository_backup_config.xml.
Notice:
Keep export_repository_backup_config.xml in a safe, known location to be imported back
into DSC in case of a disaster.
4. Run a DSC repository backup.
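The command-line portion of this procedure can be sketched as follows. The backup job name in the last command is hypothetical; substitute the name of the repository backup job you configured in step 1:

```shell
# On the DSC server: export the repository system and backup configurations
dsc export_config -t SYSTEM -n dscnode_Repository_name -f repository_config_system.xml
dsc export_repository_backup_config -f export_repository_backup_config.xml

# Run the DSC repository backup (job name is an example)
dsc run_job -n DSC_Repository_Backup
```

Store both exported XML files off the DSC server so they survive a loss of the repository itself.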
Recovering the DSC Repository
Prerequisite
• A valid DSC repository backup job data set must exist before a DSC restore job can be run.
• The corresponding repository_config_system.xml file for the DSC repository data set must be
present.
• The corresponding export_repository_backup_config.xml file for the DSC repository data set
must be present.
The run_repository_job command allows you to restore the DSC repository. When you restore the
repository, the process completes with a 5269 warning. This is expected behavior.
Note:
If you have lost the BAR database but still have the BARBACKUP database, go directly to the
run_repository_job step to restore the repository.
1. If your DSC repository is lost, run the following commands in BTEQ to drop the BAR and
BARBACKUP databases:
delete database BAR;
drop database BAR;
delete database BARBACKUP;
drop database BARBACKUP;
2. Run $DSA_DSC_ROOT/recoverDSCRepository.sh to recreate the DSC repository.
3. Restart all DSA services to re-establish the connection to the DSC repository.
4. In the directory where DSA is installed (install path/dsa), type dsc import_repository_backup_config -f
export_repository_backup_config.xml to import the initial DSC repository configurations.
Note:
The export_repository_backup_config.xml file was created when you exported your initial
DSC repository configuration.
5. If SSL is not enabled for this system, perform the following. If SSL is enabled for the system, skip to the
next step.
a) Copy the saved repository_config_system.xml file that was exported from the DSC server.
b) Type dsc config_systems -f repository_config_system.xml
c) Type dsc enable_component -n dsc_repository_system_name -t SYSTEM
d) Type dsc enable_component -n dsc_repository_target_group_name -t TARGET_GROUP
e) Restart the ClientHandler.
6. After SSL is enabled for this system, perform the following in the BAR Setup portlet:
a) From the CATEGORIES list, click Systems and Nodes and select the repository.
b) Select the Enable SSL over JMS Communication checkbox and enter the TrustStore password.
c) Click Apply and enter the credentials when prompted.
d) Restart DSMain on the DSC repository system as indicated in the warning message.
e) After DSMain has restarted, click Enable in the BAR Setup portlet.
f) From the CATEGORIES list, click Media Servers, select the media server used for repository
backup, and click Apply.
This activates the media server in the background.
7. Type dsc recover_backup_metadata -n repository_target_group to recover save set
information and recreate the backup job.
8. Type dsc list_recover_backup_metadata -n repository_target_group and verify that
the overall status is COMPLETED.
When the overall status is COMPLETED, you can proceed to the next step.
9. Type dsc run_repository_job followed by the parameters, and press Enter.
• t|type Type: Enter restore to restore the DSC repository.
• v|backup_version BackupVersion: [Optional] Backup version number. Enter latest or 0 for the latest
save set.
• n|name Name: [Optional] The name of the target group for the restore. If not specified, the job
restores to the default target group set in the target group configuration XML file.
10. After the restore repository job is complete, perform a tpareset on the DSC repository database.
Recovering from a Failed or Aborted DSC
Repository Restore Job
1. On the DSC server, restart the Teradata Database.
2. Restart the DSC component on the DSC server.
3. Resolve the issue that caused the job to fail or abort.
4. Reconfigure and re-enable the repository system.
5. Run the DSC repository restore job.
Running DSA Jobs Using Crontab
You can schedule and run DSA jobs from Crontab. However, you must first create a script with specific
information and create a Crontab entry.
1. In your own directory, create a new script with the following information. Make sure the script's
permissions are 755: # chmod 755 <script>
#!/bin/bash
# This will identify where BARCmdline is installed
# If non-TMSB it will also contain the path to JAVA
source /etc/profile.d/barcmdline-profile.sh
# These will identify where JAVA is on TMSB Servers
if [ -f /etc/profile.d/teradata-jdk7.sh ]; then
source /etc/profile.d/teradata-jdk7.sh > /dev/null 2>&1
fi
if [ -f /etc/profile.d/teradata-jre7.sh ]; then
source /etc/profile.d/teradata-jre7.sh > /dev/null 2>&1
fi
# Here specify the specific job you want to run
dsc run_job -n <job_name>
2. Type # crontab -e to edit Crontab.
3. Use Crontab syntax to create an entry at the beginning of the Crontab script.
4. In the Cron entry, run the script that you created in step 1.
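For example, a Crontab entry like the following runs the script every night at 1:30 AM. The script path and log file are hypothetical; use the location where you created your script in step 1:

```
# minute hour day-of-month month day-of-week command
30 1 * * * /home/dscuser/run_dsa_job.sh >> /var/log/dsa_cron.log 2>&1
```

Redirecting output to a log file, as shown, preserves any error messages from the dsc command for later review.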
Scheduling Jobs Using NetBackup
You must initially configure NetBackup policies and schedule policies in the NetBackup Administration
Console before using the BAR Setup portlet or the command-line interface. For example, you must use the
NetBackup Administration Console to add:
• A policy before using the BAR Setup portlet to assign a policy to a storage device
• A schedule policy before using the tdbackex script to schedule a job
NetBackup Policy Configuration
A NetBackup policy defines the backup criteria for a specific group of one or more clients and is required for
any scheduled job.
To use Teradata DSA, define at least one Teradata policy. A configuration can have a single policy that
includes all clients or there can be many policies, some of which include only one client. For DSA, the
standard and recommended setup is to assign one NetBackup policy per device or file target.
Policy Requirements
When creating a policy, specify the following:
• Storage unit and media to use
• Policy Type as Teradata
• Backup schedule
• Script files to run on the clients
• Clients to back up
In addition to the attributes described here, there are other policy attributes to consider. Refer to the
NetBackup System Administrator’s Guide for detailed configuration instructions and information on all
available attributes.
NetBackup Schedule Policy Configuration
A NetBackup schedule policy allows automated scheduling of DSA backup jobs. After you have created a
schedule policy using the NetBackup Administration Console, the schedule policy uses the tdbackex script
that you have created through the DSA command-line interface to initiate the scheduled backup jobs. A
schedule policy is different from the policy that defines devices and data set retention levels used in a backup
job.
A NetBackup schedule policy can be configured to run one or more backup jobs at a designated time, or
multiple schedules can be created for a single policy.
When scheduling full, delta, and cumulative backup jobs, at least three policy schedules must be created: one
policy contains the schedule for the full backup, one policy contains the schedule for the delta backup, and
one policy contains the schedule for the cumulative backup.
With incremental backup jobs, the latest full backup data set must be available to restore a subsequent delta
or cumulative data set. This means that the full backup data set and succeeding delta or cumulative backup
data sets must be retained until the latest delta or cumulative data set no longer needs to be restored and a
new full backup data set is present.
Schedule Policy Requirements
When creating a schedule policy, specify the following:
• Host name server value as the media server on which the DSA BAR command-line software is installed
and configured
• Policy Type as Teradata
• Schedule type as Automatic Backup
• Schedule properties
• Add Client (the client is the media server on which the DSA BAR command-line software is installed)
• Add Backup Selection and enter the job name and backup type (FULL, DELTA, or CUMULATIVE) that
will be executed by the selected schedule policy
DSA Job Migration to a Different Domain
You can migrate images pointing to a specific DSA backup job in one DSA domain to a different DSA
domain. The Administrator user can export backup job metadata (job definition, save sets, and targets) in
XML format using the export_job_metadata command. The exported backup job metadata files,
together with the storage media, can then be sent to the site where the other DSA domain is located. Users
can use the import_job_metadata command on the DSA command line to create the same job and
associated targets and save sets with the exported backup job metadata XML. The import of job metadata is
handled by the other DSA environment in the following order:
1. Virtual systems, media servers, NetBackup servers, and targets are saved into the DSC Repository.
2. The job definition is saved into the DSC repository.
3. Job save sets are saved into the DSC repository.
After the job metadata has been successfully imported using the command import_job_metadata, the
config_target_group_map command should be used to map the exported virtual targets to one of the
physical targets in the new DSA domain. The new backup job that was imported can be used to create a
restore job using the mapped target. The save set metadata on tape of the migrated backup job can also be
validated using the command validate_job_metadata.
You can generate a job plan that uses multiple task-sets based on the streams soft limits set on the source
and target. For best performance, use one-to-one mappings from the tapes or disk storage through the new
target groups and streams passing the data to the AMPs. You can temporarily modify the streams soft limits
and target groups to match on the source and target system.
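As a sketch, the migration sequence described above maps to the following command flow. The job name, directories, map file, and target group name here are examples only; the actual names come from your environments:

```shell
# In the source DSA domain: export the job metadata (definition, targets, save sets)
dsc export_job_metadata -n SDBA_Weekly -d /var/opt/exports

# After the metadata files and storage media reach the destination domain:
dsc import_job_metadata -n SDBA_Weekly -d /mnt/imports

# Map the exported virtual target group to a physical target group
dsc config_target_group_map -f target_map.xml -V

# Optionally validate the migrated save set metadata (NetBackup only)
dsc validate_job_metadata -n SDBA_Weekly -V -d tg_dest
```

The detailed procedures for each of these commands follow in the sections below.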
Restoring to a Different DSA Domain
Prerequisite
For a backup job to be restored to a different DSA domain:
• The backup job must have completed with a status of COMPLETED_SUCCESSFULLY,
COMPLETED_ERRORS, or WARNING in the original DSA configuration.
• NetBackup save sets must not be expired.
• The DSA user ID and user account must be the same in the new domain as in the original domain
because the new domain requires the same permissions to access the files created by the original
domain. The default DSA user ID is 600. The default DSA user account is dscuser.
• Job names must be unique in a DSA domain. For example, importing a job named Prod_test1 into a
domain where a job named Prod_test1 already exists results in an error. However, multiple versions of
the same job name can be imported into different domains.
The following procedure restores backup jobs to a different DSA domain.
1. In the original DSA domain, export the backup job definition and associated targets and save sets using
the export_job_metadata command.
When the export_job_metadata or import_job_metadata command is running, no other
operations or commands can run at the same time. In addition, export_job_metadata and
import_job_metadata cannot run if an operational job or repository job is running.
2. Query the backup IDs.
3. Migrate the exported metadata files.
The exported job metadata files must exist on a DSA Teradata Managed Server for BAR in the
destination DSA environment. If network connectivity exists between the two environments, copy the
files using the file transfer mechanism allowed between the environments and ensure the time/date
stamps are preserved. Alternately, the target environment can perform an NFS mount of the source
disk file system.
4. Migrate the images to the new DSA location.
For NetBackup, you can transfer the backup image data by physical tape transfer or electronic media
transfer:
• Physical tape transfer: Export the NetBackup images using the NetBackup feature for exporting
tapes. The NetBackup save sets must not be expired.
• Electronic media transfer: If NetBackup Auto-Image Replication (AIR) is active between the two
domains, the images are automatically migrated.
For DSU, if network connectivity exists between the two environments, copy the save set files using the
file transfer mechanism allowed between environments and ensure the time/date stamps are preserved.
Alternately, the target environment can perform an NFS mount of the source disk file system.
When a Data Domain unit is used as an NFS server or with DD Boost, replicate the images from source
to target using mtree replication. See Data Domain documentation for instructions on how to set up
mtree replication.
5. In the new DSA domain, import the backup job definition and associated targets and save sets using
the import_job_metadata command.
The new DSA domain requires physical systems, media servers, NetBackup servers, and target groups
to be configured so that you can import data to the new domain.
6. In the new DSA domain, map the virtual target group to a physical target group using the -V option
with the config_target_group_map command.
7. In the new DSA domain, run list_jobs -t migrated to see the migrated jobs.
8. Validate the metadata against the NetBackup data.
9. Run list_components -e true to see available system and target groups.
a) If the system and target groups required for the restore job are not available, run config_system
and config_target_groups to enable them.
10. Create a restore job based on the migrated backup job.
Exporting Job Metadata
To migrate a job to another DSA environment, you must first export the job metadata (job definition,
targets, and save sets).
1. Type dsc export_job_metadata using only the name parameter to export all metadata files for the
job, and press Enter.
Parameters include:
• n|name Name: The name of the job on which to perform the action. The job must be active.
• d|directory DirectoryPath: [Optional] Directory where the files are exported to or imported from.
• v|backup_version BackupVersion: [Optional] Backup version number. Enter latest or 0 for the latest
save set.
• t|type Type: [Optional] The type of job metadata. You may enter JOB (for the job definition),
SAVESET, or TARGET. If no value is specified, all three types of metadata are included.
• u|user_authentication User: Required when security management is enabled. Supplies the
command with the Viewpoint user, and triggers a password prompt for authentication.
Querying NetBackup Backup IDs
NetBackup requires the backup IDs associated with the save set when save set images are being transferred.
To get the IDs, you must query, then list the backup IDs.
Note:
You can use the query_nbu_backupids command in this procedure, or the query_backupids
command to retrieve NetBackup backup IDs.
1. Type dsc query_nbu_backupids -n name.
cbar6:/ # dsc query_nbu_backupids -n SDBA_Weekly
In the example, NetBackup is queried for the backup IDs associated with the SDBA_Weekly save set. The
IDs are stored in the DSC repository.
2. To list the backup IDs, type list_query_nbu_backupids -n name.
cbar6:/ # dsc list_query_nbu_backupids -n SDBA_Weekly
The following example resulted from the query to list backup IDs associated with the SDBA_Weekly save
set.
Data Stream Controller Command Line 15.00.00.03
Command parameters:
-name : SDBA_Weekly
Connected to DSC version 15.00.00.03
Listing query NBU backup ids...
Job Name: SDBA_Weekly
Job Execution Id: 1
Media Server: cbar6
Backup ID          Logical File Name                            File Size  Date
------------------------------------------------------------------------------------------------
cbar6_1413483708   SDBA_Weekly_1_1_file10_data_1413483639839    98.3 MB    10/16/14 11:21:48 AM
cbar6_1413483668   SDBA_Weekly_1_1_file11_data_1413483639839    104 MB     10/16/14 11:21:08 AM
cbar6_1413483720   SDBA_Weekly_1_1_file12_data_1413483639839    102 MB     10/16/14 11:22:00 AM
cbar6_1413483666   SDBA_Weekly_1_1_file13_data_1413483639839    92.4 MB    10/16/14 11:21:06 AM
cbar6_1413483675   SDBA_Weekly_1_1_file14_data_1413483639839    100 MB     10/16/14 11:21:15 AM
cbar6_1413483673   SDBA_Weekly_1_1_file15_data_1413483639839    95.1 MB    10/16/14 11:21:13 AM
…
cbar6_1413483711   SDBA_Weekly_1_1_file95_data_1413483639839    103 MB     10/16/14 11:21:51 AM
cbar6_1413483715   SDBA_Weekly_1_1_file96_data_1413483639839    101 MB     10/16/14 11:21:55 AM
cbar6_1413483709   SDBA_Weekly_1_1_file9_data_1413483639839     98.8 MB    10/16/14 11:21:49 AM
cbar6_1413483647   SDBA_Weekly_1_1_file_dict_1413483639839      88.4 KB    10/16/14 11:20:47 AM
The SDBA_Weekly save set was generated from a 96 stream backup job. There are 97 backup IDs
associated with the save set: 96 data stream files and 1 dictionary stream file.
Querying Backup IDs
The backup ID associated with a save set is required when a save set image is transferred. To get the IDs, you
must query, then list the backup IDs. The query_backupids command does not return any data. You
must run the list_query_backupids command to retrieve the IDs.
You can use the following procedure to retrieve DD Boost, disk file system, and NetBackup backup IDs.
1. Type dsc query_backupids -n name.
2. To list the backup ID, type list_query_backupids -n name.
Importing Job Metadata
Importing the metadata of a backup job (job definition, targets, and save sets) enables you to restore it in a
different DSA environment.
The data must be imported in the following order:
1. Targets
2. Job definition
3. Save sets
1. Type dsc import_job_metadata using only the name parameter to import all metadata files for the
job, and press Enter.
Parameters include:
• n|name Name: The name of the job on which to perform the action. The job must be active.
• d|directory DirectoryPath: [Optional] Directory where the files are exported to or imported from.
• v|backup_version BackupVersion: [Optional] Backup version number. Enter latest or 0 for the latest
save set.
• t|type Type: [Optional] The type of job metadata. You may enter JOB (for the job definition),
SAVESET, or TARGET. If no value is specified, all three types of metadata are included.
• u|user_authentication User: Required when security management is enabled. Supplies the
command with the Viewpoint user, and triggers a password prompt for authentication.
Updating a Target Group Map
To access imported save sets, the target group used in the source DSA domain must be mapped to a target
group in the destination DSA domain.
1. To view the virtual components that are available for target group mapping, type dsc list_components -t system -V.
dsc list_components -t nbu_server -V
2. To map a virtual target group to an existing, physical target group in the destination domain, type dsc config_target_group_map -f file -V.
cbar3:/SDBA_Weekly_imports # dsc config_target_group_map -f cbar6.xml -V
A sample map, sample_target_map.xml, is supplied in the DSC sample library. Target group map
tags are described below.
• master_source_target_name: The backup target group name.
• master_source_dsc_id: The ID of the source DSC repository.
• master_dest_target_name: The restore target group name.
• target_group_maps: A sub target grouping consisting of a backup and restore configuration pair.
• source_mediaserver_name: The backup media server.
• source_policy_class_name: The third party backup application policy associated with the backup
media server.
• dest_mediaserver_name: The restore media server.
• dest_policy_class_name: The third party backup application policy associated with the restore
media server.
3. To list the target group map, type dsc list_target_group_map -V.
cbar3:/SDBA_Weekly_imports # dsc list_target_group_map -V
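A minimal map file built from the tags described above might look like the following sketch. The element nesting and all names here are illustrative only; use the supplied sample_target_map.xml as the authoritative template:

```xml
<!-- Hypothetical target group map: source domain tg_cbar6 maps to destination tg_cbar3 -->
<target_group_map>
  <master_source_target_name>tg_cbar6</master_source_target_name>
  <master_source_dsc_id>100</master_source_dsc_id>
  <master_dest_target_name>tg_cbar3</master_dest_target_name>
  <target_group_maps>
    <source_mediaserver_name>cbar6</source_mediaserver_name>
    <source_policy_class_name>dsa_policy</source_policy_class_name>
    <dest_mediaserver_name>cbar3</dest_mediaserver_name>
    <dest_policy_class_name>dsa_policy</dest_policy_class_name>
  </target_group_maps>
</target_group_map>
```

Each target_group_maps element pairs one backup media server and policy with one restore media server and policy.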
Validating Job Metadata [Deprecated]
Job metadata imported to a new DSA environment should be validated against the original exported
metadata before you attempt to restore the job.
Note:
The validate_job_metadata command only works with NetBackup. To validate a save set for a
remote file system and DD Boost, run an analyze read job.
1. Type dsc validate_job_metadata followed by the parameters, and press Enter.
• n|name Name: The name of the job on which to perform the action. The job must be active.
• v|backup_version BackupVersion: [Optional] Backup version number. Enter latest or 0 for the latest
save set.
• V: [Optional] Indicates that this is a virtual target group.
• d|destination SampDest: Required if the virtual target group (parameter V) is specified. It indicates
the physical target group to which the virtual target group is mapped.
In the following example, SDBA_Weekly is the job name and tg_cbar3 is the destination:
cbar3:/SDBA_Weekly_imports # dsc validate_job_metadata -n SDBA_Weekly -V -d tg_cbar3
2. Type dsc list_validate_job_metadata followed by the parameters, and press Enter.
• n|name Name: The name of the job on which to perform the action. The job must be active.
• v|backup_version BackupVersion: [Optional] Backup version number. Enter latest or 0 for the latest
save set.
In the following example, SDBA_Weekly is the job name:
cbar3:/SDBA_Weekly_imports # dsc list_validate_job_metadata -n SDBA_Weekly
3. To validate the images are available in the target environment, choose one of the following actions.
• NetBackup: Type bplist -l -t 26 -C client -R /*JOB_NAME*/*.
• Disk File System: Perform an ls of the job name on the NFS mounted storage.
• Data Domain: Check the status of the mtree replication on the Data Domain unit. See Data
Domain documentation for more information.
Restoring a Migrated Job
When creating a restore job from the migrated save set, the restore job must specify the target group that
was mapped in the source and destination domains.
If the save set is being restored to a database system that is different from the source, set the run_as_copy
option to true. The database object must exist on the target system with perm space allocated before the
copy job runs.
1. Type dsc create_job followed by the parameters, and press Enter.
• n|name Name: The name of the job on which to perform the action. Must be unique for each job.
• d|description Description: [Optional] A meaningful description of the job. To allow a multi-word
description, add \" before and after the description string. \"A description of Job 1\"
• t|type Type: restore
• run_as_copy: true
• b|backup_name BackupName: [Optional] An existing backup job name. For restore, analyze_read,
and analyze_validate jobs only.
• v|backup_version BackupVersion: [Optional] Backup version number. For restore, analyze_read,
and analyze_validate jobs only. Enter latest or 0 for the latest save set.
• f|file File: The full file path and file name of the file containing the necessary parameters to create
the job. If the same parameters are provided both in the file and on the command line, Teradata
DSA uses the values specified in the command line.
• u|user_authentication User: Required when security management is enabled. Supplies the
command with the Viewpoint user, and triggers a password prompt for authentication.
2. Type the user name and password, and press Enter.
3. Type dsc run_job and press Enter.
Restoring a Migrated Job
The following example restores a migrated job called SDBA_Weekly.
Note:
True is not an available value for the disable_fallback xml command. Enter the value as follows:
<disable_fallback>false</disable_fallback>.
Note:
If the XML file has the commands <run_as_copy>false</run_as_copy> and
<disable_fallback>true</disable_fallback>, the system displays an error when the job is run.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dscCreateJob xmlns="http://schemas.teradata.com/v2012/DSC">
<job_instance>
<job_name>rstr_air_job</job_name>
<job_description></job_description>
<job_type>RESTORE</job_type>
<job_owner>dscuser</job_owner>
<job_state>ACTIVE</job_state>
<auto_retire>false</auto_retire>
<backup_name>SDBA_Weekly</backup_name>
<backup_version>0</backup_version>
<all_backup_objects>true</all_backup_objects>
</job_instance>
<source_media>tg_cbar3</source_media>
<target_tdpid>recall</target_tdpid>
<job_options>
<enable_temperature_override>false</enable_temperature_override>
<temperature_override>DEFAULT</temperature_override>
<block_level_compression>DEFAULT</block_level_compression>
<disable_fallback>false</disable_fallback>
<query_band></query_band>
<dsmain_logging_level>Error</dsmain_logging_level>
<nowait>true</nowait>
<reblock>false</reblock>
<run_as_copy>true</run_as_copy>
</job_options>
</dscCreateJob>
APPENDIX B
Rules for Restoring and Copying Objects
Selective Backup and Restore Rules
The table below lists individual object types, whether they can be restored or copied, and under what
conditions. When performing a selective backup and restore on a table or database, observe the following
rules.
• Methods and UDTs are stored in SYSUDTLIB; SYSUDTLIB is only backed up or restored as part of a
DBC backup.
• Although an operation might be allowed, there is no guarantee the object will function as desired after
the operation is complete.
Object Type                     Restore to     Copy to        Copy and Rename  Copy to          Copy and Rename
                                Same DBS 1     Same DBS       to Same DBS      Different DBS    to Different DBS
                                DB     Object  DB     Object  DB     Object    DB     Object    DB     Object
                                Level  Level   Level  Level   Level  Level     Level  Level     Level  Level
Aggregate Function              Yes    Yes     Yes    Yes     Yes    No        Yes    Yes       Yes    No
JAR                             Yes    Yes     Yes    Yes     Yes    No        Yes    Yes       Yes    No
External Stored Procedures      Yes    Yes     Yes    Yes     Yes    No        Yes    Yes       Yes    No
Standard Function               Yes    Yes     Yes    Yes     Yes    No        Yes    Yes       Yes    No
Trigger                         Yes    Yes     No     No      No     No        No     No        No     No
Instance or Constructor Method  N/A    N/A     N/A    N/A     N/A    N/A       N/A    N/A       N/A    N/A
Join Index                      Yes    Yes     Yes    Yes     No     No        Yes    Yes       No     No
Macro                           Yes    Yes     Yes    Yes     Yes    No        Yes    Yes       Yes    No
Hash Index                      Yes    Yes     Yes    Yes     No     No        Yes    Yes       No     No
Stored Procedure                Yes    Yes     Yes    Yes     Yes    No        Yes    Yes       Yes    No
Queue Table                     Yes    Yes     Yes    Yes     Yes    Yes       Yes    Yes       Yes    Yes
Table Function                  Yes    Yes     Yes    Yes     Yes    No        Yes    Yes       Yes    No
Table Function (Operator)       Yes    Yes     Yes    Yes     Yes    No        Yes    Yes       Yes    No
Table                           Yes    Yes     Yes    Yes     Yes    Yes       Yes    Yes       Yes    Yes
PPI Table                       Yes    Yes     Yes    Yes     Yes    Yes       Yes    Yes       Yes    Yes
NOPI Table                      Yes    Yes     Yes    Yes     Yes    Yes       Yes    Yes       Yes    Yes
User-Defined Data Type (UDT)    N/A    N/A     N/A    N/A     N/A    N/A       N/A    N/A       N/A    N/A
View                            Yes    Yes     Yes    Yes     No     Yes       Yes    Yes       No     Yes
Authorization                   Yes    Yes     Yes    Yes     Yes    No        Yes    Yes       Yes    No
Parser Contract Function        Yes    Yes     Yes    Yes     Yes    No        Yes    Yes       Yes    No
System Join Index 2             Yes    Yes     Yes    Yes     Yes    Yes       Yes    Yes       Yes    Yes
SQL-H 3,4                       Yes    Yes     Yes    Yes     No     No        Yes    Yes 4     No     No
User Installed File (UIF)       Yes    Yes     Yes    Yes     Yes    No        Yes    Yes       Yes    No
1 DBS indicates a Teradata Database system.
2 System Join Indexes (SJIs) can be copied to a different database and the database name can be changed but the object name cannot be
changed.
3 All dependent objects used by the SQL-H object must exist prior to the copy.
4 To restore a backup job that contains a SQL-H object to a different database, the backup job source database must be the TD_SERVER_DB
database.
Column Legend
Restore to Same DBS
Restore back to the same database name, same object name, and same Teradata Database system
Copy to Same DBS
Copy back to the same database name, same object name, and same Teradata Database system
Copy and Rename to Same DBS
Map to the same Teradata Database system and change either the database name, object name, or both
Copy to Different DBS
Copy back to the same database name and same object name in a different Teradata Database system
Copy and Rename to Different DBS
Map to a different Teradata Database system and change either the database name, object name, or both
APPENDIX C
XML Values
Values for XML Elements
Many DSA commands read an XML file that specifies setup information or job details by specifying values
for XML elements.
The following table indicates accepted values, case sensitivity, and character number restrictions for XML
element values. These are in accordance with typical XML standards for Teradata Database.
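As an illustration of how such constraints might be checked before submitting an XML file to DSA, the following Python sketch encodes a handful of the rules from the table below. The element selection and function name are mine, not part of any DSA tooling:

```python
# Sketch of client-side checks for a few DSA XML element constraints.
# Each rule mirrors a requirement documented in the table below.
RULES = {
    "job_name": lambda v: 1 <= len(v) <= 128,                            # maximum of 128 characters
    "auto_retire": lambda v: v in ("true", "false"),                     # lowercase only
    "block_level_compression": lambda v: v in ("DEFAULT", "ON", "OFF"),  # uppercase only
    "frequency_value": lambda v: v.isdigit() and 1 <= int(v) <= 999,     # 1-999
    "port": lambda v: v.isdigit() and 1 <= int(v) <= 65535,              # 1-65535, max 5 digits
}

def check(element, value):
    """Return True if value satisfies the documented constraint for element."""
    return RULES[element](value)
```

A pre-flight check like this catches case errors (for example, TRUE instead of true) before a job definition is rejected by the dsc command.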
XML Element
Requirements
bar_media_server, policy_class_name, saveset_accountid, saveset_password, saveset_user,
source_media, target_entity, target_media, target_name, target_tdpid
Maximum of 32 characters
media_server_name, netbackup_media_server, target_group_name
Maximum of 32 characters, alphanumeric and "-", "_", and "."
The first character of the name can be alphanumeric (a-z, A-Z, and
0-9) only.
auto_retire
Accepted values: true, false. Must be entered in lowercase letters.
barnc_logging_level
Accepted values: Error, Info, Debug, Warning.
blob_container
Maximum of 63 characters. Must be entered in lowercase letters.
block_level_compression
Accepted values: DEFAULT, ON, OFF. Must be entered in uppercase
letters.
database_name
Maximum of 50 characters
database_query_method
Accepted values: BASE_VIEW, EXTENDED_VIEW. Must be entered
in uppercase letters.
data_phase
Accepted values: DATA, DICTIONARY. Must be entered in uppercase
letters.
disable_fallback
Accepted value: false. Must be entered in lowercase letters.
Note:
True is not an available value for the disable_fallback xml
command. Enter the value as follows: <disable_fallback>false</disable_fallback>.
dsc_logging_level
Accepted values: Error, Info, Debug, Warning.
dsmain_logging_level
Accepted values: Error, Info, Debug, Warning.
enable_temperature_override
Accepted values: true, false. Must be entered in lowercase letters.
frequency_units
Accepted values: DAYS, WEEKS. Must be entered in uppercase letters.
frequency_value
Value must be between 1-999. Defaults to 7.
ip_address
Maximum of 64 characters, alphanumeric plus ".", ":"
is_delete_after
Accepted values: Y/N. Must be entered as an uppercase letter.
is_enabled
Accepted values: true, false. Must be entered in lowercase letters.
is_encrypted
Accepted values: Y/N. Must be entered as an uppercase letter.
job_description
Maximum of 256 characters
job_name
Maximum of 128 characters
job_type
Accepted values: BACKUP, RESTORE, ANALYZE_VALIDATE,
ANALYZE_READ. Must be entered in uppercase letters.
job_owner [Deprecated]
Maximum of 64 characters
netmask
Maximum of 64 characters, alphanumeric plus ".", ":"
node_name
Maximum of 32 characters, alphanumeric and "-", "_", and "."
The first character of the name must be alphanumeric (a-z, A-Z, 0-9).
nosync
Accepted values: true, false. Must be entered in lowercase letters.
nowait
Accepted values: true, false. Must be entered in lowercase letters.
object_name
Maximum of 128 characters
object_type
Must be entered in uppercase letters. Accepted values are:
AGGREGATE_FUNCTION, AUTHORIZATION,
COMBINED_AGGREGATE_FUNCTIONS,
CONTRACT_FUNCTION, DATABASE,
EXTERNAL_PROCEDURE, GLOP_SET, HASH_INDEX,
INSTANCE_OR_CONSTRUCTOR_METHOD, JAR, JOIN_INDEX,
JOURNAL, MACRO, NO_PI_TABLE,
ORDERED_ANALYTICAL_FUNCTION, QUEUE_TABLE,
SERVER_OBJECT, STANDARD_FUNCTION,
STATISTICAL_FUNCTION, STORED_PROCEDURE, TABLE,
TABLE_FUNCTION, TABLE_OPERATOR, TRIGGER, USER,
USER_DEFINED_METHOD, USER_DEFINED_DATA_TYPE,
USER_INSTALLED_FILE, VIEW
online
Accepted values: true, false. Must be entered in lowercase letters.
parent_name
Maximum of 128 characters
parent_type
Accepted values: DATABASE, USER. Must be entered in uppercase
letters.
partition_name
Maximum of 63 characters
policy_class_name
Maximum of 128 characters, alphanumeric and "-", "_"
port
Maximum of 5 digits. Must be between 1 and 65535
port_number
Port number range 1-2147483647
prefix_name
Maximum of 256 characters
queen_node
Maximum of 256 characters
query_band
Maximum of 2,048 characters
reset_node_limit
Accepted values: true, false. Must be entered in lowercase letters.
retire_units
Accepted values: DAYS, WEEKS. Must be entered in uppercase letters.
run_as_copy
Accepted values: true, false. Must be entered in lowercase letters.
schema_name
Maximum of 63 characters
skip_archive
Accepted values: true, false. Must be entered in lowercase letters.
skip_force_full
Accepted values: true, false. Must be entered in lowercase letters.
start_am_pm
Accepted values: AM, PM. Must be entered in uppercase letters.
start_time
Maximum of 5 characters. Values must be between 1:00 and 12:00
(AM and PM are entered in the start_am_pm element).
storage_account
Maximum of 24 characters. Must be entered in lowercase letters.
storage_devices
Maximum of 3 characters, numeric range between 1-999
storage_type
Accepted values: cool, hot. Must be entered in lowercase letters.
system_name
Maximum of 32 characters, alphanumeric and "-", "_", and "."
If the first character of the system name is "-", for example -test,
enter the system name at the Enter the System Name prompt.
table_name
Maximum of 63 characters
target_accountid
Maximum of 30 characters
temperature_override
Accepted values: DEFAULT, HOT, WARM, COLD. Must be entered
in uppercase letters.
third_party_media_type
Accepted value: net_backup. Must be entered in lowercase letters.
third_party_server_name
Maximum of 30 characters, alphanumeric and "-", "_"
threshold_units
Accepted values: MB, GB
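As a quick illustration of the constraints in this table, the fragment below pairs several of the elements with one value that satisfies the stated rule. It is not a complete or valid DSA input file; these elements belong to different job and configuration XML files, and the job name shown is hypothetical.

```xml
<!-- Illustrative values only; elements belong to different DSA XML files. -->
<job_name>daily_sales_backup</job_name>  <!-- hypothetical name, max 128 characters -->
<job_type>BACKUP</job_type>              <!-- enum, uppercase -->
<data_phase>DATA</data_phase>            <!-- DATA or DICTIONARY, uppercase -->
<online>true</online>                    <!-- true/false, lowercase -->
<is_encrypted>Y</is_encrypted>           <!-- Y/N, uppercase -->
<port>1025</port>                        <!-- 1-65535 -->
<start_time>11:30</start_time>           <!-- 1:00-12:00 -->
<start_am_pm>PM</start_am_pm>            <!-- AM or PM, uppercase -->
<threshold_units>MB</threshold_units>    <!-- MB or GB -->
```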
APPENDIX D
Teradata DSA Commands
About Using DSA Commands
There are two major types of DSA commands:
• Configuration commands for configuring systems, servers, target groups, and the DSC repository
• Management and reporting commands for creating, executing, updating, and monitoring jobs
Use a DSA command by typing it at the command line, followed by any necessary parameters. Because there
are complex mappings between BAR components, the configuration commands read their attributes
from an XML file, so each configuration command must specify the XML file name and location. Some of the
management and reporting commands also accept XML files in which you can specify additional attributes
that cannot be entered as parameters in the command line.
The reference topic for each DSA command provides the command's purpose, syntax, an example of what
you might enter, a list of necessary and optional parameters, and a sample XML file, if the command
requires one. Note the following conventions for using the command line:
• In the Syntax description, the word following a parameter is a placeholder that indicates that you must
enter a value for the parameter. For example, where the syntax says Type, you might enter backup to
specify a backup job for the job type parameter in the command line.
• Unless a parameter is listed as [Optional], you must specify a value for it. For commands that support
XML files, you may specify values for parameters in the XML rather than directly in the command line. If
you specify different parameter values in both the command line and in the XML, the values in the
command line are used.
• Every parameter can be typed as a word or as a single character. A vertical bar indicates that you can use
either format. For example, n|name means that you can type name or n to enter the name parameter.
• Always type a hyphen before each parameter that you enter. For example, after the command name, type
-file or -f to specify the file parameter.
• The value for the file parameter must specify the filename and location of the input or output XML file.
• The Example section shows literal examples of what you might enter at the command line prompt. For
example: dsc.sh config_systems -f configSystem.xml
• The XML File Example sections show samples of XML input files that certain commands require. Some
commands export information as output to an XML file. Output file examples are not shown in this
section.
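Putting these conventions together, the two invocations below are equivalent; each parameter is preceded by a hyphen, and the word and single-character forms are interchangeable. The job name job1 is a placeholder.

```
dsc abort_job -name job1 -Skip_prompt
dsc abort_job -n job1 -S
```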
abort_job
Purpose
The abort_job command aborts an actively running job, a job in the queue, or a job that is not
responding.
Syntax
abort_job -n|-name Name [parameters]
Examples
dsc abort_job -n job1 -S
dsc abort_job -B -S
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
O|override
[Optional] Can only be used for backup, restore, and analyze_validate jobs. A Teradata job
must be in the aborting phase.
Note:
Overriding the abort_job only changes the status in the repository and does not affect the
database.
Note:
You should only use the override parameter when a job hangs, so that DSC resources can
be freed to run other jobs.
S|Skip_prompt SkipPrompt
[Optional] Skips displaying a confirmation message before performing the command action.
B|repository_backup Repository Backup
[Optional] Aborts any running DSC repository job. The name of the job is not required
when this parameter is used.
u|user_authentication user
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
There are two phases to aborting a job: aborting and aborted. The aborting phase releases media devices to
the pool. The aborted phase releases node stream limits.
If the abort_job command is given for a job in queue to run, the job is removed from the queue. If the job
is running when the request is sent to abort the job, it can take several minutes for the job to completely
stop.
XML File Example
This command does not support an XML file.
activate_job
Purpose
The activate_job command activates a retired job, so that it is available.
Syntax
activate_job -n|-name Name [parameters]
Example
dsc activate_job -n job1 -S
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: job1
S|Skip_prompt SkipPrompt
[Optional] Skips displaying a confirmation message before performing the command action.
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
The activate_job command sets the state of a retired job to active, which allows you to run the job.
You cannot activate a job that is already in the active state.
XML File Example
This command does not support an XML file.
config_aws
Purpose
The config_aws command configures the Amazon S3 Server in the DSA repository based on parameter
files.
Syntax
config_aws -f|-file FILE
Example
dsc config_aws -f file1.xml
Parameters
f|file File
The full path and name of the file containing the necessary configuration parameters.
Teradata Data Stream Architecture (DSA) User Guide, Release 15.11
127
Appendix D: Teradata DSA Commands
config_aws
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
You will need to enter an access_id and access_key for the AWS S3 application. See http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey. For example:
dsc config_aws -f aws.xml
Data Stream Controller Command Line 15.11.02.00
Command parameters:
-file : aws.xml
Enter access id for account "acctName".
AKIAJYGWLNCJDDZ35AXQ
Enter access key for account "acctName".
*******
Connected to DSC version 15.11.02.00
Saving AWS Backup Application configuration...
XML File Example
A representative XML file is shown below. Note the following:
• See http://docs.aws.amazon.com/AmazonS3/latest/UG/s3-ug.pdf for S3 usage information.
• The prefix_name should be followed by "/" to be used as a folder.
<?xml version="1.0" encoding="UTF-8"?>
<dscConfigAmzS3 dscVersion="" xmlns="http://schemas.teradata.com/v2012/DSC" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://schemas.teradata.com/v2012/DSC DSC.xsd ">
  <config_aws_list>
    <!-- 'account_name' - Required, max characters 32, alphanumeric, first character must be alphanumeric -->
    <account_name>acctName</account_name>
    <buckets_by_region>
      <!-- 'region' - Required, must be a region enum -->
      <region>us-east-1</region>
      <buckets>
        <!-- 'bucket_name' - Required, max characters 512, alphanumeric -->
        <bucket_name>bucketName</bucket_name>
        <prefix_list>
          <!-- 'prefix_name' - Required, max characters 256, alphanumeric -->
          <prefix_name>prefix</prefix_name>
          <!-- 'storage_devices' - Required, Integer, min value is 1 -->
          <storage_devices>1</storage_devices>
        </prefix_list>
      </buckets>
    </buckets_by_region>
  </config_aws_list>
</dscConfigAmzS3>
config_azure
Purpose
The config_azure command configures the Microsoft Azure Server in the DSA repository based on
parameter files.
Syntax
config_azure -f|-file FILE
Example
dsc config_azure -f file1.xml
Parameters
f|file File
The full path and name of the file containing the necessary configuration parameters.
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
You will need to enter an access_key for the Azure storage account. See https://docs.microsoft.com/en-us/azure/storage/storage-create-storage-account.
XML File Example
A representative XML file is shown below. Note that the prefix_name should be followed by "/" to be used as
a folder.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dscConfigAzureBlobStorage xmlns="http://schemas.teradata.com/v2012/DSC">
<config_azure_blob_storage>
<!-- 'Storage account' - Required, max length 24, lowercase -->
<storage_account>azurerbuda</storage_account>
<!-- 'Storage account enumeration ' - Required, valid values: cool, hot -->
<storage_type>cool</storage_type>
<blobs>
<!--'Blob container name' - Required, max length 63, lowercase, at least one -->
<blob_container>udaesblob01</blob_container>
<prefix_list>
<!-- 'Prefix name' - Required, max length 256, at least one -->
<prefix_name>br186001-1</prefix_name>
<storage_devices>2</storage_devices>
</prefix_list>
</blobs>
<blobs>
<blob_container>udaesblob02</blob_container>
<prefix_list>
<prefix_name>br186001-2</prefix_name>
<storage_devices>2</storage_devices>
</prefix_list>
</blobs>
</config_azure_blob_storage>
</dscConfigAzureBlobStorage>
config_dd_boost
Purpose
The config_dd_boost command configures the DD Boost server for the DSA repository.
Syntax
config_dd_boost -f|-file FILE
Example
dsc config_dd_boost -f configddboost.xml
Parameters
f|file File
The full path and name of the file containing the necessary configuration parameters.
Example: configDiskFileSystem.xml
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
When you use the config_dd_boost command, you will be prompted for the DD boost user name and
password. For example:
dsc config_dd_boost -f ..\samples\sample_config_dd_boost.xml
Data Stream Controller Command Line 15.10.00.00
Command parameters:
-file : ..\samples\sample_config_dd_boost.xml
Enter DD Boost user name:
dd_user
Enter DD Boost user password:
Connected to DSC version 15.10.00.00
Saving dd_boost server...
Saving dd_boost server name: dddomain1
Saving dd_boost server with server id: 1
XML File Example
The sample XML file below contains the required parameters to configure a DD Boost server.
<dscConfigDDBoost dscVersion="" xmlns="http://schemas.teradata.com/v2012/DSC" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://schemas.teradata.com/v2012/DSC DSC.xsd ">
  <!-- 'third_party_server_name' - Required, max characters 30, alphanumeric and "-","_", first character must be alphanumeric -->
  <third_party_server_name>dddomain1</third_party_server_name>
  <!-- 'ip_address' - Required, max characters 50, alphanumeric and "-","_",".",":" -->
  <ip_address>129.1.1.1</ip_address>
  <!-- 'third_party_media_type' - Required, accepted values: dd_boost, net_backup -->
  <third_party_media_type>dd_boost</third_party_media_type>
  <!-- 'storage_units' - Required (at least one) -->
  <storage_units>
    <!-- 'storage_unit_name' - Required, max characters 50, alphanumeric and "-","_" -->
    <storage_unit_name>storage_unit1</storage_unit_name>
    <!-- 'max_files' - Required, Integer, min value is 1 -->
    <max_files>4</max_files>
  </storage_units>
  <storage_units>
    <storage_unit_name>storage_unit2</storage_unit_name>
    <max_files>5</max_files>
  </storage_units>
</dscConfigDDBoost>
config_disk_file_system
Purpose
The config_disk_file_system command configures the disk file system in the DSA repository.
Syntax
config_disk_file_system -f|-file FILE
Example
dsc config_disk_file_system -f configDiskFileSystem.xml
Parameters
f|file File
The full path and name of the file containing the necessary configuration parameters.
Example: configDiskFileSystem.xml
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
File system names must be unique, fully qualified path names that begin with a forward slash, for
example, /dev/mnt1.
File system names cannot differ by case alone. For example, both /dev/mnt1 and /dev/Mnt1 cannot be
configured.
XML File Example
The sample XML file below contains the required parameters to configure the disk file system. The
file_system parameter should use an absolute path, for example, /tmp/dsu_mnt.
<?xml version="1.0" encoding="UTF-8"?>
<dscConfigDiskFileSystem xmlns="http://schemas.teradata.com/v2012/DSC" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://schemas.teradata.com/v2012/DSC DSC.xsd ">
  <file_system_list>
    <!-- 'file_system' - Required, max characters 4096, use absolute path -->
    <file_system>/tmp/barms1-1</file_system>
    <!-- 'max_files' - Required, Linux max file descriptor, use max number of streams for data domain system -->
    <max_files>200</max_files>
  </file_system_list>
  <file_system_list>
    <!-- 'file_system' - Required, max characters 4096, use absolute path -->
    <file_system>/tmp/barms1-2</file_system>
    <!-- 'max_files' - Required, Linux max file descriptor, use max number of streams for data domain system -->
    <max_files>6</max_files>
  </file_system_list>
  <file_system_list>
    <!-- 'file_system' - Required, max characters 4096, use absolute path -->
    <file_system>/tmp/barms1-3</file_system>
    <!-- 'max_files' - Required, Linux max file descriptor, use max number of streams for data domain system -->
    <max_files>6</max_files>
  </file_system_list>
  <file_system_list>
    <!-- 'file_system' - Required, max characters 4096, use absolute path -->
    <file_system>/tmp/barms2-1</file_system>
    <!-- 'max_files' - Required, Linux max file descriptor, use max number of streams for data domain system -->
    <max_files>200</max_files>
  </file_system_list>
  <file_system_list>
    <!-- 'file_system' - Required, max characters 4096, use absolute path -->
    <file_system>/tmp/barms2-2</file_system>
    <!-- 'max_files' - Required, Linux max file descriptor, use max number of streams for data domain system -->
    <max_files>6</max_files>
  </file_system_list>
  <file_system_list>
    <!-- 'file_system' - Required, max characters 4096, use absolute path -->
    <file_system>/tmp/barms3-1</file_system>
    <!-- 'max_files' - Required, Linux max file descriptor, use max number of streams for data domain system -->
    <max_files>4</max_files>
  </file_system_list>
</dscConfigDiskFileSystem>
config_general
Purpose
The config_general command configures the general settings, based on the information contained in the
parameters XML file.
Syntax
config_general -f|-file File
Example
dsc config_general -f configGeneral.xml
Parameters
f|file File
The full path and name of the file containing the necessary configuration parameters.
Example: configGeneral.xml
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
Use the config_general command to change the default general configuration values.
XML File Example
The sample XML file below specifies general configuration settings.
<?xml version="1.0" encoding="utf-8" ?>
<dscConfigGeneral dscVersion="dscVersion1" xmlns="http://schemas.teradata.com/
v2012/DSC">
<!-- 'warning_threshold' - Required, integer value. -->
<warning_threshold>10</warning_threshold>
<!-- 'threshold_units' - Required, accepted values: MB/GB -->
<threshold_units>MB</threshold_units>
<!-- 'is_encrypted' - Required, accepted values: Y/N -->
<is_encrypted>Y</is_encrypted>
<!-- 'dsc_logging_level' - Required, accepted values:
Error,Info,Debug,Warning -->
<dsc_logging_level>Error</dsc_logging_level>
<!-- 'barnc_logging_level' - Required, accepted values:
Error,Info,Debug,Warning -->
<barnc_logging_level>Error</barnc_logging_level>
<!-- 'delete_job_time' - Required, integer value. -->
<delete_job_time>40</delete_job_time>
<!-- 'is_delete_after' - Required, accepted values: Y/N -->
<is_delete_after>Y</is_delete_after>
</dscConfigGeneral>
config_media_servers
Purpose
The config_media_servers command configures the BAR media servers.
Syntax
config_media_servers -f|-file File
Example
dsc config_media_servers -f configMediaServers.xml
Parameters
f|file File
The full path and name of the file containing the necessary configuration parameters.
Example: configMediaServers.xml
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
You must issue this command before creating a target group.
The media_server_name can only use the following characters: 'A-Z', 'a-z', '0-9', '-', '_', and '.'
XML File Example
The sample XML file below specifies parameters to configure BAR media servers.
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<dscConfigMediaServers dscVersion="dscVersion1" xmlns="http://schemas.teradata.com/v2012/DSC" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="DSC.xsd">
  <!-- 'media_server_name' - Required, max characters 32, alphanumeric plus "-","_",".", first character can only be alphanumeric -->
  <media_server_name>media_server71</media_server_name>
  <!-- 'port' - Required, max characters 5, value range 1-65535 -->
  <port>2410</port>
  <ip_info>
    <!-- 'ip_address' - Required (at least one), only IPV4 and IPV6 standards supported. No DNS support. -->
    <ip_address>111.111.111.111</ip_address>
    <!-- 'netmask' - Required (at least one), only IPV4 and IPV6 standards supported. No DNS support. -->
    <netmask>255.255.255.0</netmask>
  </ip_info>
  <ip_info>
    <!-- 'ip_address' - Required (at least one), only IPV4 and IPV6 standards supported. No DNS support. -->
    <ip_address>222.222.222.222</ip_address>
    <!-- 'netmask' - Required (at least one), only IPV4 and IPV6 standards supported. No DNS support. -->
    <netmask>255.255.255.0</netmask>
  </ip_info>
</dscConfigMediaServers>
config_nbu
Purpose
The config_nbu command configures a DSA system to use Symantec NetBackup third-party software to
back up and restore data.
Syntax
config_nbu -f|-file File
Example
dsc config_nbu -f configNbu.xml
Parameters
f|file File
The full path and name of the file containing the necessary configuration parameters.
Example: nbu_config.xml
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
The policies you define are case-sensitive and must match the policies defined in NetBackup.
The component name can only use the following characters: 'A-Z', 'a-z', '0-9' and '_'.
XML File Example
The sample XML file below specifies parameters to configure NetBackup servers.
<?xml version="1.0" encoding="UTF-8" ?>
<dscConfigNbu xmlns="http://schemas.teradata.com/v2012/DSC" xmlns:xsi="http://
www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://
schemas.teradata.com/v2012/DSC DSC.xsd">
<!-- 'third_party_server_name' Required, max characters 30, alphanumeric and
"-","_"
-->
<third_party_server_name>remoteNbu1</third_party_server_name>
<!-- 'ip_address' - Required, max characters 50, alphanumeric and
"-","_",".",":" -->
<ip_address>99.23.23.555</ip_address>
<!-- 'third_party_media_type' - Required, accepted value: net_backup
-->
<third_party_media_type>net_backup</third_party_media_type>
<!-- 'policy_class' - Required (at least one) -->
<policy_class>
<!-- 'policy_class_name' - Required, max characters 128, alphanumeric and
"-","_" -->
<policy_class_name>policy711</policy_class_name>
<!-- 'storage_devices' - Required, max characters 3, numeric range between
1-999 -->
<storage_devices>200</storage_devices>
</policy_class>
<policy_class>
<policy_class_name>policy712</policy_class_name>
<storage_devices>100</storage_devices>
</policy_class>
<policy_class>
<policy_class_name>policy713</policy_class_name>
<storage_devices>100</storage_devices>
</policy_class>
<policy_class>
<policy_class_name>policy721</policy_class_name>
<storage_devices>100</storage_devices>
</policy_class>
<policy_class>
<policy_class_name>policy722</policy_class_name>
<storage_devices>100</storage_devices>
</policy_class>
</dscConfigNbu>
config_repository_backup
Purpose
The config_repository_backup command provides the configuration information to back up the DSC
repository.
Syntax
config_repository_backup -f|-file File
Example
dsc config_repository_backup -f configRepositoryBackup.xml
Parameters
f|file File
The full path and name of the file containing the necessary configuration parameters.
Example: configRepositoryBackup.xml
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
The DSC repository contains all BAR information, including configuration and job definitions. It should be
backed up regularly to enable system recovery.
XML File Example
A sample XML file containing configuration and scheduling information for a repository backup is shown
below.
<dscRepositoryBackup dscVersion="dscVersion1" xmlns="http://
schemas.teradata.com/v2012/DSC">
<!-- 'target_name' - Required, max 32 characters -->
<target_name>SampleBackupRepoTargetGroup</target_name>
<!-- 'frequency_value' - Required, Value between 1-4, Defaults to 1 -->
<frequency_value>1</frequency_value>
<!-- 'day_selection' - Required, accepted values: Su, Mo, Tu, We, Th,
Fr, Sa -->
<day_selection>Sa,Su</day_selection>
<!-- 'start_time' - Required, Max characters 5, Values 1:00-12:00 -->
<start_time>12:00</start_time>
<!-- 'start_am_pm' - Required, accepted values: AM/PM -->
<start_am_pm>AM</start_am_pm>
</dscRepositoryBackup>
config_systems
Purpose
The config_systems command configures the DSC settings for the Teradata system and nodes used for
backup and restore jobs. The command also sets the selector in the targeted system for ActiveMQ.
Syntax
config_systems -f|-file File -s|-skip_system_config SkipSystemConfiguration
Example
dsc config_systems -f configSystem.xml -s system
Parameters
f|file File
The full path and name of the file containing the necessary configuration parameters.
Example: configSystem.xml
s|skip_system_config SkipSystemConfiguration
[Optional] Flag indicating whether the systems and node configuration or selector setting is
skipped when you run the command. If this option is not specified, then both parts of the
configuration (system and selector) are run. Enter one of the following:
• system to avoid configuring Teradata systems and nodes
• selector to avoid setting the selector in the targeted system for ActiveMQ
Note:
If you choose the selector option with this parameter, restart Teradata Database using a
DSMain restart.
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
Component names
The component name can only use the following characters: 'A-Z', 'a-z', '0-9' and '_'. The first character of the component name cannot be '_'.
Hot Standby Nodes
Hot standby nodes should be included when configuring a system.
IP Addresses
IP addresses can be changed whether or not nodes are in use. If IP addresses are not entered and IP addresses for the node exist in the repository, there is no change and nothing is deleted. IP addresses need to be unique. There are two validations:
• IP addresses in pass-in nodes need to be unique.
• If the IP address has been used by a different node in the repository, is in the list of pass-in nodes, and has a new IP address, it is valid. If the node is not in the list of pass-in nodes and the request comes from a portlet, it is rejected, as the same IP address cannot be used for multiple nodes. If the request comes from the command line, it is valid, as the node in the repository is deleted.
Allowing Incremental Jobs Based on Full or Cumulative Backup Jobs Completed with Errors
By setting the skipForceFull option in the XML file, you can run an incremental backup job that is based on a prior full or cumulative backup job that completed with errors.
• If you set skipForceFull to false and the full or cumulative backup job fails, aborts, or completes with errors, the next time the job is run, the job is forced to be a full backup.
• If you set skipForceFull to true and the full or cumulative backup job fails or aborts, the next time the job is run, the job is forced to be a full backup. Jobs that complete with errors are handled the same as jobs that complete successfully or with a warning.
By default, the skipForceFull option is false. The skipForceFull option is set system-wide; you cannot use it on a single-job basis.
System Configuration Attributes
The following XML file example shows some system configuration attributes. System configuration is subject to the following restrictions and requirements.
Setting reset_node_limit to true overwrites node limits with the system's hard and soft limits.
Soft and hard limits for insertion of new nodes:
• If node limits are specified, they are used.
• If node limits are not specified, the system limits are used.
• The soft limits cannot be greater than the hard limits.
Soft and hard limits when updating existing nodes:
• If the node limit is specified, the node limit is used.
• If a node limit is not specified and the specified system limit matches the system limit in the repository, the existing node limit is kept.
• If a node limit is not specified and the specified system limit differs from the system limit in the repository, the system limit is used.
XML File Example
A representative XML file containing system configuration information is shown below.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dscConfigSystems xsi:schemaLocation="DSC.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.teradata.com/v2012/DSC">
  <system>
    <!-- 'system_name' - Required, max 32 characters -->
    <system_name>system7</system_name>
    <!-- 'tdpid' - Required (unless skipped by option) -->
    <tdpid>system7</tdpid>
    <!-- 'database_query_method' - Required, accepted values: BASE_VIEW/EXTENDED_VIEW -->
    <database_query_method>BASE_VIEW</database_query_method>
    <!-- 'streams_softlimit' - Required, number of streams per node per job -->
    <streams_softlimit>20</streams_softlimit>
    <!-- 'streams_hardlimit' - Required, max number of streams per node -->
    <streams_hardlimit>20</streams_hardlimit>
    <!-- 'reset_node_limit' - Optional, accepted values: true/false -->
    <reset_node_limit>false</reset_node_limit>
    <!-- 'node' - Required (at least one) -->
    <node>
      <!-- 'node_name' - Required -->
      <node_name>system7Node1</node_name>
      <!-- 'ip_address' - Required (at least one) -->
      <ip_address>229.0.0.1</ip_address>
      <ip_address>99.23.106.11</ip_address>
      <!-- 'streams_softlimit' - Optional, number of streams per node for each job -->
      <streams_softlimit>20</streams_softlimit>
      <!-- 'streams_hardlimit' - Optional, max number of streams per node -->
      <streams_hardlimit>20</streams_hardlimit>
    </node>
    <node>
      <node_name>system7Node2</node_name>
      <ip_address>99.23.110.12</ip_address>
      <ip_address>229.0.0.2</ip_address>
      <streams_softlimit>20</streams_softlimit>
      <streams_hardlimit>20</streams_hardlimit>
    </node>
    <!-- 'skip_force_full' - Optional, accepted values: true/false -->
    <skip_force_full>false</skip_force_full>
  </system>
</dscConfigSystems>
config_target_groups
Purpose
The config_target_groups command configures the target groups based on the target type and the
information from the parameters file.
Syntax
config_target_groups -t|-type Target Type -f|-file File -B|-repository_backup -S|-skip_prompt
Example
dsc config_target_groups -t target_nbu -f TargetGroupParameters.xml -B -S
dsc config_target_groups -t remote_file_system -f test.xml
Parameters
t|type Type
The type of the BAR component to add to the target group. Target group types are
DUMMY, TARGET_NBU, REMOTE_FILE_SYSTEM, TARGET_DDBOOST, TARGET_S3
and TARGET_AZURE.
Example: target_nbu
f|file File
The full path and name of the file containing the necessary configuration parameters.
Example: TargetGroupParameters.xml
B|repository_backup Repository Backup
[Optional] Marks a target group for repository backup. Only valid for the TARGET_NBU,
REMOTE_FILE_SYSTEM, or TARGET_DDBOOST types.
S|Skip_prompt SkipPrompt
[Optional] Skips displaying a confirmation message before performing the command action.
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
When using a REMOTE_FILE_SYSTEM target group type, the following is true:
• If the path defined in <mounted_file_system> is the same among multiple media servers, the path
implies an NFS mount, and all of the media servers can access the data written under the same
<mounted_file_system>.
• If one of the media servers is offline during a restore, analyze_read or query_backupids command, DSA
will automatically replace the media server with a media server that can access the same mounted file
system.
When using a TARGET_DDBOOST target group type, the following is true:
• If the values in <target_entity> and <storage_units> are the same among multiple media servers, the
values imply that these media servers can access the data written on the same <target_entity> and
<storage_units>.
• If one of the media servers is offline during a restore, analyze or query_backupids command, DSA will
automatically replace the media server with a media server that can access the same target entity and
storage units.
Before configuring a different type of target group, the media server must have the corresponding AXM
module installed. Use list_access_module to verify which AXM modules are installed.
The component name can only use the following characters: 'A-Z', 'a-z', '0-9' and '_'.
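The character restriction above can be checked before running the command. The shell sketch below is illustrative only (the helper name is not part of the DSA toolset); it accepts a candidate component name exactly when it contains only the permitted characters:

```shell
# Illustrative helper (not part of DSA): succeed only when the candidate
# component name uses 'A-Z', 'a-z', '0-9' and '_' exclusively.
valid_component_name() {
    case "$1" in
        ""|*[!A-Za-z0-9_]*) return 1 ;;  # reject empty names and disallowed characters
        *) return 0 ;;
    esac
}

valid_component_name "dd_boost_tg1" && echo "name accepted"
valid_component_name "bad-name" || echo "name rejected"   # '-' is not in the allowed set
```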
XML File Examples
A sample XML file to configure a remote media target group (target_nbu) is shown below.
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<dscConfigTargetGroupsRemoteMedia dscVersion="dscVersion1" xmlns="http://schemas.teradata.com/v2012/DSC">
<!-- 'target_group_name' - Required, max 32 characters, first character
must be alphanumeric only. -->
<target_group_name>SampleTargetGroup</target_group_name>
<!-- 'is_enabled' - Optional, accepted values: true/false. Default: false -->
<is_enabled>true</is_enabled>
<!-- 'target_entity' - Required, max 32 characters -->
<target_entity>nbu52</target_entity>
<!-- 'targets' - Required (at least one) -->
<targets>
<!-- 'bar_media_server' - Required, max 32 characters -->
<bar_media_server>SampleMediaServer1</bar_media_server>
<!-- 'policy_class' - Required (at least one) -->
<policy_class>
<!-- 'policy_class_name' - Required, max 32 characters. Policy must exist
on target_entity. -->
<policy_class_name>policy75</policy_class_name>
<!-- 'devices' - Required -->
<devices>1</devices>
</policy_class>
<policy_class>
<policy_class_name>policy76</policy_class_name>
<devices>1</devices>
</policy_class>
</targets>
<targets>
<bar_media_server>SampleMediaServer2</bar_media_server>
<policy_class>
<policy_class_name>policy75</policy_class_name>
<devices>1</devices>
</policy_class>
<policy_class>
<policy_class_name>policy76</policy_class_name>
<devices>1</devices>
</policy_class>
</targets>
</dscConfigTargetGroupsRemoteMedia>
A sample XML file to configure a remote file system target group is shown below.
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<dscConfigTargetGroupsRemoteFile xmlns="http://schemas.teradata.com/v2012/DSC">
<target_group_name>t_remote_fs</target_group_name>
<is_enabled>true</is_enabled>
<mounted_file_system_remote>
<bar_media_server>barms2</bar_media_server>
<file_system>
<mounted_file_system>/tmp/jing_test</mounted_file_system>
<files>3</files>
</file_system>
</mounted_file_system_remote>
</dscConfigTargetGroupsRemoteFile>
A sample XML file to configure a DD Boost target group is shown below.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dscConfigTargetGroupsDDBoost dscVersion="dscVersion1" xmlns="http://
schemas.teradata.com/v2012/DSC">
<!-- 'target_group_name' - Required, max 32 characters, first character must
be alphanumeric only. -->
<target_group_name>dd_boost_tg1</target_group_name>
<!-- 'is_enabled' Optional, accepted values: true/false. Default : false -->
<is_enabled>true</is_enabled>
<!-- 'targets' - Required (at least one) -->
<targets>
<!-- 'target_entity' - Required, max 32 characters -->
<target_entity>dddomain1</target_entity>
<!-- 'media_storage_units' - Required (at least one) -->
<media_storage_units>
<!-- 'bar_media_server' - Required, max 32 characters -->
<bar_media_server>barvm12</bar_media_server>
<!-- 'storage_units' - Required (at least one) -->
<storage_units>
<!-- 'storage_unit_name' - Required, max 50 characters, alphanumeric and
"-","_" -->
<storage_unit_name>storage_unit1</storage_unit_name>
<!-- 'files' - Required, integer, min value is 1 -->
<files>40</files>
</storage_units>
</media_storage_units>
</targets>
<targets>
<!-- 'target_entity' - Required, max 32 characters -->
<target_entity>dddomain2</target_entity>
<!-- 'media_storage_units' - Required (at least one) -->
<media_storage_units>
<!-- 'bar_media_server' - Required, max 32 characters -->
<bar_media_server>barvm13</bar_media_server>
<!-- 'storage_units' - Required (at least one) -->
<storage_units>
<!-- 'storage_unit_name' - Required, max 50 characters, alphanumeric and
"-","_" -->
<storage_unit_name>storage_unit2</storage_unit_name>
<!-- 'files' - Required, integer, min value is 1 -->
<files>45</files>
</storage_units>
</media_storage_units>
</targets>
</dscConfigTargetGroupsDDBoost>
A sample XML file to configure an S3 target group is shown below:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dscConfigTargetGroupsAmzS3 dscVersion="dscVersion1" xmlns="http://
schemas.teradata.com/v2012/DSC">
<!-- 'target_group_name' - Required, max 32 characters, first character must
be alphanumeric only. -->
<target_group_name>SampleTargetGroup</target_group_name>
<!-- 'is_enabled' Optional, accepted values: true/false. Default : false -->
<is_enabled>true</is_enabled>
<!-- 'account_name' - Required, max 32 characters -->
<account_name>accountName</account_name>
<!-- 'region' - Required, max 30 characters -->
<region>us-east-1</region>
<!-- 'targetMediaBuckets' - Required (at least one) -->
<targetMediaBuckets>
<!-- 'bar_media_server' - Required, max 32 characters -->
<bar_media_server>SampleMediaServer1</bar_media_server>
<!-- 'buckets' - Required (at least one) -->
<buckets>
<!-- 'bucket_name' - Required, max 512 characters. Bucket must exist
on the region -->
<bucket_name>bucket1</bucket_name>
<!-- 'prefix_list' - Required (at least one) -->
<prefix_list>
<!-- 'prefix_name' - Required, max 256 characters -->
<prefix_name>prefix1/prefix2/</prefix_name>
<!-- 'storage_devices' - Required -->
<storage_devices>1</storage_devices>
</prefix_list>
</buckets>
</targetMediaBuckets>
<targetMediaBuckets>
<!-- 'bar_media_server' - Required, max 32 characters -->
<bar_media_server>SampleMediaServer2</bar_media_server>
<!-- 'buckets' - Required (at least one) -->
<buckets>
<!-- 'bucket_name' - Required, max 512 characters. Bucket must exist
on the region -->
<bucket_name>bucket2</bucket_name>
<!-- 'prefix_list' - Required (at least one) -->
<prefix_list>
<!-- 'prefix_name' - Required, max 256 characters -->
<prefix_name>prefix1/prefix2/</prefix_name>
<!-- 'storage_devices' - Required -->
<storage_devices>1</storage_devices>
</prefix_list>
</buckets>
</targetMediaBuckets>
</dscConfigTargetGroupsAmzS3>
A sample XML file to configure an Azure Blob storage target group is shown below:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dscConfigTargetGroupsAzureBlobStorage xmlns="http://schemas.teradata.com/
v2012/DSC">
<target_group_name>azure_tg</target_group_name>
<is_enabled>true</is_enabled>
<storage_account>azure_storage_account</storage_account>
<storage_type>cool</storage_type>
<targetMediaBlob>
<bar_media_server>mediaserver1</bar_media_server>
<blobs>
<blob_container>blob1</blob_container>
<prefix_list>
<prefix_name>prefix1</prefix_name>
<storage_devices>100</storage_devices>
</prefix_list>
<prefix_list>
<prefix_name>prefix2</prefix_name>
<storage_devices>50</storage_devices>
</prefix_list>
</blobs>
</targetMediaBlob>
<targetMediaBlob>
<bar_media_server>mediaserver2</bar_media_server>
<blobs>
<blob_container>blob2</blob_container>
<prefix_list>
<prefix_name>prefix3</prefix_name>
<storage_devices>60</storage_devices>
</prefix_list>
</blobs>
</targetMediaBlob>
</dscConfigTargetGroupsAzureBlobStorage>
config_target_group_map
Purpose
The config_target_group_map command configures the map between target groups when restoring to
a different client configuration.
Syntax
config_target_group_map -f|-file Name
Example
dsc config_target_group_map -f target_group_mapping.xml
Parameters
f|file File
The full file path and file name of the file containing the target group mapping parameters.
Example: target_group_mapping.xml
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
V|virtual
[Optional] Indicates this is a mapping from a virtual target group to a physical target group.
Usage Notes
All backup (source) and restore (destination) sub-targets (media server/policy name pairs) must have a
mapping in the XML file for the operation to succeed.
Any changes to either target group using the config_target_groups command can disable the
mappings attached to the specific target. If the new target group configuration adds or removes sub-targets,
it disables all mappings associated with the target group. To re-enable the mappings, reconfigure the
mapping to reflect the target group changes using the config_target_group_map command.
XML File Example
A representative XML file containing target group mapping is shown below. Note the following:
• A master_source_target_name is required
• A master_dest_target_name is required
• In the list of target_group_maps, at least one is needed
• The master_source_dsc_id is optional if mapping between two physical target groups, but required
if mapping between a virtual target and a physical target
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<dscConfigTargetGroupMap xmlns="http://schemas.teradata.com/v2012/DSC">
<master_source_target_name>hawaii2_tg</master_source_target_name>
<master_source_dsc_id>hawaii2</master_source_dsc_id>
<master_dest_target_name>remote_nbu</master_dest_target_name>
<target_group_maps>
<source_mediaserver_name>hawaii2</source_mediaserver_name>
<source_policy_class_name>policy_h2</source_policy_class_name>
<dest_mediaserver_name>barms1.teradata.com</dest_mediaserver_name>
<dest_policy_class_name>policy1</dest_policy_class_name>
</target_group_maps>
<target_group_maps>
<source_mediaserver_name>hawaii2</source_mediaserver_name>
<source_policy_class_name>policy_h2</source_policy_class_name>
<dest_mediaserver_name>barms3</dest_mediaserver_name>
<dest_policy_class_name>policy3</dest_policy_class_name>
</target_group_maps>
</dscConfigTargetGroupMap>
consolidate_job_logs
Purpose
The consolidate_job_logs command uploads all logs for a completed job to a centralized location. If
the job is running, the command is rejected.
Syntax
consolidate_job_logs -n|-name Name -I|-job_execution_id Job Execution ID
Example
dsc consolidate_job_logs -n job1 -I 5
Parameters
n|name Name
The name of the job whose logs you want to consolidate.
I|job_execution_id Job Execution ID
[Optional] The ID of the job execution whose logs you want to consolidate. You cannot
specify a running job. If you do not specify a job execution ID, the latest job execution ID is
used.
Usage Notes
The job logs that consolidate_job_logs gathers are placed in a subdirectory,
jobName_jobExecutionId_currentTimestamp, that is created under the upload directory defined in the
properties file. The command copies all DSC log files with a last-modified timestamp after the job start
time to the newly created directory.
Note:
Delete old files in the upload directory periodically; otherwise, they can exhaust available disk space.
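One way to automate that cleanup is a periodic find sweep over the consolidated-log subdirectories. The sketch below is not part of DSA, and the path is a placeholder; substitute the upload directory defined in your DSC properties file:

```shell
# Sweep consolidated-log subdirectories (jobName_jobExecutionId_timestamp)
# older than 30 days out of the upload directory.
# UPLOAD_DIR is a placeholder; substitute the upload directory from your
# DSC properties file.
UPLOAD_DIR="${UPLOAD_DIR:-/var/opt/teradata/dsa/upload}"

if [ -d "$UPLOAD_DIR" ]; then
    # -mindepth 1 -maxdepth 1 keeps the sweep at the job subdirectories
    # themselves, without descending into them or touching UPLOAD_DIR itself.
    find "$UPLOAD_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +30 \
        -exec rm -rf {} +
fi
```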
XML File Example
This command does not support an XML file.
create_job
Purpose
The create_job command creates a job based on the values you specify for parameters in the command
line or in the XML file. Parameter values you enter in the command line supersede any value you enter for
those parameters in the parameters XML file.
Syntax
create_job [-file File | parameters]
Example
dsc create_job -n job1 -f parameters.xml
Parameters
n|name Name
[Optional] The name of the job on which to perform the action. Must be unique for each
job.
Example: job1
f|file File
The full file path and file name of the file containing the necessary parameters to create the
job. If the same parameters are provided both in the file and on the command line, Teradata
DSA uses the values specified in the command line.
Example: parameters.xml
d|description Description
[Optional] A meaningful description of the job. To allow a multi-word description, add \"
before and after the description string.
Example: \"backup web apps\"
t|type Type
[Optional] The type of job. Enter one of the following:
• BACKUP
• RESTORE
• ANALYZE_READ
• ANALYZE_VALIDATE
Example: restore
o|owner Owner
[This parameter is deprecated after release 15.00. You will receive a warning message if you
use this parameter.] The owner of the job. Job ownership is used to determine the
appropriate privileges given to DSA users.
Example: user1
b|backup_name BackupName
[Optional] An existing backup job name. For restore, analyze_read, and analyze_validate
jobs only.
Example: backupWeb1
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
v|backup_version BackupVersion
[Optional] Backup version number. For restore, analyze_read, and analyze_validate jobs
only. Enter latest or 0 for the latest save set.
Example: 2
Usage Notes
To get the backup version of a save set, use the list_save_sets command.
If access rights for objects are missing, you can set the skip_archive parameter to true so that the job
continues to run. The default value for skip_archive is false; for existing jobs, the default should be
false and can be set during the upgrade procedure.
In a restore job, the parent_name parameter uses the backup job name when exporting the XML.
XML File Examples
The sample XML file below specifies job parameters for a remote backup job.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dscCreateJob
xmlns="http://schemas.teradata.com/v2012/DSC"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="DSC.xsd">
<!-- Optional -->
<job_instance>
<!-- 'job_name' - Required, max 128 characters.-->
<!-- Note: Can be omitted in xml to be specified via command line. -->
<job_name>SampleRemoteBackup1</job_name>
<!-- 'job_description' - Required, max 256 characters. -->
<!-- Note: Can be omitted in xml to be specified via command line. -->
<job_description>This is a sample remote backup job.</job_description>
<!-- 'job_type' - Required, accepted values:
BACKUP,RESTORE,ANALYZE_VALIDATE,ANALYZE_READ -->
<!-- Note: Can be omitted in xml to be specified via command line. -->
<job_type>BACKUP</job_type>
<!-- 'auto_retire' - Optional, accepted values: true/false -->
<auto_retire>true</auto_retire>
<!-- 'retire_value' - Required (if auto_retire is true) -->
<retire_value>15</retire_value>
<!-- 'retire_units' - Required (if auto_retire is true), accepted values:
DAYS,WEEKS -->
<retire_units>DAYS</retire_units>
<!-- 'objectlist' - Required for BACKUP/RESTORE/ANALYZE_VALIDATE -->
<!-- Note: 'objectlist' is NOT permitted for ANALYZE_READ jobs -->
<objectlist>
<objectinfo>
<!-- 'object_name' - Required, max 128 characters -->
<object_name>BAR1</object_name>
<!-- 'object_type' - Required, accepted values: AGGREGATE_FUNCTION,
AUTHORIZATION, COMBINED_AGGREGATE_FUNCTIONS, DATABASE,
EXTERNAL_PROCEDURE, GLOP_SET, HASH_INDEX, INSTANCE_OR_CONSTRUCTOR_METHOD, JAR,
JOIN_INDEX, JOURNAL, MACRO,
NO_PI_TABLE, ORDERED_ANALYTICAL_FUNCTION, QUEUE_TABLE, STANDARD_FUNCTION,
STATISTICAL_FUNCTION, STORED_PROCEDURE,
TABLE, TABLE_FUNCTION, TRIGGER, USER, USER_DEFINED_METHOD,
USER_DEFINED_DATA_TYPE, VIEW -->
<object_type>VIEW</object_type>
<!-- 'parent_name' - Optional, max 128 characters -->
<parent_name>SYSTEMFE</parent_name>
<!-- 'parent_type' - Optional, accepted values: DATABASE/USER -->
<parent_type>DATABASE</parent_type>
</objectinfo>
<objectinfo>
<object_name>ABC</object_name>
<object_type>DATABASE</object_type>
<!-- 'exclude' - Optional -->
<exclude>
<excludeobjectinfo>
<object_name>T1</object_name>
<object_type>TABLE</object_type>
</excludeobjectinfo>
<excludeobjectinfo>
<object_name>T2</object_name>
<object_type>TABLE</object_type>
</excludeobjectinfo>
</exclude>
</objectinfo>
<objectinfo>
<object_name>BAR2</object_name>
<object_type>VIEW</object_type>
<parent_name>ABC</parent_name>
<parent_type>DATABASE</parent_type>
</objectinfo>
</objectlist>
</job_instance>
<!-- 'source_tdpid' - Required, max 32 characters -->
<source_tdpid>system1</source_tdpid>
<!-- 'source_accountid' - Optional, max 30 characters -->
<source_accountid>acctid</source_accountid>
<!-- 'target_media' - Required, max 32 characters -->
<target_media>remote1_nbu</target_media>
<!-- 'job_options' - Required -->
<job_options>
<!-- 'online'- Optional, accepted values: true/false -->
<online>false</online>
<!-- 'nosync' - Optional, accepted values: true/false -->
<nosync>false</nosync>
<!-- 'data_phase' - Required, accepted values: DATA,DICTIONARY -->
<data_phase>DATA</data_phase>
<!-- 'enable_temperature_override' - Optional, accepted values: true/false -->
<enable_temperature_override>false</enable_temperature_override>
<!-- 'temperature_override' - Optional, accepted values: DEFAULT,HOT,WARM,COLD
-->
<temperature_override>DEFAULT</temperature_override>
<!-- 'block_level_compression' - Optional, accepted values: DEFAULT,ON,OFF -->
<block_level_compression>DEFAULT</block_level_compression>
<!-- 'disable_fallback' - Optional, accepted value: false -->
<disable_fallback>false</disable_fallback>
<!-- 'query_band' - Optional, max 2048 characters -->
<query_band>queryBandTest</query_band>
<!-- 'dsmain_logging_level' - Optional, accepted values:
Error,Info,Debug,Warning -->
<dsmain_logging_level>Error</dsmain_logging_level>
<!-- 'nowait' - Optional, accepted values: true/false -->
<nowait>false</nowait>
<!-- skip_archive -Optional can be true/false. Skip archive during the backup
job in case error in access rights -->
<skip_archive>true</skip_archive>
</job_options>
</dscCreateJob>
The sample XML file below specifies job parameters for a remote restore job.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dscCreateJob
xmlns="http://schemas.teradata.com/v2012/DSC"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="DSC.xsd">
<!-- Optional -->
<job_instance>
<!-- 'job_name' - Required, max 128 characters.-->
<!-- Note: Can be omitted in xml to be specified via command line. -->
<job_name>SampleRemoteRestore1</job_name>
<!-- 'job_description' - Required, max 256 characters. -->
<!-- Note: Can be omitted in xml to be specified via command line. -->
<job_description>This is a sample remote restore job.</job_description>
<!-- 'job_type' - Required, accepted values:
BACKUP,RESTORE,ANALYZE_VALIDATE,ANALYZE_READ -->
<!-- Note: Can be omitted in xml to be specified via command line. -->
<job_type>RESTORE</job_type>
<!-- 'auto_retire' - Optional, accepted values: true/false -->
<auto_retire>true</auto_retire>
<!-- 'retire_value' - Required (if auto_retire is true) -->
<retire_value>15</retire_value>
<!-- 'retire_units' - Required (if auto_retire is true), accepted
values: DAYS,WEEKS -->
<retire_units>DAYS</retire_units>
<!-- 'objectlist' - Required for BACKUP/RESTORE/ANALYZE_VALIDATE -->
<!-- Note: 'objectlist' is NOT permitted for ANALYZE_READ jobs -->
<objectlist>
<objectinfo>
<!-- 'object_name' - Required, max 128 characters -->
<object_name>BAR1</object_name>
<!-- 'object_type' - Required, accepted values:
AGGREGATE_FUNCTION, AUTHORIZATION,
COMBINED_AGGREGATE_FUNCTIONS, DATABASE, EXTERNAL_PROCEDURE,
GLOP_SET, HASH_INDEX,
INSTANCE_OR_CONSTRUCTOR_METHOD, JAR, JOIN_INDEX, JOURNAL,
MACRO, NO_PI_TABLE,
ORDERED_ANALYTICAL_FUNCTION, QUEUE_TABLE, STANDARD_FUNCTION,
STATISTICAL_FUNCTION,
STORED_PROCEDURE, TABLE, TABLE_FUNCTION, TRIGGER, USER,
USER_DEFINED_METHOD,
USER_DEFINED_DATA_TYPE, VIEW -->
<object_type>VIEW</object_type>
<!-- 'parent_name' - Optional, max 128 characters -->
<parent_name>SYSTEMFE</parent_name>
<!-- 'parent_type' - Optional, accepted values: DATABASE/USER -->
<parent_type>DATABASE</parent_type>
</objectinfo>
<objectinfo>
<object_name>ABC</object_name>
<object_type>DATABASE</object_type>
<!-- 'exclude' - Optional -->
<exclude>
<excludeobjectinfo>
<object_name>T1</object_name>
<object_type>TABLE</object_type>
</excludeobjectinfo>
<excludeobjectinfo>
<object_name>T2</object_name>
<object_type>TABLE</object_type>
</excludeobjectinfo>
</exclude>
</objectinfo>
<objectinfo>
<object_name>BAR2</object_name>
<object_type>VIEW</object_type>
<parent_name>ABC</parent_name>
<parent_type>DATABASE</parent_type>
<!-- 'object_attribute_list' - Optional -->
<object_attribute_list>
<!-- 'map_to' - Optional, max 128 characters -->
<map_to>DBC2</map_to>
<!-- 'rename_to' - Optional, max 128 characters -->
<rename_to>BAR2</rename_to>
</object_attribute_list>
</objectinfo>
</objectlist>
<!-- 'backup_name' - Required for
RESTORE,ANALYZE_READ,ANALYZE_VALIDATE jobs. -->
<backup_name>SampleRemoteBackup1</backup_name>
<!-- 'backup_version' - Required for
RESTORE,ANALYZE_READ,ANALYZE_VALIDATE jobs. -->
<!-- Note: Value = '0' selects the latest save set from the backup. Value > 0
selects an existing save set. -->
<backup_version>0</backup_version>
</job_instance>
<!-- 'source_media' - Required, max 32 characters -->
<source_media>remote1_nbu</source_media>
<!-- 'target_tdpid' - Required, max 32 characters -->
<target_tdpid>system1</target_tdpid>
<!-- 'target_accountid' - Optional, max 30 characters -->
<target_accountid>acctid</target_accountid>
<!-- 'job_options' - Required -->
<job_options>
<!-- 'enable_temperature_override' - Optional, accepted values: true/
false -->
<enable_temperature_override>false</enable_temperature_override>
<!-- 'temperature_override' - Optional, accepted values:
DEFAULT,HOT,WARM,COLD -->
<temperature_override>DEFAULT</temperature_override>
<!-- 'block_level_compression' - Optional, accepted values:
DEFAULT,ON,OFF -->
<block_level_compression>DEFAULT</block_level_compression>
<!-- 'disable_fallback' - Optional, accepted value: false -->
<disable_fallback>false</disable_fallback>
<!-- 'query_band' - Optional, max 2048 characters -->
<query_band>queryBandTest</query_band>
<!-- 'dsmain_logging_level' - Optional, accepted values:
Error,Info,Debug,Warning -->
<dsmain_logging_level>Error</dsmain_logging_level>
<!-- 'nowait' - Optional, accepted values: true/false -->
<nowait>true</nowait>
<!-- 'reblock' - Optional, accepted values: true/false -->
<!-- Note: Only useful for RESTORE jobs.-->
<reblock>false</reblock>
<!-- 'run_as_copy' - Optional, accepted values: true/false -->
<!-- Note: Only useful for RESTORE jobs.-->
<run_as_copy>false</run_as_copy>
<!-- 'saveset_user' - Optional, max 32 characters -->
<!-- Note: Only for RESTORE and ANALYZE_VALIDATE jobs -->
<saveset_user>admin</saveset_user>
<!-- 'saveset_password' - Optional, max 32 characters -->
<!-- Note: Only for RESTORE and ANALYZE_VALIDATE jobs -->
<saveset_password>admin</saveset_password>
<!-- 'saveset_accountid' - Optional, max 30 characters -->
<!-- Note: Only for RESTORE and ANALYZE_VALIDATE jobs -->
<saveset_accountid>acctid</saveset_accountid>
</job_options>
</dscCreateJob>
ddls
Purpose
The ddls command lists the files in a storage unit on a Data Domain device. This executable is provided by
the AXMDDboost package, and is not run using the DSC command-line utility.
Syntax
ddls -h|--host Hostname -s|--storageunit Storage Unit -u|--user User -l|--long pattern
Examples
ddls -h ddserver -s ddstorage -u dduser -l
Parameters
h|host Hostname
The hostname or IP address of the Data Domain server to which to connect.
s|storageunit Storage Unit
The storage unit on the Data Domain server to query.
u|user User
The user account to use for logging onto the Data Domain server.
l|long
[Optional] Displays file listing in long format.
pattern
[Optional] The pattern field is a wildcard for specifying which file names should be listed. If
not specified, all files within the selected storage unit will be listed. Enclose the wildcard file
names in double quotes so that the Linux shell does not expand the wildcard prior to
starting the utility.
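The double quotes matter because the shell, not the utility, expands an unquoted wildcard against files in the current directory. The sketch below uses a stand-in function (not the real ddls) so the difference is visible:

```shell
# 'show_args' stands in for ddls so the shell's behavior is visible:
# it simply reports how many arguments it received.
show_args() { echo "$# argument(s): $*"; }

workdir=$(mktemp -d) && cd "$workdir"
touch backupjob_1234_a backupjob_1234_b

show_args backupjob_*     # unquoted: the shell expands it to the two file names
show_args "backupjob_*"   # quoted: the literal pattern reaches the utility
```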
Usage Notes
If you do not specify the host, storageunit, or user parameters, you will be prompted for them. You will also
be prompted for the logon password.
XML File Example
This command does not support an XML file.
ddrm
Purpose
The ddrm command deletes files from a storage unit on a Data Domain device. This executable is provided
by the AXMDDboost package, and is not run using the DSC command-line utility.
Syntax
ddrm -h|--host Hostname -s|--storageunit Storage Unit -u|--user User -f|--force pattern
Examples
ddrm -h ddserver -s ddstorage -u dduser "backupjob_1234_*"
Parameters
h|host Hostname
The hostname or IP address of the Data Domain server to which to connect.
s|storageunit Storage Unit
The storage unit on the Data Domain server to query.
u|user User
The user account to use for logging onto the Data Domain server.
f|force
[Optional] Removes the files without prompting the user for confirmation.
pattern
The pattern field is a required field that specifies which file names should be deleted. You
can provide wildcards; all files matching the wildcard are removed. Enclose the wildcard file
names in double quotes so that the Linux shell does not expand the wildcard prior to
starting the utility.
Usage Notes
If you do not specify the host, storageunit, or user parameters, you will be prompted for them. You will also
be prompted for the logon password.
If the force option is not used, the list of files matching the specified names or wildcards will be displayed
prior to deletion. You will be prompted to confirm the file deletions by entering yes to the confirmation
prompt (the full word is required), or no to abort the deletion.
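The full-word requirement can be mirrored in scripts that wrap the confirmation step. This is a sketch of the pattern described above, not ddrm's actual implementation:

```shell
# Proceed only when the full word "yes" is entered; anything else
# (including a bare "y") aborts, mirroring the behavior described above.
confirm_deletion() {
    printf 'Confirm file deletions? (yes/no): '
    read -r answer
    [ "$answer" = "yes" ]
}

echo yes | confirm_deletion && echo "proceeding with deletion"
echo y   | confirm_deletion || echo "deletion aborted"
```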
XML File Example
This command does not support an XML file.
delete_component
Purpose
The delete_component command deletes an existing component based on the information in the
parameters.
Syntax
dsc delete_component [parameters]
Example
dsc delete_component -n system1 -t system -S
Parameters
n|name Name
The name of the BAR component. The name must be unique. If the system name begins
with a special character such as a dash - , press Enter after entering the command, then
enter the system name with the special character at the Enter the System Name
prompt. For example, type dsc export_config -t system -f book.xml -n, press
Enter, then type -testA if -testA is the system name.
t|type Type
The type of BAR component. Enter one of the following:
• system
• node
• media_server
• nbu_server
• dd_boost
• disk_file_system
• target_group
• azure_app
• aws_app
Example: node
s|system System
This value must be specified if the component type is a node.
Example: system1
S|Skip_prompt SkipPrompt
[Optional] Skips displaying a confirmation message before performing the command action.
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
To use the delete_component command, you must specify the component name and type, as well as the
system if you are deleting a node. You do not need to specify the component name for disk_file_system
types.
You can delete a system, node, media server, NetBackup server, disk file system, or a target group, except
under the following conditions:
• The system marked for repository backup
• A system in use by a job
• A media server in use by a target group
• A NetBackup server in use by a target group
• A target group in use by a job
• A target group in use by a target group map
• A target group in use for repository backups
• A policy used by a target group
XML File Example
This command does not support an XML file.
delete_job
Purpose
The delete_job command deletes a DSA job and any data associated with it from the DSC repository.
Any logs and job history are deleted and cannot be restored. Any backup save sets created for the job that
exist on devices managed by third-party solutions must be deleted manually using the interface for that
solution.
Syntax
dsc delete_job -n|-name Name
Example
dsc delete_job -n job1 -S
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: job1
S|Skip_prompt SkipPrompt
[Optional] Skips displaying a confirmation message before performing the command action.
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
Generally, to delete a backup job, you must retire it first. There is one exception: you can also delete an active
backup job if its status is New.
A backup job cannot be deleted if there are restore or analyze jobs associated with it.
XML File Example
This command does not support an XML file.
delete_target_group_map
Purpose
The delete_target_group_map command deletes a target group map for restoring to a different client
configuration.
Syntax
delete_target_group_map -s|-source SampSrc -d|-destination SampDest
Example
dsc delete_target_group_map -s SampSrc -d SampDest
Parameters
s|source SampSrc
The source target group represents the backup target group for mapping.
Example: SampSrc
d|destination SampDest
The destination target group represents the restore target group for mapping.
Required if the virtual target group (parameter V) is specified. It indicates the physical target
group to which the virtual target group is mapped.
Example: SampDest
V|virtual
[Optional] Indicates this is a mapping from a virtual target group to a physical target group.
I|dsc_id SampleDscId
[Optional] The name or dscId of the source DSC environment. Required if -V is specified.
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
XML File Example
This command does not support an XML file.
disable_component
Purpose
The disable_component command disables an existing BAR component based on the component name
and type.
Syntax
dsc disable_component -n|-name Name -t|-type Type
Example
dsc disable_component -n system1 -t system
Parameters
n|name Name
The name of the BAR component. The name must be unique. If the system name begins
with a special character such as a dash (-), press Enter after entering the command, then
enter the system name with the special character at the Enter the System Name
prompt. For example, type dsc export_config -t system -f book.xml -n, press
Enter, then type -testA if -testA is the system name.
Example: system1
t|type Type
The type of BAR component. Enter system or target_group.
Example: SYSTEM
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
You cannot disable a component when it is in use.
XML File Example
This command does not support an XML file.
enable_component
Purpose
The enable_component command enables an existing BAR component based on the component name
and type.
Syntax
enable_component -n|-name Name -t|-type Type
Example
dsc enable_component -n system1 -t system
Parameters
n|name Name
The name of the BAR component. The name must be unique. If the system name begins
with a special character such as a dash (-), press Enter after entering the command, then
enter the system name with the special character at the Enter the System Name
prompt. For example, type dsc export_config -t system -f book.xml -n, press
Enter, then type -testA if -testA is the system name.
Example: system1
t|type Type
The type of BAR component. Enter system or target_group.
Example: SYSTEM
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
You must enable a system or target group before running a job that uses it.
XML File Example
This command does not support an XML file.
export_config
Purpose
The export_config command exports the current XML definition for the requested BAR component.
Syntax
export_config -n|-name Name -t|-type Type -f|-file File
Examples
export_config -n component1 -t system -f System1Config.xml
export_config -t general -f GeneralConfig.xml
Parameters
n|name Name
The name of the component you want to export. This parameter is optional for the
general, repository_backup, and disk_file_system types. If the system name
begins with the special character - (for example, -test), enter it at the Enter the
System Name prompt for the system type.
Example: component1
t|type Type
The type of component or repository backup you want to export. Enter one of the following:
• system
• media_server
• nbu_server
• dd_boost
• repository_backup
• disk_file_system
• target_group
• general
Example: system
f|file File
The full file path and file name of the file to which to write the XML definition.
Example: System1Config.xml
Usage Notes
This command can be used with the configuration commands to update a BAR configuration.
XML File Example
This command does not require an XML file as input. You must supply a file name and location to which
the XML file results are exported as output.
export_job
Purpose
The export_job command exports the current XML definition for the requested job.
Syntax
export_job -n|-name Name -f|-file File
Example
export_job -n job1 -f job1Definition.xml
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: job1
f|file File
The full file path and file name of the file to which to write the XML definition.
Example: job1Definition.xml
Usage Notes
The XML file for a retired job cannot be exported using the export_job command.
The XML file that is exported by using this command can be used in the update_job command to create a
job.
XML File Example
This command does not require an XML file as input. You must supply a file name and location to which
the XML file results are exported as output.
export_job_metadata
Purpose
The export_job_metadata command exports metadata of a job (job definition, save sets, and targets)
based on the requested backup version. In the case of a disaster to the DSC repository, exporting and then
importing job metadata enables job migration and restoration to a different DSA environment.
The data must be exported in the following order:
1. Targets
2. Job definition
3. Save sets
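The required ordering can be sketched as a small shell wrapper that builds the three invocations, one per metadata type. This is an illustration only: the job name and directory are placeholders, and the commands are constructed as strings rather than executed.

```shell
# Sketch only: build the three export_job_metadata invocations in the
# required order (targets first, then the job definition, then save sets).
# JOB and DIR are illustrative placeholders, not values from this guide.
JOB=job1
DIR=/var/opt/dsa
ORDERED_TYPES="TARGET JOB SAVESET"
CMDS=""
for t in $ORDERED_TYPES; do
  CMDS="$CMDS dsc export_job_metadata -n $JOB -d $DIR -v latest -t $t;"
done
echo "$CMDS"
```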
Syntax
export_job_metadata -n|-name Name -d|-directory DirectoryPath -v|-backup_version BackupVersion -t|-type Type -u|-user_authentication User
Example
dsc export_job_metadata -n job1 -v latest
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: job1
d|directory DirectoryPath
[Optional] Directory where the files are exported to or imported from.
Example: var/opt/dsa
v|backup_version BackupVersion
[Optional] Backup version number. Enter latest or 0 for the latest save set.
Example: latest
t|type Type
[Optional] The type of job metadata. You may enter JOB (for the job definition), SAVESET ,
or TARGET. If no value is specified, all three types of metadata are included.
Example: JOB
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
When the export_job_metadata or import_job_metadata command is running, no other operations
or commands can run at the same time. In addition, export_job_metadata and
import_job_metadata cannot run if an operational job or repository job is running.
XML File Example
This command does not support an XML file.
export_repository_backup_config
Purpose
The export_repository_backup_config command exports all configurations associated with setting
up a repository backup job. This includes the system, NetBackup, media servers, and target group associated
with the target selected in config_repository_backup.
Use the resulting repository configuration file if the DSC repository needs to be restored
after a disaster.
Syntax
export_repository_backup_config -f|-file File
Example
export_repository_backup_config -f ConfigRepositoryBackup.xml
Parameters
f|file File
The full file path and file name of the file to which to write the XML definition.
Example: configRepositoryBackup.xml
Usage Notes
It is recommended that you run this command after configuring the backup repository and after any updates
to the repository.
XML File Example
This command does not require an XML file as input. You must supply a file name and location to which
the XML file results are exported as output.
export_target_group_map
Purpose
The export_target_group_map command exports a map between target groups for restoring to a
different client configuration.
Syntax
export_target_group_map -s|-source SampSrc -d|-destination SampDest -f|-file Name
Example
dsc export_target_group_map -s SampSrc -d SampDest -f TargetGroupMap.xml
Parameters
s|source SampSrc
The source target group represents the backup target group for mapping.
Example: SampSrc
d|destination SampDest
The destination target group represents the restore target group for mapping.
Required if the virtual target group (parameter V) is specified. It indicates the physical target
group to which the virtual target group is mapped.
Example: SampDest
f|file File
The full file path and file name of the file to which to write the XML definition.
Example: TargetGroupMap.xml
V|virtual
[Optional] Indicates this is a mapping from a virtual target group to a physical target group.
I|dsc_id SampleDscId
[Optional] The name or dscId of the source DSC environment. Required if -V is specified.
XML File Example
This command does not require an XML file as input. You must supply a file name and location to which
the XML file results are exported as output.
import_job_metadata
Purpose
The import_job_metadata command imports metadata of a job (job definition, save sets, and targets)
from the specified directory. In the case of a disaster to the DSC repository, exporting and then importing job
metadata enables job migration and restoration to a different DSA environment.
The data must be imported in the following order:
1. Targets
2. Job definition
3. Save sets
Syntax
import_job_metadata -n|-name Name -d|-directory DirectoryPath -v|-backup_version BackupVersion -t|-type Type -u|-user_authentication User
Example
dsc import_job_metadata -n job1 -v latest
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: job1
d|directory DirectoryPath
[Optional] Directory where the files are exported to or imported from.
Example: var/opt/dsa
v|backup_version BackupVersion
[Optional] Backup version number. Enter latest or 0 for the latest save set.
Example: latest
t|type Type
[Optional] The type of job metadata. You may enter JOB (for the job definition), SAVESET ,
or TARGET. If no value is specified, all three types of metadata are included.
Example: JOB
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
When the export_job_metadata or import_job_metadata command is running, no other operations
or commands can run at the same time. In addition, export_job_metadata and
import_job_metadata cannot run if an operational job or repository job is running.
XML File Example
This command does not support an XML file.
import_repository_backup_config
Purpose
The import_repository_backup_config command imports all configurations associated with setting
up a repository backup job. This includes system, NetBackup, media servers, and target group
configurations. This command is used to recover the DSC backup repository after a disaster.
Syntax
import_repository_backup_config -f|-file File
Example
import_repository_backup_config -f ConfigRepositoryBackup.xml
Parameters
-f|-file File
The file to import. This file is created by the export_repository_backup_config
command.
Example: configRepositoryBackup.xml
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
You must run the export_repository_backup_config command before running the
import_repository_backup_config command, so that the configuration file used to restore the
repository is available.
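As a sketch, the disaster-recovery round trip pairs the two commands around the same file. The file path below is an illustrative placeholder, and the commands are built as strings rather than executed.

```shell
# Sketch: the export/import round trip for the repository backup
# configuration. The file path is an illustrative placeholder.
CONFIG_FILE=/var/opt/dsa/ConfigRepositoryBackup.xml
EXPORT_CMD="dsc export_repository_backup_config -f $CONFIG_FILE"
IMPORT_CMD="dsc import_repository_backup_config -f $CONFIG_FILE"
# Run the export after each configuration change; run the import only
# during disaster recovery of the DSC repository.
echo "$EXPORT_CMD"
echo "$IMPORT_CMD"
```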
XML File Example
This command imports information from the XML file created from the
export_repository_backup_config command.
job_status
Purpose
The job_status command gets the latest status for a job with the given name and displays it on the screen.
If the job is running, a detailed status message is displayed. If the job is not running, the status of the last run
for that job is displayed.
Syntax
dsc job_status -n|-name Name -I|-job_execution_id JobExecutionID
Example
dsc job_status -n job1 -I job1ExecutionId
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: job1
I|job_execution_id Job Execution ID
[Optional] The execution ID for the job. Must be an integer.
The ID must match a version number listed in the list_save_sets command output for the
same job.
Example: 2
B|repository_backup Repository Backup
[Optional] Flag to return status on repository backup jobs.
XML File Example
This command does not support an XML file.
job_status_log
Purpose
The job_status_log command displays the latest status log for a job with the given name if the job is
running. If the job is not running, the status log for the last run of that job is displayed.
Syntax
job_status_log -n|-name Name -I|-job_execution_id JobExecutionID -E|-full_export
Example
dsc job_status_log -n job1 -I 123456
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: job1
I|job_execution_id Job Execution ID
[Optional] The execution ID for the job. Must be an integer.
The ID must match a version number listed in the list_save_sets command output for the
same job.
Example: 2
B|repository_backup Repository Backup
[Optional] Flag to return status on repository backup jobs.
b|bucket BucketNumber
[Optional] Select a bucket number to display a grouping of data when too many results
return to display at once. The command output notifies you if there are more buckets of
data to display.
-E|-full_export
[Optional] Retrieve and output the entire job status log to a landing zone on the DSC server
to avoid the message size limit. The output file is a comma-separated value with the file
name: <job_name>_<job_execution_id>.csv. The default landing zone directory
is /var/opt/teradata/dsa/export. You can change the directory in dsc.properties
using the fullExport.landingZone property.
Note:
This parameter works only for BACKUP, RESTORE, and ANALYZE_VALIDATE jobs
on the Teradata Database. It does not work for DBC repository backup jobs.
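The resulting output path can be sketched as follows. The job name and execution ID are placeholders, and the landing zone assumes the default fullExport.landingZone value is unchanged.

```shell
# Sketch: where the -E|-full_export CSV lands by default.
# JOB_NAME and JOB_EXECUTION_ID are illustrative placeholders;
# LANDING_ZONE assumes the default fullExport.landingZone value.
LANDING_ZONE=/var/opt/teradata/dsa/export
JOB_NAME=job1
JOB_EXECUTION_ID=123456
CSV_PATH="$LANDING_ZONE/${JOB_NAME}_${JOB_EXECUTION_ID}.csv"
echo "$CSV_PATH"
```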
XML File Example
This command does not support an XML file.
list_access_module
Purpose
The list_access_module command lists available access module types for a named media server.
Syntax
list_access_module -n|-name Name
Example
list_access_module -n media_server1
Parameters
n|name Name
[Optional] The name of the media server.
XML File Example
This command does not support an XML file.
list_components
Purpose
The list_components command lists components defined and stored in the DSC repository. If a specific
component is requested, that component definition is displayed. Otherwise, a list of the components
matching any provided filters is displayed. Any partial component name returns all components matching
the partial input.
Note:
The Type parameter is required.
Syntax
list_components -e|-enabled true|false -n|-name Name -s|-system System -t|-type Type
Example
list_components -e true -n system1 -t system
Parameters
e|enabled Enabled
[Optional] Filter components based on whether or not they are enabled. This applies only to
system and target_group components.
Example: true
n|name Name
[Optional] The name of the BAR component. The name must be unique. If the system name
begins with the special character - (for example, -test), enter it at the Enter the
System Name prompt for the system type.
Example: component1
s|system System
[Optional] Filter components based on the associated Teradata system. This applies only to
node components. If the type is system and the system name begins with the special
character - (for example, -test), enter the system name at the Enter the System Name
prompt.
Example: system1
t|type Type
The type of BAR component. Enter one of the following:
• system
• node
• media_server
• nbu_server
• dd_boost
• disk_file_system
• target_group
• azure_app
• aws_app
Example: system
V|virtual
[Optional] Indicates this is a mapping from a virtual target group to a physical target group.
XML File Example
This command does not support an XML file.
list_consumers
Purpose
The list_consumers command sends a request to ActiveMQ to provide information about all of the
consumers of DSA Network Client and DSMain processes. It checks whether the selector values match the
names of the DSA Network Client and DSMain systems and that the processes are running.
Syntax
list_consumers -n|-name System1 -t|-type SYSTEM
Example
list_consumers -n System1 -t SYSTEM
Parameters
n|name Name
[Optional] The name of the BAR component. The name must be unique. If the system name
begins with the special character - (for example, -test), enter it at the Enter the
System Name prompt for the system type.
Example: System1
t|type Type
[Optional] The type of BAR component. Enter system or media_server.
Example: SYSTEM
Usage Notes
After configuring a system using the config_systems command, you may want to run the
list_consumers command to validate that the selector values match the DSMain system names and that
the processes are running.
After a media server configuration and restart or an auto-discovery of a media server, you may want to run
the list_consumers command to validate that the selector value matches the DSA Network Client system
name and that the consumer processes are running.
XML File Example
This command does not support an XML file.
list_general_settings
Purpose
The list_general_settings command lists all current general settings.
Syntax
list_general_settings
Example
dsc list_general_settings
Parameters
There are no parameters associated with this command.
XML File Example
This command does not support an XML file.
list_job_history
Purpose
The list_job_history command lists the complete history of the job you specify, including the type of
system used during the job execution. If you do not specify a job, the history of all jobs in the DSC repository
is listed.
Parameters
n|name Name
[Optional] The name of the DSA job to display history.
Example: job1
b|bucket BucketNumber
[Optional] Select a bucket number to display a grouping of data when too many results
return to display at once. The command output notifies you if there are more buckets of
data to display.
XML File Example
This command does not support an XML file.
list_jobs
Purpose
The list_jobs command lists jobs defined and stored in the DSC repository. If a specific job is requested,
that job definition is displayed. Otherwise, a list of job names matching any provided filters is displayed. If
no parameters are provided, a list of jobs (excluding backup repository jobs) is displayed.
Syntax
list_jobs [optional parameters]
Example
dsc list_jobs -n job1
Parameters
n|name Name
[Optional] The name of the job definition to display.
Note:
If you specify a name, the command processes the name with wildcards at the beginning
and end of the name. Specifying job1 might produce a list that includes job1, job111,
and backupjob1.
Example: job1
o|owner Owner
[This parameter is deprecated after release 15.00. You will receive a warning message if you
use this parameter.] The owner of the job. Job ownership is used to determine the
appropriate privileges given to DSA users.
Example: user
s|state State
[Optional] Enter active or retired. The default is active.
Example: active
S|status Status
[Optional] The latest status for a job. Enter one of the following:
• running
• completed_successfully
• completed_errors
• failed
• queued
• aborting
• aborted
• new
• not_responding
• warning
t|type Type
[Optional] The type of job. Enter one of the following:
• BACKUP
• RESTORE
• ANALYZE_READ
• ANALYZE_VALIDATE
Example: backup
B|repository_backup Repository Backup
[Optional] View repository backup jobs.
b|bucket BucketNumber
[Optional] Select a bucket number to display a grouping of data when too many results
return to display at once. The command output notifies you if there are more buckets of
data to display.
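The implicit wildcard matching described for the -n|-name parameter amounts to a substring test, as in this sketch (the job names are illustrative only):

```shell
# Sketch: -n job1 matches like *job1* (wildcards at both ends).
# The job names below are illustrative only.
PATTERN="job1"
MATCHES=""
for name in job1 job111 backupjob1 otherjob; do
  case "$name" in
    *"$PATTERN"*) MATCHES="$MATCHES$name " ;;
  esac
done
echo "$MATCHES"
```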
XML File Example
This command does not support an XML file.
list_query_backupids
Purpose
The list_query_backupids command lists the results of the query returned from the
query_backupids command.
Syntax
list_query_backupids -n|-name Name -v|-backup_version BackupVersion
Example
dsc list_query_backupids -n MyJob -v LATEST
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: MyJob
v|backup_version BackupVersion
[Optional] Backup version number. Type LATEST or 0 for the latest save set. Defaults to the
latest save set if no version is entered.
Example: LATEST
B|repository_backup Repository Backup
[Optional] Flag to list backupid of repository backup job.
XML File Example
This command does not support an XML file.
list_query_nbu_backupids
Purpose
The list_query_nbu_backupids command lists the results of the query returned from the
query_nbu_backupids command.
Syntax
list_query_nbu_backupids -n|-name Name -v|-backup_version BackupVersion
Example
dsc list_query_nbu_backupids -n MyJob -v LATEST
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: MyJob
v|backup_version BackupVersion
[Optional] Backup version number. Type LATEST or 0 for the latest save set. Defaults to the
latest save set if no version is entered.
Example: LATEST
B|repository_backup Repository Backup
[Optional] Flag to list backupid of repository backup job.
XML File Example
This command does not support an XML file.
list_recover_backup_metadata
Purpose
The list_recover_backup_metadata command lists the overall status and individual media server
status of the recover_backup_metadata command.
Syntax
list_recover_backup_metadata -n|-name Name
Example
dsc list_recover_backup_metadata -n target_group1
Parameters
n|name Name
The name of the target group used for backup of the DSC repository.
Example: target_group1
XML File Example
This command does not support an XML file.
list_repository_backup_settings
Purpose
The list_repository_backup_settings command lists all current repository backup settings.
Syntax
list_repository_backup_settings
Example
dsc list_repository_backup_settings
Parameters
There are no parameters associated with this command.
XML File Example
This command does not support an XML file.
list_save_sets
Purpose
The list_save_sets command lists all valid save sets for a given job name.
Syntax
list_save_sets -n|-name Name -C -F|-filter Filter -v|-backup_version BackupVersion -B|-repository_backup
Examples
dsc list_save_sets -n job1 -F last_week
dsc list_save_sets -n targetgroup1 -F last_week -B
dsc list_save_sets -n job2 -C
Parameters
n|name Name
The name of the job or target group whose save sets are listed. If -B is not specified, the
name refers to a backup job. If -B is specified, the name refers to a repository target group.
Example: job1
C
[Optional] Lists the correlated save sets that result when you run the skipped objects option
with run_job -I. The save set missing the skipped objects is listed first and the save set
containing the skipped objects is listed second.
The specified job must be part of a correlated pair.
Save sets are not listed if they have expired.
Example: # dsc list_save_sets -n LV500001 -C
F|filter Filter
[Optional] Filter the save sets by the stop_time. Enter one of the following:
• last_week
• last_month
• last_year
Example: last_month
v|backup_version BackupVersion
[Optional] Backup version number. Enter latest or 0 for the latest save set.
Example: latest
B|repository_backup Repository Backup
[Optional] View the backup repository for DSC repository backup save sets.
b|bucket BucketNumber
[Optional] Select a bucket number to display a grouping of data when too many results
return to display at once. The command output notifies you if there are more buckets of
data to display.
Usage Notes
If you specify the -B parameter and no save sets are available, the command builds a job plan from data or
metadata files, if they exist.
If you run list_save_sets -C against a job that has no correlated pairs, the No SaveSets found error is
generated.
XML File Example
This command does not support an XML file.
list_target_group_map
Purpose
The list_target_group_map command lists the maps between target groups for restoring to a different
client configuration.
Syntax
list_target_group_map -s|-source SampSrc -d|-destination SampDest
Example
dsc list_target_group_map -s SampSrc -d SampDest
Parameters
The parameters are optional. However, if you specify one of the parameters, you must also specify the other
parameter.
s|source SampSrc
The source target group represents the backup target group for mapping.
Example: SampSrc
d|destination SampDest
The destination target group represents the restore target group for mapping.
Required if the virtual target group (parameter V) is specified. It indicates the physical target
group to which the virtual target group is mapped.
Example: SampDest
t|type Type
[Optional] The type of the target group map. This parameter is not used if source or
destination is specified. Target group map types are DDBOOST, REMOTE_FILE, and
REMOTE_MEDIA.
V|virtual
[Optional] Indicates this is a mapping from a virtual target group to a physical target group.
XML File Example
This command does not support an XML file.
list_validate_job_metadata
Purpose
The list_validate_job_metadata command lists the information returned from a successful
validate_job_metadata command.
Syntax
list_validate_job_metadata -n|-name jobName -v|-backup_version version
Example
dsc list_validate_job_metadata -n job1 -v latest
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: job1
v|backup_version BackupVersion
[Optional] Backup version number. Enter latest or 0 for the latest save set.
Example: latest
XML File Example
This command does not support an XML file.
object_release
Purpose
The object_release command releases all objects that are currently locked by a job. It does not release
objects for new, running, or queued jobs.
Syntax
object_release -n|-name Name -S|-skip_prompt
Example
dsc object_release -n job1 -S
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: job1
S|Skip_prompt SkipPrompt
[Optional] Skips displaying a confirmation message before performing the command action.
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
This command does not apply to analyze and analyze_read jobs.
XML File Example
This command does not support an XML file.
purge_jobs
Purpose
The purge_jobs command can be used to clean up the DSC repository when resources are not released
after jobs are aborted. The command aborts any jobs and purges the resources used by any incomplete jobs.
When no job name is provided, it purges resources used by all jobs in the BAR and BARBACKUP databases.
Syntax
purge_jobs -n| -name Name -B| -repository_backup -S| -skip_prompt
Example
dsc purge_jobs -n job1 -B -S
dsc purge_jobs -n <job_name> (cleans up the resources used by a particular job)
dsc purge_jobs (cleans up the resources used by all jobs)
Parameters
n|name Name
[Optional] The name of the job whose resources are to be cleaned up. Must be unique
for each job.
Example: job1
B|repository_backup Repository Backup
[Optional] Purges resources used by repository jobs in the BARBACKUP database.
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
S|Skip_prompt SkipPrompt
[Optional] Skips displaying a confirmation message before performing the command action.
Usage Notes
The purge_jobs command is not a replacement for the abort functionality. If a job hangs, abort the
job first. Possible scenarios for using purge_jobs:
• Jobs are getting queued, but DSC shows no jobs are running.
• Job status in DSC shows that a job is not running, but when you try to rerun the job, DSC reports
that the job is running, and you cannot abort it.
XML File Example
This command does not support an XML file.
query_backupids
Purpose
The query_backupids command queries third-party software for information needed for duplication.
Syntax
query_backupids -n|-name Name -v|-backup_version BackupVersion
Example
dsc query_backupids -n MyJob -v LATEST
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: MyJob
v|backup_version BackupVersion
[Optional] Backup version number. Type LATEST or 0 for the latest save set. Defaults to the
latest save set if no version is entered.
Example: LATEST
B|repository_backup Repository Backup
[Optional] Flag to query backupid of repository backup job.
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
XML File Example
This command does not support an XML file.
query_nbu_backupids
Purpose
The query_nbu_backupids command queries NetBackup for information needed for a NetBackup
duplicate.
Syntax
query_nbu_backupids -n|-name Name -v|-backup_version BackupVersion
Example
dsc query_nbu_backupids -n MyJob -v LATEST
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: MyJob
v|backup_version BackupVersion
[Optional] Backup version number. Type LATEST or 0 for latest save set. Defaults to latest if
no version is entered.
Example: LATEST
B|repository_backup Repository Backup
[Optional] Flag to query the NetBackup backup ID of the repository backup job.
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
XML File Example
This command does not support an XML file.
recover_backup_metadata
Purpose
The recover_backup_metadata command queries the third-party media to recover backup metadata
and rebuild the backup job plan in the case of a disaster to the DSC repository. The command can only run
on repository backup jobs with no save sets.
Syntax
recover_backup_metadata -n|-name Name
Example
dsc recover_backup_metadata -n target_group1
Parameters
n|name Name
The name of the target group used for backup of the DSC repository.
Example: target_group1
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
XML File Example
This command does not support an XML file.
retire_job
Purpose
The retire_job command retires an active job. It does not retire a job that is running or queued, or one
that is already in the retired state.
Syntax
retire_job -n|-name Name -S|-skip_prompt
Example
dsc retire_job -n job1 -S
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: job1
S|skip_prompt SkipPrompt
[Optional] Skips displaying a confirmation message before performing the command action.
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
After a job is retired, no status information is returned for the job. The only commands you can use with a
retired job are: delete_job, activate_job, and list_jobs (with a Retired filter).
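A minimal lifecycle sketch follows. This is a hedged illustration: job1 is a placeholder, and the activate_job flags shown are assumptions based on the command conventions in this appendix.

```shell
# Retire the job without a confirmation prompt.
dsc retire_job -n job1 -S

# While retired, the job returns no status and accepts only
# delete_job, activate_job, and filtered list_jobs.

# Reactivate the job when it is needed again (flags assumed).
dsc activate_job -n job1
```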
XML File Example
This command does not support an XML file.
run_job
Purpose
The run_job command runs a job as soon as all necessary resources are available. The DSC system limit is
set at 20 concurrent running jobs, and up to 20 jobs can queue above that limit. The DSC also queues jobs if
the defined target media is not available before the job starts.
Syntax
run_job -n|-name Name -b|-backup_type BackupType -p|-preview -r|-runtime -f|-file File -w|-wait -q|-query_status -u|-user_authentication User
-I|-original_job_execution_ID JobExecutionID
Example
dsc run_job -n job1 -b cumulative -p -f file1.xml
dsc run_job -n job3 -b cumulative -I 13
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: job1
b|backup_type BackupType
Enter the type of backup: full, delta, or cumulative.
Example: delta
p|preview
[Optional] Generates an XML file that lists the job plan and settings. When the -r
parameter is also used, a job plan is generated that includes only systems and media servers
that are online. If the -r parameter is not used, the job plan includes all systems and media
servers, even if they are not online.
r|runtime
[Optional] Checks to see if any of the media servers or systems are down, then generates a
job plan to include only those online media servers or systems. To use the r parameter, you
must also select the -p parameter.
f|file File
[Optional] If you are previewing the job, this is the file path and file name of the output file
to save the job plan.
Example: job1.xml
w|wait
[Optional] Waits until the job has run, then displays a brief status, such as
COMPLETED_ERRORS. You can add the -q parameter for a more detailed status.
q|query_status
[Optional] Returns full status after the job has run. Status includes the percentage of
completion and elapsed time of the job run. To use this parameter, you must also select the
-w parameter.
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
I|original_job_execution_ID OriginalJobExecutionID
[Optional] The job execution ID of the original full backup job. Runs a backup or restore
job with the objects that were skipped in the original full backup job. The new job contains
only the skipped objects. The job execution ID must be from:
• A job that completed with errors or warnings
• A job with skipped objects
• The job specified with the -n option
The save set containing the skipped objects is not a base for delta or cumulative backup jobs;
that is, the next backup for the job must be a full backup.
To obtain a job execution ID, you can use the list_job_history command.
Notice:
The run_job command with the -I option is not intended to be mixed with
incremental backup job executions (delta or cumulative). Running incremental
operations after a backup that uses the -I option results in incremental backups that
cannot be restored or that result in an incomplete restore, including loss of data in the
restored objects.
Note:
If you run the backup job using the -I option, but the job completes with errors, you can
use the original save set and run the job again with the -I option. The save set that
results from running the job again includes the objects that were skipped in the original
execution that completed with errors. The newly-generated save sets, together with the
original save sets, are correlated and are required when restoring any object defined in
the backup job definition.
Example: # dsc run_job -n LV500001 -I 13
Usage Notes
The run_job command cannot be used successfully for a retired job.
XML File Example
This command does not require an XML file as input. You must supply a file name and location to which
the XML file results are exported as output.
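The preview and wait options described above can be combined as in the following hedged sketch; job1 and plan.xml are placeholders.

```shell
# Preview only online systems and media servers (-r requires -p) and
# export the resulting job plan to an XML file.
dsc run_job -n job1 -b full -p -r -f plan.xml

# Run the job, wait for completion, and print a detailed final status
# (-q requires -w).
dsc run_job -n job1 -b full -w -q
```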
run_repository_job
Purpose
The run_repository_job command runs a job in the DSC repository.
The run_repository_job command can only be initiated if no operational jobs are running. The -v and
-n parameters are not supported for backup jobs, and are optional for restore and
analyze jobs. If -v is not entered, it defaults to the latest successful backup job version. You do not need to
specify a target name for restore or analyze jobs; if you do not, the command uses the target name
configured by config_repository_backup.
Syntax
run_repository_job -t|-type Type
run_repository_job -t|-type Type -v|-backup_version BackupVersion
run_repository_job -t|-type Type -v|-backup_version BackupVersion -n|-target_name TargetName
Examples
dsc run_repository_job -t backup
dsc run_repository_job -t restore -v
dsc run_repository_job -t restore -v -n target1
Parameters
t|type Type
Enter backup, restore, analyze_validate, or analyze_read.
v|backup_version BackupVersion
[Optional] Backup version number. For restore, analyze_read, and analyze_validate jobs
only. Enter latest or 0 for the latest save set.
Example: 0
n|target_name Target Name
[Optional] The target name for the restore or analyze job. If not specified, the restore will be
to the default target group set by the config_repository_backup command.
Example: target1
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
This command automatically generates backup, restore, and analyze jobs based on the repository backup
target group configuration the user created. If you use the run_repository_job command to restore the
DSC repository, perform a tpareset after the restore repository job has successfully completed.
XML File Example
This command does not support an XML file.
set_status_rate
Purpose
The set_status_rate command configures the status update rate between DSC and the media servers or
Teradata systems.
Syntax
set_status_rate -n|-name Name -t|-type Type -r|-rate Rate
Example
dsc set_status_rate -n mediaserver1 -t system -r 15
Parameters
n|name Name
The name of the database system or media server.
Example: mediaserver1
t|type Type
The type of BAR components to update. Enter system or media_server.
Example: SYSTEM
r|rate Rate
The rate of the status refresh in seconds. The default is 30 seconds. The valid range is
between 30 and 60 seconds.
Example: 45
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
XML File Example
This command does not support an XML file.
system_health
Purpose
The system_health command lists ActiveMQ system health information such as the memory limit and
memory usage for the main DSA queues.
Syntax
system_health
Example
dsc system_health
Parameters
There are no parameters associated with this command.
Usage Notes
The command can be used to troubleshoot messaging communication over ActiveMQ. If the memory used
in a queue is close to 100%:
• Edit /opt/teradata/tdactivemq/config/td-broker.xml to replace
deleteAllMessagesOnStartup="false" with deleteAllMessagesOnStartup="true".
• Restart the ActiveMQ broker.
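The remediation steps above can be sketched as follows. This is a hedged sketch: the sed edit mirrors the change described in the bullets, but the broker restart command is an assumption and may differ by installation.

```shell
# Path to the ActiveMQ broker configuration named above.
CONF=/opt/teradata/tdactivemq/config/td-broker.xml

# Tell the broker to discard persisted messages on its next start.
sed -i 's/deleteAllMessagesOnStartup="false"/deleteAllMessagesOnStartup="true"/' "$CONF"

# Restart the broker so the setting takes effect (service name assumed).
service tdactivemq restart

# Confirm queue memory usage has dropped.
dsc system_health
```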
XML File Example
This command does not support an XML file.
sync_save_sets
Purpose
The sync_save_sets command sends a sync request to all NetBackup clients that have save sets older
than the dataset.retention.days value configured in dsc.properties. If the save sets are expired on the
NetBackup side, DSC deletes them from the DSC repository. No scheduled or ad hoc jobs can run until the
deletion completes.
The sync_save_sets user command is a manual alternative to using the checkretention.cronstring
property in dsc.properties to schedule an automatic task.
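The retention settings that both the manual command and the scheduled task rely on can be inspected directly; the dsc.properties path below is an assumption and varies by installation.

```shell
# Hedged sketch: show the retention threshold and the cron schedule for the
# automatic check-retention task (file path is an assumption).
grep -E 'dataset\.retention\.days|checkretention\.cronstring' \
    /opt/teradata/dsa/dsc.properties
```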
Syntax
sync_save_sets -S|-skip_prompt
Example
dsc sync_save_sets -S
Parameters
S|skip_prompt SkipPrompt
[Optional] Skips displaying a confirmation message before performing the command action.
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
Running this command removes information in DSC about save sets that previously existed on the backup
application. This includes removing restore and analyze jobs that reference each save set. Restore and
analyze jobs that point to an expired save set and are not set to the latest save set are deleted along with the
expired save set. If the restore or analyze job points to the expired save set and is set to the latest save set, the
restore or analyze job fails to run until the attached backup job is run again and creates a new save set to
which the restore or analyze job points. If you do not run a job for a long time and its save sets have been
deleted by NetBackup, running this command could cause the job status to be changed to New.
If a save set with dependent save sets is expired, the dependent save sets are invalidated. Invalid save sets are
not available for use in creating or editing jobs. Any jobs referencing invalid save sets are rejected at run
time.
Notice:
If the physical media of the backup save set becomes damaged, neither the automatic check retention
job nor the sync_save_sets command should be run until the media is replaced or repaired. If the
physical media cannot be repaired, use sync_save_sets to mark the save set so that any run depending
on this save set is invalidated.
XML File Example
This command does not support an XML file.
update_job
Purpose
The update_job command updates an existing DSA job based on the information from the command line
parameters or the XML file if provided. Parameter values specified in the command line supersede values
entered for the same parameters in the XML file.
Syntax
update_job -n|-name Name [parameters|File]
Example
dsc update_job -n job1 -f parameters.xml
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: job1
d|description Description
[Optional] A meaningful description of the job. To allow a multi-word description, add \"
before and after the description string.
Example: \"backup web apps\"
t|type Type
[Optional] The type of job. Enter one of the following:
• BACKUP
• BACKUP_LOGICAL
• BACKUP_PHYSICAL
• RESTORE
• RESTORE_LOGICAL
• RESTORE_PHYSICAL
• ANALYZE_READ
• ANALYZE_VALIDATE
Example: backup
o|owner Owner
[This parameter is deprecated after release 15.00. You will receive a warning message if you
use this parameter.] The owner of the job. Job ownership is used to determine the
appropriate privileges given to DSA users.
Example: user
b|backup_name BackupName
[Optional] An existing backup job name. For restore, analyze_read, and analyze_validate
jobs only.
Example: backupWeb1
v|backup_version BackupVersion
[Optional] Backup version number. For restore, analyze_read, and analyze_validate jobs
only. Enter latest or 0 for the latest save set.
Example: 60
f|file File
The full file path and file name of the file containing the necessary parameters to create the
job. If the same parameters are provided both in the file and on the command line, Teradata
DSA uses the values specified in the command line.
Example: backupjob1.xml
u|user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint
user, and triggers a password prompt for authentication.
Usage Notes
You cannot update a retired job.
XML File Example
This command imports information from the XML file created from the export_job command.
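A typical round trip is sketched below. This is a hedged illustration: job1 and /tmp/job1.xml are placeholders, and the export_job flags shown are assumptions based on the command conventions in this appendix.

```shell
# Export the current definition of job1 to an XML file (flags assumed).
dsc export_job -n job1 -f /tmp/job1.xml

# ... edit /tmp/job1.xml as needed ...

# Apply the edited definition; any parameters also given on the command
# line would supersede the values in the XML file.
dsc update_job -n job1 -f /tmp/job1.xml
```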
validate_job_metadata
Purpose
The validate_job_metadata command queries NetBackup for information needed to validate the save
set.
The validate_job_metadata command should be run after a backup job has been successfully imported
and the old target group configuration has been mapped to a physical target group.
Syntax
validate_job_metadata -n|-name Name -v|-backup_version BackupVersion -V|-virtual -d|-destination Destination
Example
dsc validate_job_metadata -n job1 -v latest -V -d SampDest
Parameters
n|name Name
The name of the job on which to perform the action. Must be unique for each job.
Example: job1
v|backup_version BackupVersion
[Optional] Backup version number. Enter latest or 0 for the latest save set.
Example: latest
V|virtual
[Optional] Indicates this is a mapping from a virtual target group to a physical target group.
d|destination Destination
Required if the virtual target group (-V parameter) is specified. Indicates the physical
target group to which the virtual target group is mapped.
Example: SampDest
XML File Example
This command does not support an XML file.