Chapter 28
Migration and Copy
The term Migration, as used in the context of Bacula, means moving data from one Volume to another. In particular it refers to a Job (similar to a backup job) that reads data that was previously backed up to a
Volume and writes it to another Volume. As part of this process, the File catalog records associated with the first backup job are purged. In other words, Migration moves Bacula Job data from one Volume to another by reading the Job data from the Volume it is stored on, writing it to a different Volume in a different Pool, and then purging the database records for the first Job.
The Copy process is essentially identical to the Migration feature except that the Job that is copied is left unchanged. This creates two identical copies of the same backup. However, the copy is treated as a copy rather than as a backup job, and hence is not directly available for restore. If Bacula finds a copy when a Job record is purged (deleted) from the catalog, it will promote the copy to a real backup and make it available for automatic restore.
The Copy and the Migration jobs run without using the File daemon by copying the data from the old backup Volume to a different Volume in a different Pool.
The selection process that determines which Job or Jobs are migrated can be based on a number of different criteria, such as:
• a single previous Job
• a Volume
• a Client
• a regular expression matching a Job, Volume, or Client name
• the time a Job has been on a Volume
• high and low water marks (usage or occupation) of a Pool
• Volume size
The details of these selection criteria will be defined below.
To run a Migration job, you must first define a Job resource very similar to a Backup Job but with Type =
Migrate instead of Type = Backup. One of the key points to remember is that the Pool that is specified for the migration job is the only pool from which jobs will be migrated, with one exception noted below. In addition, the Pool to which the selected Job or Jobs will be migrated is defined by the Next Pool = ... in the Pool resource specified for the Migration Job.
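The relationship between these directives can be sketched as follows; the resource names here are illustrative only, not taken from a real configuration:

Job {
Name = "MigrateExample"          # hypothetical Job name
Type = Migrate
Pool = SourcePool                # the only Pool examined for Jobs to migrate
Selection Type = Volume
Selection Pattern = ".*"         # a regular expression; the selection types are described below
Client = example-fd              # required but ignored; values come from the original Job
FileSet = "Full Set"             # required but ignored
Level = Full                     # required but ignored
Messages = Standard
}
Pool {
Name = SourcePool
Pool Type = Backup
Storage = SourceStorage          # hypothetical Storage resource
Next Pool = DestinationPool      # where the migrated data will be written
}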
Bacula permits Pools to contain Volumes with different Media Types. However, when doing migration, this is a very undesirable condition. For migration to work properly, you should use Pools containing only
Volumes of the same Media Type for all migration jobs.
Bacula Version 5.0.3
The migration job normally is either manually started or starts from a Schedule much like a backup job. It searches for a previous backup Job or Jobs that match the parameters you have specified in the migration
Job resource, primarily a Selection Type (detailed a bit later). Then for each previous backup JobId found, the Migration Job will run a new Job which copies the old Job data from the previous Volume to a new Volume in the Migration Pool. It is possible that no prior Jobs are found for migration, in which case, the Migration job will simply terminate having done nothing, but normally at a minimum, three jobs are involved during a migration:
• The currently running Migration control Job. This is only a control job for starting the migration child jobs.
• The previous Backup Job (already run). The File records for this Job are purged if the Migration job successfully terminates. The original data remains on the Volume until it is recycled and rewritten.
• A new Migration Backup Job that moves the data from the previous Backup job to the new Volume.
If you subsequently do a restore, the data will be read from this Job.
If the Migration control job finds a number of JobIds to migrate (e.g. it is asked to migrate one or more
Volumes), it will start one new migration backup job for each JobId found on the specified Volumes. Please note that Migration does not scale well, since Migrations are done on a Job by Job basis. Thus, if you select a very large Volume or a number of Volumes for migration, a large number of Jobs may start.
Because each job must read the same Volume, they will run consecutively (not simultaneously).
28.1 Migration and Copy Job Resource Directives
The following directives can appear in a Director’s Job resource, and they are used to define a Migration job.
Pool = <Pool-name> The Pool specified in the Migration control Job is not a new directive for the Job resource, but it is particularly important because it determines what Pool will be examined for finding
JobIds to migrate. The exception to this is when Selection Type = SQLQuery, and although a
Pool directive must still be specified, no Pool is used unless you specifically include one in the SQL query. Note that in any case, the Pool resource defined by the Pool directive must contain a Next Pool
= ... directive to define the Pool to which the data will be migrated.
Type = Migrate Migrate is a new type that defines the job that is run as being a Migration Job. A
Migration Job is a sort of control job; it does not have any Files associated with it and in that sense is much like an Admin job. Migration jobs simply check to see if there is anything to migrate, then possibly start and control new Backup jobs to migrate the data from the specified Pool to another Pool. Note that any original JobId that is migrated will be marked as having been migrated, and the original JobId can no longer be used for restores; all restores will be done from the new migrated Job.
Type = Copy Copy is a new type that defines the job that is run as being a Copy Job. A Copy Job is a sort of control job; it does not have any Files associated with it and in that sense is much like an Admin job. Copy jobs simply check to see if there is anything to copy, then possibly start and control new Backup jobs to copy the data from the specified Pool to another Pool. Note that when a copy is made, the original JobIds are left unchanged. The new copies cannot be used for restoration unless you specifically choose them by JobId. If you subsequently delete a JobId that has a copy, the copy will be automatically upgraded to a Backup rather than a Copy, and it will then be used for restoration.
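As an illustrative sketch (the Client and FileSet names are taken from the examples later in this chapter; the Job name is hypothetical), a Copy Job that copies every not-yet-copied Job from the Default Pool might look like:

Job {
Name = "copy-uncopied"              # hypothetical Job name
Type = Copy
Pool = Default
Selection Type = PoolUncopiedJobs   # described below
Client = rufus-fd                   # required but ignored
FileSet = "Full Set"                # required but ignored
Level = Full                        # required but ignored
Messages = Standard
}

The Next Pool directive in the Default Pool resource determines where the copies are written, exactly as for Migration.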
Selection Type = <Selection-type-keyword> The <Selection-type-keyword> determines how the migration job will go about selecting what JobIds to migrate. In most cases, it is used in conjunction with a Selection Pattern to give you fine control over exactly what JobIds are selected. The possible values for <Selection-type-keyword> are:
SmallestVolume This selection keyword selects the volume with the fewest bytes from the Pool to be migrated. The Pool to be migrated is the Pool defined in the Migration Job resource. The migration control job will then start and run one migration backup job for each of the Jobs found on this Volume. The Selection Pattern, if specified, is not used.
OldestVolume This selection keyword selects the volume with the oldest last write time in the Pool to be migrated. The Pool to be migrated is the Pool defined in the Migration Job resource. The migration control job will then start and run one migration backup job for each of the Jobs found on this Volume. The Selection Pattern, if specified, is not used.
Client The Client selection type, first selects all the Clients that have been backed up in the Pool specified by the Migration Job resource, then it applies the Selection Pattern (defined below) as a regular expression to the list of Client names, giving a filtered Client name list. All jobs that were backed up for those filtered (regexed) Clients will be migrated. The migration control job will then start and run one migration backup job for each of the JobIds found for those filtered
Clients.
Volume The Volume selection type, first selects all the Volumes that have been backed up in the Pool specified by the Migration Job resource, then it applies the Selection Pattern (defined below) as a regular expression to the list of Volume names, giving a filtered Volume list. All JobIds that were backed up for those filtered (regexed) Volumes will be migrated. The migration control job will then start and run one migration backup job for each of the JobIds found on those filtered
Volumes.
Job The Job selection type, first selects all the Jobs (as defined on the Name directive in a Job resource) that have been backed up in the Pool specified by the Migration Job resource, then it applies the Selection Pattern (defined below) as a regular expression to the list of Job names, giving a filtered Job name list. All JobIds that were run for those filtered (regexed) Job names will be migrated. Note that for a given Job name, there may be many Jobs (JobIds) that ran. The migration control job will then start and run one migration backup job for each of the JobIds found.
SQLQuery The SQLQuery selection type, uses the Selection Pattern as an SQL query to obtain the JobIds to be migrated. The Selection Pattern must be a valid SELECT SQL statement for your SQL engine, and it must return the JobId as the first field of the SELECT.
PoolOccupancy This selection type will cause the Migration job to compute the total size of the specified Pool for all Media Types combined. If it exceeds the Migration High Bytes defined in the Pool, the Migration job will migrate all JobIds, beginning with the oldest Volume in the Pool (determined by Last Write time), until the Pool bytes drop below the Migration Low Bytes defined in the Pool. This calculation should be considered approximate, because it is made once by the Migration job before migration begins, and thus does not take into account additional data written into the Pool during the migration. In addition, the calculation of the total Pool byte size is based on the Volume bytes saved in the Volume (Media) database entries.
The byte count calculated for Migration is based on the values stored in the Job records of the Jobs to be migrated. These do not include the Storage daemon overhead that is included in the total Pool size. As a consequence, the migration will normally migrate more bytes than strictly necessary.
PoolTime The PoolTime selection type will cause the Migration job to look at the time each JobId has been in the Pool since the job ended. All Jobs that have been in the Pool longer than the time specified by the Migration Time directive in the Pool resource will be migrated.
PoolUncopiedJobs This selection type, which is available only for Copy Jobs, copies all Jobs from a Pool to another Pool that have not been copied before.
Selection Pattern = <Quoted-string> The Selection Patterns permitted for each Selection-type-keyword are described above.
For the OldestVolume and SmallestVolume, this Selection pattern is not used (ignored).
For the Client, Volume, and Job keywords, this pattern must be a valid regular expression that will filter the appropriate item names found in the Pool.
For the SQLQuery keyword, this pattern must be a valid SELECT SQL statement that returns JobIds.
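For the SQLQuery case, a sketch might look like the following; this assumes the standard Bacula catalog schema, in which the Job table carries JobId, Type, and PoolId, and the Pool table carries the Pool Name:

Selection Type = SQLQuery
Selection Pattern = "SELECT DISTINCT Job.JobId FROM Job, Pool WHERE Pool.Name = 'Default' AND Pool.PoolId = Job.PoolId AND Job.Type = 'B' ORDER BY Job.JobId"

The JobId must be the first (and here the only) field returned by the SELECT.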
28.2 Migration Pool Resource Directives
The following directives can appear in a Director’s Pool resource, and they are used to define a Migration job.
Migration Time = <time-specification> If a PoolTime migration is done, the time specified here in seconds (time modifiers are permitted, e.g. hours, days, ...) will be used. If the previous Backup Job or Jobs selected have been in the Pool longer than the specified time, they will be migrated.
Migration High Bytes = <byte-specification> This directive specifies the number of bytes in the Pool which will trigger a migration if a PoolOccupancy migration selection type has been specified. The fact that the Pool usage goes above this level does not automatically trigger a migration job. However, if a migration job runs and has the PoolOccupancy selection type set, the Migration High Bytes will be applied. Bacula does not currently restrict a pool to have only a single Media Type, so you must keep in mind that if you mix Media Types in a Pool, the results may not be what you want, as the
Pool count of all bytes will be for all Media Types combined.
Migration Low Bytes = <byte-specification> This directive specifies the number of bytes in the Pool which will stop a migration if a PoolOccupancy migration selection type has been specified and triggered by more than Migration High Bytes being in the pool. In other words, once a migration job is started with PoolOccupancy migration selection and it determines that there are more than
Migration High Bytes, the migration job will continue to run jobs until the number of bytes in the
Pool drops to or below Migration Low Bytes.
Next Pool = <pool-specification> The Next Pool directive specifies the pool to which Jobs will be migrated. This directive is required to define the Pool into which the data will be migrated. Without this directive, the migration job will terminate in error.
Storage = <storage-specification> The Storage directive specifies what Storage resource will be used for all Jobs that use this Pool. It takes precedence over any other Storage specifications that may have been given such as in the Schedule Run directive, or in the Job resource. We highly recommend that you define the Storage resource to be used in the Pool rather than elsewhere (job, schedule run, ...).
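Combining the directives above, a source Pool set up for occupancy-based migration might be sketched as follows; the resource names and sizes are illustrative only:

Pool {
Name = FilePool                  # hypothetical source Pool
Pool Type = Backup
Storage = FileStorage            # hypothetical Storage resource
Next Pool = TapePool             # destination for migrated data
Migration High Bytes = 400G      # occupancy level that triggers migration
Migration Low Bytes = 300G       # migration continues until the Pool is below this
Migration Time = 31 days         # used only by the PoolTime selection type
}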
28.3 Important Migration Considerations
• Each Pool into which you migrate Jobs or Volumes must contain Volumes of only one Media Type.
• Migration takes place on a JobId by JobId basis. That is each JobId is migrated in its entirety and independently of other JobIds. Once the Job is migrated, it will be on the new medium in the new Pool, but for the most part, aside from having a new JobId, it will appear with all the same characteristics of the original job (start, end time, ...). The column RealEndTime in the catalog Job table will contain the time and date that the Migration terminated, and by comparing it with the EndTime column you can tell whether or not the job was migrated. The original job is purged of its File records, and its
Type field is changed from "B" to "M" to indicate that the job was migrated.
• Jobs on Volumes will be migrated only if the Volume is marked Full, Used, or Error. Volumes that are still marked Append will not be considered for migration. This prevents Bacula from attempting to read the Volume at the same time it is writing it. It also reduces other deadlock situations, and avoids the problem of migrating a Volume and later finding new files appended to that Volume.
• As noted above, for the Migration High Bytes, the calculation of the bytes to migrate is somewhat approximate.
• If you keep Volumes of different Media Types in the same Pool, it is not clear how well migration will work. We recommend only one Media Type per pool.
• It is possible to get into a resource deadlock where Bacula does not find enough drives to simultaneously read and write all the Volumes needed to do Migrations. For the moment, you must take care, as not all the resource deadlock algorithms are yet implemented.
• Migration is done only when you run a Migration job. If you set Migration High Bytes and that number of bytes is exceeded in the Pool, no migration job will automatically start. You must schedule the migration jobs, and they must run for any migration to take place.
• If you migrate a number of Volumes, a very large number of Migration jobs may start.
• Figuring out what jobs will actually be migrated can be a bit complicated due to the flexibility provided by the regex patterns and the number of different options. Turning on a debug level of 100 or more will provide a limited amount of debug information about the migration selection process.
• Bacula currently does only minimal Storage conflict resolution, so you must take care to ensure that you don’t try to read and write to the same device or Bacula may block waiting to reserve a drive that it will never find. In general, ensure that all your migration pools contain only one Media Type, and that you always migrate to pools with different Media Types.
• The Next Pool = ... directive must be defined in the Pool referenced in the Migration Job to define the Pool into which the data will be migrated.
• Pay particular attention to the fact that data is migrated on a Job by Job basis, and for any particular
Volume, only one Job can read that Volume at a time (no simultaneous reads), so migration jobs that all reference the same Volume will run sequentially. This can be a potential bottleneck and does not scale very well to large numbers of jobs.
• Only migration of Selection Types of Job and Volume have been carefully tested. All the other migration methods (time, occupancy, smallest, oldest, ...) need additional testing.
• Migration is only implemented for a single Storage daemon. You cannot read on one Storage daemon and write on another.
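The debug output mentioned in the considerations above can be enabled from bconsole before running the migration job, for example:

setdebug level=100 dir

This raises the Director's debug level so that the migration selection process is traced in its output.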
28.4 Example Migration Jobs
When you specify a Migration Job, you must specify all the standard directives as for a Job. However, certain directives, such as Level, Client, and FileSet, though they must be defined, are ignored by the Migration job because the values from the original Job are used instead.
As an example, suppose you have the following Job that you run every night. Note that there is no Storage directive in the Job resource; there is a Storage directive in each of the Pool resources; and the Pool to be migrated (File) contains a Next Pool directive that defines the output Pool (where the data is written by the migration job).
# Define the backup Job
Job {
Name = "NightlySave"
Type = Backup
Level = Incremental
Client=rufus-fd
FileSet="Full Set"
Schedule = "WeeklyCycle"
Messages = Standard
Pool = Default
}
# Default pool definition
Pool {
Name = Default
Pool Type = Backup
AutoPrune = yes
Recycle = yes
Next Pool = Tape
Storage = File
LabelFormat = "File"
}
# Tape pool definition
Pool {
Name = Tape
Pool Type = Backup
AutoPrune = yes
Recycle = yes
Storage = DLTDrive
}
# Definition of File storage device
Storage {
Name = File
Address = rufus
Password = "ccV3lVTsQRsdIUGyab0N4sMDavui2hOBkmpBU0aQKOr9"
Device = "File" # same as Device in Storage daemon
Media Type = File # same as MediaType in Storage daemon
}
# Definition of DLT tape storage device
Storage {
Name = DLTDrive
Address = rufus
Password = "ccV3lVTsQRsdIUGyab0N4sMDavui2hOBkmpBU0aQKOr9"
Device = "HP DLT 80" # same as Device in Storage daemon
Media Type = DLT8000 # same as MediaType in Storage daemon
}
Only the essential information is included here; the Director, FileSet, Catalog, Client, Schedule, and Messages resources are omitted.
As you can see, by running the NightlySave Job, the data will be backed up to File storage using the Default pool to specify the Storage as File.
Now suppose we add the following Job resource to this conf file:
Job {
Name = "migrate-volume"
Type = Migrate
Level = Full
Client = rufus-fd
FileSet = "Full Set"
Messages = Standard
Pool = Default
Maximum Concurrent Jobs = 4
Selection Type = Volume
Selection Pattern = "File"
}

If we then run the job named migrate-volume, all volumes in the Pool named Default (as specified in the migrate-volume Job) that match the regular expression pattern File will be migrated to tape storage DLTDrive, because the Next Pool in the Default Pool specifies that Migrations should go to the pool named Tape, which uses Storage DLTDrive.
If instead, we use a Job resource as follows:
Job {
Name = "migrate"
Type = Migrate
Level = Full
Client = rufus-fd
FileSet="Full Set"
Messages = Standard
Pool = Default
Maximum Concurrent Jobs = 4
Selection Type = Job
Selection Pattern = ".*Save"
}
All jobs ending with the name Save will be migrated from the Default Pool to the Tape Pool, that is, from File storage to Tape storage.