
CHAPTER 13 ■ DATA PUMP


Note
The PARALLEL feature is available only in the Enterprise Edition of Oracle Database.

Specifying Additional Dump Files

If you run out of space in the primary dump-file location, then you can specify additional dump-file locations on the fly. Use the ADD_FILE command from the interactive command prompt. Here’s the basic syntax for adding additional files:

ADD_FILE=[directory_object:]file_name [,...]

This example adds another output file to an already existing Data Pump export job:

Export> add_file=alt2.dmp

You can also specify a separate database-directory object:

Export> add_file=alt_dir:alt3.dmp
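You don't have to plan for this when the export starts. Pressing Ctrl+C in the session that started the export drops you to the interactive prompt, and if you've disconnected entirely, you can attach to the job from a new session and add files there. Here's a minimal sketch; the job name SYS_EXPORT_SCHEMA_01 is just a placeholder, so substitute the job name reported by your own export:

$ expdp heera/foo attach=SYS_EXPORT_SCHEMA_01
Export> add_file=alt_dir:alt4.dmp
Export> continue_client

CONTINUE_CLIENT simply returns the session to logging mode; the added file takes effect immediately.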

Reusing Output File Names

By default, Data Pump doesn’t overwrite an existing dump file. For example, the first time you run this job, it will run fine because there is no dump file named inv.dmp in the directory being used:

$ expdp heera/foo directory=dk dumpfile=inv.dmp

If you attempt to run the previous command again with the same directory and the same dump-file name, this error is thrown:

ORA-39000: bad dump file specification

ORA-31641: unable to create dump file "/home/oracle/dp/inv.dmp"

ORA-27038: created file already exists

You can either specify a new dump-file name for the export job or use the REUSE_DUMPFILES parameter to direct Data Pump to overwrite an existing dump file. For example:

$ expdp heera/foo directory=dk dumpfile=inv.dmp reuse_dumpfiles=y

You can now run the Data Pump export regardless of whether a dump file with the same name already exists in the output directory. When REUSE_DUMPFILES is set to y and Data Pump finds a dump file with the same name, it overwrites the file.


Note
The default value for REUSE_DUMPFILES is n. The REUSE_DUMPFILES parameter is available only in Oracle Database 11g or higher.


Creating a Daily DDL File

Sometimes, in database environments, changes occur to database objects in unexpected ways. You may have a developer who somehow obtains the production user passwords and decides to make a change on the fly without telling anybody. Or a DBA may decide not to follow the standard release process and make a change to an object while troubleshooting an issue. These scenarios can be frustrating for production-support DBAs. Whenever there’s an issue, the first question raised is, “What changed?”

When you use Data Pump, it’s fairly simple to create a file that contains all the DDL to re-create every object in your database. You can instruct Data Pump to export or import just the metadata via the CONTENT=METADATA_ONLY option.

For instance, in a production environment you can set up a daily job to capture this DDL. If there is ever a question about what changed and when, you can go back and compare the DDL in the daily dump files.

Listed next is a simple shell script that first exports the metadata content from the database and then uses Data Pump import to create a DDL file from that export:

#!/bin/bash
# source OS variables
. /var/opt/oracle/oraset DWREP
#
DAY=$(date +%Y_%m_%d)
SID=DWREP
#---------------------------------------------------
# First create export dump file with metadata only
expdp darl/foo dumpfile=${SID}.${DAY}.dmp content=metadata_only \
directory=dwrep_dp full=y logfile=${SID}.${DAY}.log
#---------------------------------------------------
# Now create DDL file from the export dump file.
impdp darl/foo directory=dwrep_dp dumpfile=${SID}.${DAY}.dmp \
SQLFILE=${SID}.${DAY}.sql logfile=${SID}.${DAY}.sql.log
#
exit 0

This code listing depends on a database-directory object being created that points to where you want the daily dump file to be written. You may also want to set up another job that periodically deletes any files older than a certain amount of time.
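For the cleanup job, a nightly cron job that calls find works well. The following is only a sketch: the /oradump/ddl path and the 14-day retention are assumptions, so point it at the directory your database-directory object references and use whatever retention your site requires:

#!/bin/bash
# Purge daily DDL exports (dump, SQL, and log files) older than 14 days.
# Schedule this script from cron, for example once a day.
find /oradump/ddl -type f \( -name "*.dmp" -o -name "*.sql" -o -name "*.log" \) \
  -mtime +14 -exec rm -f {} \;
exit 0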

Compressing Output

When you use Data Pump to create large files, you should consider compressing the output. As of Oracle Database 11g, the COMPRESSION parameter can be one of the following values: ALL, DATA_ONLY, METADATA_ONLY, or NONE. If you specify ALL, then both data and metadata are compressed in the output.

This example exports one table and compresses both the data and metadata in the output file:

$ expdp dbauser/foo tables=locations directory=datapump \
dumpfile=compress.dmp compression=all

If you’re using Oracle Database 10g, then the COMPRESSION parameter has only the METADATA_ONLY and NONE values.
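For example, a metadata-only compressed export looks like this (this sketch simply reuses the user, table, and directory names from the previous example):

$ expdp dbauser/foo tables=locations directory=datapump \
dumpfile=compress_meta.dmp compression=metadata_only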



Note
The ALL and DATA_ONLY options of the COMPRESSION parameter require a license for the Oracle Advanced Compression option.

Encrypting Data

One potential security issue with Data Pump dump files is that anybody with operating-system access to the output file can search for strings in the file. On Linux/Unix systems, you can do this with the strings command:

$ strings inv.dmp | grep -i secret

Here’s the output for this particular dump file:

Secret Data<
top secret data<
corporate secret data<

This command allows you to view the contents of the dump file because the data is in regular text and not encrypted. If you require that the data be secured, you can use Data Pump’s encryption features.

Data Pump lets you easily encrypt the output of a dump file. This example uses the ENCRYPTION parameter to secure all data and metadata in the output:

$ expdp darl/foo encryption=all directory=dp dumpfile=inv.dmp

For this command to work, your database must have an encryption wallet in place and open. See the Oracle Advanced Security Guide (available on OTN) for more details about how to create and open a wallet.
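Before running an encrypted export, you can quickly confirm the wallet is open by querying the V$ENCRYPTION_WALLET view (this assumes you're connected as a user with access to the dynamic performance views); a STATUS of OPEN means the wallet is available:

select wrl_type, wrl_parameter, status
from v$encryption_wallet;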


Note
The Data Pump ENCRYPTION parameter requires that you use the Enterprise Edition of Oracle Database 11g or higher and also requires a license for the Oracle Advanced Security option.

The ENCRYPTION parameter takes the following options:

• ALL

• DATA_ONLY

• ENCRYPTED_COLUMNS_ONLY

• METADATA_ONLY

• NONE

The ALL option enables encryption for both data and metadata. The DATA_ONLY option encrypts just the data. The ENCRYPTED_COLUMNS_ONLY option specifies that only columns encrypted in the database are written to the dump file in an encrypted format. The METADATA_ONLY option encrypts just metadata in the export file.
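If you'd rather not depend on a wallet, Data Pump also supports password-based encryption through the ENCRYPTION_MODE and ENCRYPTION_PASSWORD parameters. Here's a minimal sketch; the password is just a placeholder, and you must supply the same password when importing the dump file (the licensing note earlier in this section still applies):

$ expdp darl/foo directory=dp dumpfile=inv.dmp \
encryption=all encryption_mode=password encryption_password=MySecretPw1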


Monitoring Data Pump Jobs

When you have long-running Data Pump jobs, you should occasionally check the status of the job to make sure it hasn’t failed, become suspended, and so on. There are several ways to monitor the status of Data Pump jobs:

• Screen output

• Data Pump log file

• Database alert log

• Querying the status table

• Querying data-dictionary views

• Interactive command-mode status

• Operating-system utilities such as ps (process status)

The most obvious way to monitor a job is to view the status that Data Pump displays on the screen as the job is running. If you’ve disconnected from the command mode, then the status is no longer displayed on your screen. In this situation, you must use another technique to monitor a Data Pump job.

Data Pump Log File

By default, Data Pump generates a log file for every job. When you start a Data Pump job, it’s a good practice to name a log file that is specific for that job:

$ impdp darl/foo directory=dp dumpfile=archive.dmp logfile=archive.log

This job creates a file named archive.log, which is placed in the directory referenced by the database-directory object DP. If you don’t explicitly name a log file, Data Pump import creates one named import.log, and Data Pump export creates one named export.log.


Note
The log file contains the same information you see displayed interactively on your screen when running a Data Pump job.

Data-Dictionary Views

A quick way to determine whether a Data Pump job is running is to check the DBA_DATAPUMP_JOBS view for any job with a STATE of EXECUTING:

select
 job_name
,operation
,job_mode
,state
from dba_datapump_jobs;


Here’s some sample output:

JOB_NAME OPERATION JOB_MODE STATE
------------------------- -------------------- ---------- ---------------
SYS_IMPORT_TABLE_04 IMPORT TABLE EXECUTING
SYS_IMPORT_FULL_02 IMPORT FULL NOT RUNNING

You can also query the DBA_DATAPUMP_SESSIONS view for session information via the following query:

select
 sid
,serial#
,username
,process
,program
from v$session s,
     dba_datapump_sessions d
where s.saddr = d.saddr;

Here’s some sample output showing that several Data Pump sessions are in use:

SID SERIAL# USERNAME PROCESS PROGRAM
---------- ---------- -------------------- --------------- ----------------------
1049 6451 STAGING 11306 oracle@xengdb (DM00)
1058 33126 STAGING 11338 oracle@xengdb (DW01)
1048 50508 STAGING 11396 oracle@xengdb (DW02)

Database Alert Log

If a job is taking much longer than you expected, look in the database alert log for any messages similar to this:

statement in resumable session 'SYS_IMPORT_SCHEMA_02.1' was suspended due to ORA-01652: unable to extend temp segment by 64 in tablespace REG_TBSP_3

This indicates that a Data Pump import job is suspended and is waiting for space to be added to the REG_TBSP_3 tablespace. After you add space to the tablespace, the Data Pump job automatically resumes processing. By default, a Data Pump job waits two hours for space to be added.
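You can also check for suspended statements from SQL*Plus by querying the DBA_RESUMABLE view; this is a quick sketch that assumes you're connected as a user with DBA privileges:

select name, status, error_msg
from dba_resumable
where status = 'SUSPENDED';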


Note
In addition to writing to the alert log, for each Data Pump job, Oracle creates a trace file in the ADR_HOME/trace directory. This file contains information such as the session ID and when the job started. The trace file is named with the format <SID>_dm00_<process_ID>.trc.
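To locate that trace file quickly, something like the following works; this assumes an ADR_HOME environment variable pointing at your database's ADR home (otherwise substitute the full diagnostic_dest path):

$ ls -alrt ${ADR_HOME}/trace/*dm00*.trc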

Status Table

Every time you start a Data Pump job, a status table is automatically created in the account of the user running the job. The table name depends on the type of job you’re running; it has the format SYS_<OPERATION>_<JOB_MODE>_NN, where OPERATION is either EXPORT or IMPORT, and JOB_MODE can be FULL, SCHEMA, TABLE, TABLESPACE, and so on.

Here’s an example of querying the status table for particulars about a currently running job:


select
 name
,object_name
,total_bytes/1024/1024 t_m_bytes
,job_mode
,state
,to_char(last_update, 'dd-mon-yy hh24:mi')
from SYS_IMPORT_TABLE_04
where state='EXECUTING';

Interactive Command-Mode Status

A quick way to verify that Data Pump is running a job is to attach in interactive command mode and issue a STATUS command. For example:

$ impdp staging/staging_xst attach=SYS_IMPORT_TABLE_04

Import> status

Here’s some sample output:

Job: SYS_IMPORT_TABLE_04
Operation: IMPORT
Mode: TABLE
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 4

You should see a state of EXECUTING, which indicates that the job is actively running. Other items to inspect in the output are the number of objects processed and bytes processed. Those numbers should increase as the job progresses.
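When you're finished checking, you can leave the interactive prompt without disturbing the job: EXIT_CLIENT detaches your session and leaves the job running, whereas CONTINUE_CLIENT returns you to logging mode so you can watch the job's output. For example:

Import> exit_client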

Operating-System Utilities

You can use the process status (ps) operating-system utility to display jobs running on the server. For example, you can search for master and worker processes as follows:

$ ps -ef | egrep 'ora_dm|ora_dw' | grep -v egrep

Here’s some sample output:

oracle 29871 717 5 08:26:39 ? 11:42 ora_dw01_STAGE
oracle 29848 717 0 08:26:33 ? 0:08 ora_dm00_STAGE
oracle 29979 717 0 08:27:09 ? 0:04 ora_dw02_STAGE

If you run this command multiple times, you should see the processing time (seventh column) increase for one or more of the jobs running. This is a good indicator that Data Pump is still executing and doing work.
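Rather than rerunning the command by hand, you can wrap it in a small loop; this is just a sketch, and the 30-second interval is an arbitrary choice:

$ while true; do ps -ef | egrep 'ora_dm|ora_dw' | grep -v egrep; echo '---'; sleep 30; done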


Data Pump Legacy Mode

This feature is covered last in this chapter, but it’s quite useful, especially if you’re an old-school DBA. As of Oracle Database 11g release 2, Data Pump allows you to use the old exp and imp utility parameters.

This is known as legacy mode. You don’t have to do anything special to use it: as soon as Data Pump detects a legacy parameter, it attempts to process the parameter as if it were from the old exp/imp utilities. You can even mix and match legacy parameters with newer parameters. For example:

$ expdp darl/foo consistent=y tables=inv directory=dk

In the output, Data Pump indicates that it has encountered legacy parameters and shows you the Data Pump syntax they were translated into. For the previous command, here’s the output from the Data Pump session showing what consistent=y was translated into:

Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "consistent=TRUE" Location: Command Line, Replaced with:
"flashback_time=TO_TIMESTAMP('2010-07-01 08:10:20', 'YYYY-MM-DD HH24:MI:SS')"

This feature can be extremely handy, especially if you’re really familiar with the old legacy syntax and wonder how it’s implemented in Data Pump.

I recommend that you try to use the newer Data Pump syntax whenever possible. However, you may run into situations where you have legacy exp/imp jobs and want to continue running the scripts as they are without any modifications.
