DATAPUMP:
-----------------


--> oracle recommends that you use the data pump technology because it offers more sophisticated features.

  --> use the transportable tablespaces feature to transport large amounts of data quickly.


--> introduction to data pump technology

    -- server-side infrastructure for fast data movement between oracle databases, available from
          oracle database 10g onwards.
    -- enables you to decrease total export time by more than two orders of magnitude in most
          data-intensive export jobs.
    -- much of the higher speed comes from using parallelism to read and write export dump files.
    -- data pump consists of two components:
        -- the data pump export utility
        -- the data pump import utility
    -- you access the two data pump utilities through a pair of clients called expdp and impdp.
    -- data pump dump files are not compatible with those of the original export and import utilities.
    -- the clients perform data pump export and import by using the data pump API.
    -- the DBMS_DATAPUMP package implements the API.
    -- data pump writes its files to disk on the server node, and this process runs independently of
          the session established by the expdp client.
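
    -- for example, each client prints its usage summary with the standard HELP parameter:

       $ expdp help=y
       $ impdp help=y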

--> benefits of the data pump technology

    -- all dump, log, and other files are created on the server by default.
    -- improved performance -- transferring huge amounts of data
    -- ability to restart jobs -- manually stop and restart jobs
    -- parallel execution capabilities -- choose the number of active execution threads for a data pump
          export and import job (see the example after this list)
    -- ability to attach to running jobs
    -- ability to estimate space requirements
    -- network mode operations
    -- remapping capabilities
    -- fine-grained data import capabilities
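
    -- as a sketch of the parallelism benefit (the connect string, directory object, and file names
          are illustrative), the following export runs four worker threads; the %U wildcard in
          dumpfile gives each worker its own file to write to:

       $ expdp system/password@orcl schemas=hr directory=dpump_dir dumpfile=hr_%U.dmp logfile=hr_exp.log parallel=4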

--> data pump components
    -- dbms_datapump package
    -- dbms_metadata package
    -- command line clients

--> data access methods
    -- direct path
    -- external tables

--> data pump files
    -- dump files
    -- log files
    -- sql files
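
    -- because all of these files are created on the server, they are addressed through a directory
          object. a minimal sketch (the path and the name dpump_dir are assumptions):

       SQL> create directory dpump_dir as '/u01/app/oracle/dpump';
       SQL> grant read, write on directory dpump_dir to scott;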

--> data pump privileges
    -- exp_full_database
    -- imp_full_database
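
    -- these roles are needed only for operations outside your own schema; granting them is a
          one-liner (the user name scott is illustrative):

       SQL> grant exp_full_database, imp_full_database to scott;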

--> data pump uses several processes
    -- master process
    -- worker process
    -- shadow process
    -- client process

--> datapump export methods
    -- command line
    -- parameter file (see the sample parfile after this list)
    -- using interactive data pump export
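
    -- a parameter file keeps long command lines manageable. a minimal sketch (all names are
          illustrative):

       $ cat hr_exp.par
       schemas=hr
       directory=dpump_dir
       dumpfile=hr_exp.dmp
       logfile=hr_exp.log

       $ expdp system/password@orcl parfile=hr_exp.par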

--> data pump export modes
    -- full export mode
    -- schema mode
    -- tablespace mode
    -- table mode
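
    -- one illustrative command per mode (directory and file names are assumptions):

       $ expdp system/password@orcl full=y directory=dpump_dir dumpfile=full.dmp
       $ expdp system/password@orcl schemas=hr directory=dpump_dir dumpfile=hr.dmp
       $ expdp system/password@orcl tablespaces=users directory=dpump_dir dumpfile=users_ts.dmp
       $ expdp system/password@orcl tables=hr.employees directory=dpump_dir dumpfile=emp.dmp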

--> data pump export parameters

    -- file and directory related parameters
        -- directory
        -- dumpfile
        -- filesize
        -- parfile
        -- reuse_dumpfiles
        -- compression
            -- all
            -- data_only
            -- metadata_only
            -- none
    -- export mode related parameters
        -- full
        -- schemas
        -- tables
        -- tablespaces
        -- transport_tablespaces
        -- transport_full_check
    -- export filtering parameters
        -- content
            -- all
            -- data_only
            -- metadata_only
        -- exclude and include
        -- remap_data
        -- data_options
        -- sample
        -- transportable
    -- enforcing encryption of the export data
        -- encryption
            -- all
            -- data_only
            -- encrypted_columns_only
            -- metadata_only
            -- none
        -- encryption_algorithm
        -- encryption_mode
        -- encryption_password
    -- estimation parameters
        -- estimate
        -- estimate_only
    -- network link parameter
        -- network_link
    -- job related parameters
        -- job_name
        -- status
        -- flashback_scn
        -- flashback_time
        -- parallel
        -- attach
    -- interactive mode export parameters
        -- add_file
        -- continue_client
        -- exit_client
        -- help
        -- kill_job
        -- parallel
        -- start_job
        -- status
        -- stop_job
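
    -- you reach interactive mode by pressing ctrl+c in a running client session, or by attaching
          to the job from another terminal. a sketch (the job name HR_EXP_JOB is illustrative):

       $ expdp system/password@orcl attach=HR_EXP_JOB
       Export> status
       Export> stop_job=immediate

       -- later, re-attach and restart the stopped job:
       $ expdp system/password@orcl attach=HR_EXP_JOB
       Export> start_job
       Export> continue_client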

--> data pump export examples

    $ expdp system/password@orcl
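
    -- with no other parameters, the command above performs a schema-mode export of the connecting
          user's schema using the defaults. a few more sketches combining the parameters listed
          above (directory and file names are illustrative):

       $ expdp system/password@orcl schemas=hr directory=dpump_dir dumpfile=hr.dmp exclude=statistics
       $ expdp system/password@orcl schemas=hr directory=dpump_dir dumpfile=hr_comp.dmp compression=all
       $ expdp system/password@orcl schemas=hr estimate_only=y estimate=statistics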



--> datapump import types and modes

    -- full database import
    -- import of a schema other than your own
    -- import of a table that you don't own
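
    -- importing a schema other than your own requires the imp_full_database role. an illustrative
          command (directory and file names are assumptions):

       $ impdp system/password@orcl schemas=hr directory=dpump_dir dumpfile=hr.dmp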

--> datapump import parameters

    -- file and directory related parameters
        -- parfile
        -- directory
        -- dumpfile
        -- logfile
        -- nologfile
        -- sqlfile
    -- filtering parameters
        -- content
            -- all
            -- data_only
            -- metadata_only
        -- exclude and include
        -- table_exists_action
            -- skip
            -- append
            -- truncate
            -- replace
    -- job related parameters
        -- job_name
        -- status
        -- parallel
    -- import mode related parameters
        -- full
        -- tables
        -- schemas
        -- transport_tablespaces
        -- transport_full_check
        -- transport_datafiles
    -- remap parameters
        -- remap_table
        -- remap_schema
        -- remap_datafile
        -- remap_tablespace
        -- remap_data
        -- transportable
        -- data_options
    -- the transform parameter
        -- transform (syntax: transform=transform_name:value[:object_type])
            -- transform_name: segment_attributes, storage, oid, or pctspace
            -- value
            -- object_type
    -- the network_link parameter
        -- network_link
    -- flashback parameters
        -- flashback_time
    -- interactive import parameters
        -- continue_client
        -- exit_client
        -- help
        -- kill_job
        -- parallel
        -- start_job
        -- status
        -- stop_job
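
    -- a sketch combining the remap and filtering parameters above (all names are illustrative):

       $ impdp system/password@orcl directory=dpump_dir dumpfile=hr.dmp remap_schema=hr:hr_test remap_tablespace=users:users_test table_exists_action=replace

    -- the sqlfile parameter extracts the DDL contained in a dump file without importing anything:

       $ impdp system/password@orcl directory=dpump_dir dumpfile=hr.dmp sqlfile=hr_ddl.sql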


--> monitoring a datapump job
    -- viewing datapump jobs
        -- dba_datapump_jobs
    -- viewing datapump sessions
        -- dba_datapump_sessions and v$session
    -- viewing job progress
        -- v$session_longops
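
    -- illustrative queries against these views (for a data pump job, v$session_longops reports
          progress in sofar/totalwork units):

       SQL> select owner_name, job_name, operation, job_mode, state
            from dba_datapump_jobs;

       SQL> select s.sid, s.serial#, d.job_name
            from v$session s join dba_datapump_sessions d on s.saddr = d.saddr;

       SQL> select opname, sofar, totalwork, round(sofar/totalwork*100, 2) pct_done
            from v$session_longops
            where totalwork > 0 and sofar <> totalwork;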

--> using the data pump API

    -- you can use the data pump API to write PL/SQL scripts that export and import data.
    -- dbms_datapump
        -- starting a job
        -- monitoring a job
        -- detaching from a job
        -- stopping a job
        -- restarting a job
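
    -- a minimal sketch of a schema-mode export through the API (the job, file, and directory
          names are assumptions):

       declare
          h         number;
          job_state varchar2(30);
       begin
          -- open a schema-mode export job
          h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'HR_API_JOB');
          -- name the dump file and log file (DPUMP_DIR is an assumed directory object)
          dbms_datapump.add_file(h, 'hr_api.dmp', 'DPUMP_DIR');
          dbms_datapump.add_file(h, 'hr_api.log', 'DPUMP_DIR', filetype => dbms_datapump.ku$_file_type_log_file);
          -- restrict the job to the HR schema
          dbms_datapump.metadata_filter(h, 'SCHEMA_EXPR', 'IN (''HR'')');
          dbms_datapump.start_job(h);
          -- block until the job completes, then report its final state
          dbms_datapump.wait_for_job(h, job_state);
          dbms_output.put_line('job finished with state: ' || job_state);
       end;
       /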

--> transportable tablespaces

    -- transportable tablespaces give you an easy way to move large amounts of data between databases
         effectively, by simply copying datafiles from one database to the other.

    -- this can be done on
        -- the same platform
        -- different platforms
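
    -- a sketch of the same-platform workflow (tablespace, directory, and file names are
          illustrative). make the tablespace read-only, export its metadata, copy the datafiles,
          then plug it into the target:

       SQL> alter tablespace users read only;

       $ expdp system/password@orcl transport_tablespaces=users directory=dpump_dir dumpfile=users_tts.dmp

       -- copy users01.dbf and users_tts.dmp to the target host, then on the target:
       $ impdp system/password@target directory=dpump_dir dumpfile=users_tts.dmp transport_datafiles='/u01/oradata/users01.dbf'

       SQL> alter tablespace users read write;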
