
Script to Monitor RMAN Backup Status and Timings

As a DBA, you are often asked to check the status of RMAN backup jobs. There are many ways to monitor the progress of a backup, but you can use the scripts below to monitor the status of RMAN jobs such as full, incremental and archivelog backups.

You can use views like v$rman_backup_job_details and V$SESSION_LONGOPS to monitor currently executing RMAN jobs as well as the status of previously completed backups.


RMAN STATUS

SQL> desc v$rman_backup_job_details
 Name                           Null?    Type
 ------------------------------ -------- --------------
 SESSION_KEY                             NUMBER
 SESSION_RECID                           NUMBER
 SESSION_STAMP                           NUMBER
 COMMAND_ID                              VARCHAR2(33)
 START_TIME                              DATE
 END_TIME                                DATE
 INPUT_BYTES                             NUMBER
 OUTPUT_BYTES                            NUMBER
 STATUS_WEIGHT                           NUMBER
 OPTIMIZED_WEIGHT                        NUMBER
 OBJECT_TYPE_WEIGHT                      NUMBER
 OUTPUT_DEVICE_TYPE                      VARCHAR2(17)
 AUTOBACKUP_COUNT                        NUMBER
 BACKED_BY_OSB                           VARCHAR2(9)
 AUTOBACKUP_DONE                         VARCHAR2(9)
 STATUS                                  VARCHAR2(69)
 INPUT_TYPE                              VARCHAR2(39)
 OPTIMIZED                               VARCHAR2(9)
 ELAPSED_SECONDS                         NUMBER
 COMPRESSION_RATIO                       NUMBER
 INPUT_BYTES_PER_SEC                     NUMBER
 OUTPUT_BYTES_PER_SEC                    NUMBER
 INPUT_BYTES_DISPLAY                     VARCHAR2(4000)
 OUTPUT_BYTES_DISPLAY                    VARCHAR2(4000)
 INPUT_BYTES_PER_SEC_DISPLAY             VARCHAR2(4000)
 OUTPUT_BYTES_PER_SEC_DISPLAY            VARCHAR2(4000)
 TIME_TAKEN_DISPLAY                      VARCHAR2(4000)


This script reports the status of currently running as well as completed backups, including full, incremental and archivelog backups:

col STATUS format a9
col hrs format 999.99
select
SESSION_KEY, INPUT_TYPE, STATUS,
to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
to_char(END_TIME,'mm/dd/yy hh24:mi')   end_time,
elapsed_seconds/3600                   hrs
from V$RMAN_BACKUP_JOB_DETAILS
order by session_key;


SESSION_KEY INPUT_TYPE    STATUS    START_TIME     END_TIME           HRS
----------- ------------- --------- -------------- -------------- -------
        585 ARCHIVELOG    COMPLETED 01/08/15 06:00 01/08/15 06:02     .03
        591 ARCHIVELOG    COMPLETED 01/08/15 12:00 01/08/15 12:01     .02
        596 ARCHIVELOG    COMPLETED 01/08/15 18:01 01/08/15 18:02     .03
        601 DB INCR       FAILED    01/08/15 20:00 01/09/15 01:47    5.79
        603 ARCHIVELOG    COMPLETED 01/09/15 06:00 01/09/15 06:07     .12
        608 ARCHIVELOG    COMPLETED 01/09/15 12:00 01/09/15 12:09     .16
        613 ARCHIVELOG    COMPLETED 01/09/15 15:07 01/09/15 15:25     .29
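
To limit the report to recent history, you can filter on START_TIME. A minimal variation of the query above that looks only at the jobs from the last seven days:

select
SESSION_KEY, INPUT_TYPE, STATUS,
to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
to_char(END_TIME,'mm/dd/yy hh24:mi')   end_time,
elapsed_seconds/3600                   hrs
from V$RMAN_BACKUP_JOB_DETAILS
where START_TIME > sysdate - 7
order by session_key;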




The script below reports the percentage of completion of currently running RMAN operations, along with the SID and serial#.

SELECT SID, SERIAL#, CONTEXT, SOFAR, TOTALWORK,
ROUND (SOFAR/TOTALWORK*100, 2) "% COMPLETE"
FROM V$SESSION_LONGOPS
WHERE OPNAME LIKE 'RMAN%' AND OPNAME NOT LIKE '%aggregate%'
AND TOTALWORK != 0 AND SOFAR <> TOTALWORK;


       SID    SERIAL#    CONTEXT      SOFAR  TOTALWORK % COMPLETE
---------- ---------- ---------- ---------- ---------- ----------
        22         31          1    8225460   18357770      45.21
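
V$SESSION_LONGOPS also exposes a TIME_REMAINING column (in seconds), so the same query can be extended to estimate how much longer the running backup will take:

SELECT SID, SERIAL#, OPNAME,
ROUND (SOFAR/TOTALWORK*100, 2) "% COMPLETE",
ROUND (TIME_REMAINING/60, 2) "MINUTES LEFT"
FROM V$SESSION_LONGOPS
WHERE OPNAME LIKE 'RMAN%' AND OPNAME NOT LIKE '%aggregate%'
AND TOTALWORK != 0 AND SOFAR <> TOTALWORK;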


Best Way to move all objects from one Tablespace to another in Oracle

As a DBA, there are many occasions when you need to move all database objects from one tablespace to another to reorganize them. Too many fragmented tablespaces consume a lot of space, are difficult to manage, and add extra overhead for Oracle. In such situations you need to move tables, indexes and other database objects to a newly created tablespace.

Recently, as part of a maintenance activity, we observed that the USERS tablespace was consuming 363 GB of disk space, but when we checked dba_segments, the actual size of the objects was only about 3 GB. We tried to resize the datafiles, but during the resize we got the error below.

ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value


Oracle throws this error because the datafiles have reached their high water mark (HWM): some objects may be residing near the end of the file, and you cannot shrink a datafile below the HWM.

For example, suppose you create a tablespace of size 120 GB and create two tables inside it of 60 GB each. The overall size of the tablespace is now 120 GB, and we can say the tablespace has reached its high water mark because it contains no data beyond the HWM (120 GB).

What happens if you drop table1 and then try to resize the datafile?

Despite the 60 GB of free space inside the tablespace, the database does not allow you to resize the datafile, because the tablespace has reached its maximum size and still contains data (table2) sitting near the HWM, so it throws the error "file contains used data beyond requested RESIZE value".


Before dropping table1 (datafile full up to the 120 GB HWM):

<======= table1 =======|======= table2 =======>
0GB                   60GB                 120GB (HWM)

After dropping table1 (60 GB free, but table2 still sits at the HWM):

<===== free space =====|======= table2 =======>
0GB                   60GB                 120GB (HWM)


As we can see, the actual size of the data in the USERS tablespace is only about 3.5 GB, yet it occupies 363 GB of disk space. When we tried to resize the datafiles, Oracle threw the ORA-03297 error because of the HWM. To overcome this, you must move all objects from the USERS tablespace to a new tablespace.
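
Before moving anything, you can check how far each datafile could be shrunk by comparing its size with its highest allocated extent. A minimal sketch (the USERS tablespace name is from this example; substitute your own):

SELECT a.file_name,
       ROUND(a.bytes/1024/1024)               current_mb,
       CEIL((NVL(b.hwm,1)*c.value)/1024/1024) resizable_to_mb
FROM   dba_data_files a,
       (SELECT file_id, MAX(block_id + blocks - 1) hwm
        FROM dba_extents
        GROUP BY file_id) b,
       (SELECT value FROM v$parameter WHERE name = 'db_block_size') c
WHERE  a.file_id = b.file_id(+)
AND    a.tablespace_name = 'USERS';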






You can move objects to a different tablespace in many ways: for example, using the alter table move command or the dbms_redefinition package. But the safest and easiest way is to use remap_tablespace with expdp/impdp. If the tablespace contains clustered objects, you cannot use the alter table move command or similar scripts, so the only option is expdp. For the simple non-clustered case, the alter table move route looks like the sketch below.
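
A minimal sketch of the alter table move route for a single, non-clustered table (the table, index and tablespace names are illustrative):

ALTER TABLE scott.emp MOVE TABLESPACE mtngb1;
-- Moving a table leaves its indexes UNUSABLE, so rebuild them afterwards:
ALTER INDEX scott.emp_pk REBUILD TABLESPACE mtngb1;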


Steps to Move objects into different tablespace using EXPDP:

STEP 1: Create a directory for the export dumpfile:
SQL> create or replace directory test_dir as '/data/oracle';
Directory created.


STEP 2: Grant read, write on the newly created directory:
SQL> grant read, write on directory test_dir to username;
Grant succeeded.


STEP 3: Export all tablespace objects using expdp.
nohup expdp \"/ as sysdba\" DIRECTORY=test_dir DUMPFILE=users.dmp LOGFILE=users.log TABLESPACES=USERS &


STEP 4: Import the objects into the newly created tablespace using remap_tablespace.
Note that you must use table_exists_action=replace, otherwise the database will report that the objects already exist and skip them because of the default table_exists_action of skip.
nohup impdp \"/ as sysdba\" DIRECTORY=test_dir DUMPFILE=users.dmp table_exists_action=replace remap_tablespace=USERS:MTNGB1 LOGFILE=users.log &

Finally, verify the objects in both tablespaces and drop the tablespace that was consuming the huge amount of space.
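
A quick way to verify is to compare segment counts in the old and new tablespaces from dba_segments (MTNGB1 is the target tablespace used in this example):

SELECT tablespace_name, segment_type, COUNT(*) objects
FROM   dba_segments
WHERE  tablespace_name IN ('USERS','MTNGB1')
GROUP BY tablespace_name, segment_type
ORDER BY tablespace_name, segment_type;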


How to Fix AHM not Advancing to Last Good Epoch in Vertica

Vertica advances the AHM at an interval of five minutes to keep it equal to the Last Good Epoch (LGE). If projections are unrefreshed for some reason, the AHM does not advance. The AHM is never greater than the LGE.



Definitions and concepts you must know before troubleshooting an AHM lagging issue:


Ancient History Mark (AHM):
The ancient history mark (AHM) is the epoch prior to which historical data can be purged from physical storage.


Epoch:
An epoch is a 64-bit number that represents a logical time stamp for the data in Vertica. Every row has an implicitly stored column that records its committed epoch.

The epoch advances when the data is committed with a DML operation (INSERT, UPDATE, MERGE, COPY, or DELETE). The EPOCHS system table contains the date and time of each closed epoch and the corresponding epoch number of the closed epoch.

Using the query below, you can check which time periods pertain to which epochs:
oradmin=> SELECT * FROM epochs;



Current Epoch (CE):
The current epoch is the open epoch that becomes the latest epoch (LE) after a COMMIT operation. The current_epoch at the time of the COMMIT is the epoch for that DML.
oradmin=> SELECT CURRENT_EPOCH FROM SYSTEM;

CURRENT_EPOCH
---------------
629415



Latest Epoch (LE):
The latest epoch is the most recently closed epoch. The current epoch after the COMMIT operation becomes the latest epoch.


Checkpoint Epoch (CPE):
The checkpoint epoch per projection is the latest epoch for which there is no data in the WOS. It is the point at which the projection can be recovered. The Tuple Mover moveout operation advances the projection CPE while moving data from the WOS to the ROS. You can see the projection checkpoint epochs in the PROJECTION_CHECKPOINT_EPOCHS system table.
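
To spot the projection that is holding the checkpoint back, you can query that table directly. A minimal sketch, assuming your release exposes the node_name, projection_name and checkpoint_epoch columns (check the documentation for your version):

oradmin=> SELECT node_name, projection_name, checkpoint_epoch
          FROM projection_checkpoint_epochs
          ORDER BY checkpoint_epoch ASC
          LIMIT 10;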


Last Good Epoch (LGE):
The minimum checkpoint epoch across all the nodes is known as the last good epoch. The last good epoch refers to the most recent epoch that can be recovered in a manual recovery. The LGE consists of a snapshot of all the data on the disk. If the cluster shuts down abnormally, the data after the LGE is lost.
The Tuple Mover advances the CPE and sets a new LGE. If the Tuple Mover fails, the data does not move from the WOS to the ROS, and the CPE and LGE do not advance.


To see the cluster last good epoch, you can use the following command:
oradmin=> SELECT GET_LAST_GOOD_EPOCH();



Last Good Epoch Does Not Advance
There are certain situations in which the last good epoch does not advance. When the Tuple Mover moves data from the WOS to the ROS, the LGE advances and you see a result like the following:

oradmin=> SELECT CURRENT_EPOCH, LAST_GOOD_EPOCH FROM SYSTEM;

 CURRENT_EPOCH | LAST_GOOD_EPOCH
---------------+-----------------
        731384 |          721381



If you do not see the LGE advance, check if there is data in the WOS:
oradmin=>SELECT sum(wos_used_bytes) from projection_storage ;


If there is data in the WOS, force a moveout operation:
oradmin=> SELECT do_tm_task('moveout');


Ancient History Mark Does Not Advance
If the difference between the Last Good Epoch (LGE) and the Ancient History Mark (AHM) is huge, recovering the data can take hours, so you need to ensure the gap stays small.


You can check the LGE and AHM difference using below query:
oradmin=> select get_current_epoch(),
                 get_last_good_epoch(),
                 get_ahm_epoch(),
                 (get_current_epoch() - get_last_good_epoch()) LGECEDiff,
                 (get_last_good_epoch() - get_ahm_epoch())     LGEAHMDiff,
                 get_ahm_time();


oradmin=> select get_current_epoch()   CE,
                 get_last_good_epoch() LGE,
                 get_ahm_epoch()       AHM,
                 (get_current_epoch() - get_last_good_epoch()) CeLGDiff,
                 (get_last_good_epoch() - get_ahm_epoch())     LgeAHmDiff,
                 get_expected_recovery_epoch();



INFO 4544:  Recovery Epoch Computation:

Node Dependencies:
011 - cnt: 9448
101 - cnt: 9448
110 - cnt: 9448
111 - cnt: 5
Nodes certainly in the cluster:
        Node 1(v_usagedb_node0002), epoch 1164674
        Node 2(v_usagedb_node0003), epoch 1164674
Filling more nodes to satisfy node dependencies:
Data dependencies fulfilled, remaining nodes LGEs don't matter:
        Node 0(v_usagedb_node0001), epoch 1164669

   CE    |   LGE   |  AHM   | CeLGDiff | LgeAHmDiff | get_expected_recovery_epoch
---------+---------+--------+----------+------------+-----------------------------
 1164675 | 1164669 | 797307 |        6 |     367362 |                     1164674



To sync AHM with LGE, execute the below command:
oradmin=> SELECT MAKE_AHM_NOW();

The above command performs the following operations:
  • Advances the epoch.
  • Performs a moveout operation on all projections.

How to Check the Size of Tables in Vertica

Vertica stores table data in compressed format. To get the size of a table in Vertica, you can use the queries below. Using the column_storage and projection_storage system tables, you get the compressed size of a table. You can check all the column definitions on the official Vertica documentation site via the link below.

v_monitor.projection_storage


COLUMN DEFINITION:
----------------------------------


ANCHOR_TABLE_NAME: VARCHAR
The associated table name for which information is listed.


ANCHOR_TABLE_SCHEMA: VARCHAR
The associated table schema for which information is listed.

USED_BYTES: INTEGER
The number of bytes of disk storage used by the projection.



SELECT anchor_table_schema,
anchor_table_name,
SUM(used_bytes) / (1024*1024*1024) AS TABLE_SIZE_GB
FROM   v_monitor.projection_storage
GROUP  BY anchor_table_schema,
anchor_table_name
order  by sum(used_bytes) desc;


To find the number of rows and bytes occupied by each table in the database
--------------------------------------------------------------------------------------------
SELECT t.table_name AS table_name,
SUM(ps.wos_row_count + ps.ros_row_count) AS row_count,
SUM(ps.wos_used_bytes + ps.ros_used_bytes) AS byte_count
FROM tables t
JOIN projections p ON t.table_id = p.anchor_table_id
JOIN projection_storage ps on p.projection_name = ps.projection_name
WHERE (ps.wos_used_bytes + ps.ros_used_bytes) > 500000
--and t.table_name='table_name'
GROUP BY t.table_name
ORDER BY byte_count DESC;


To find the size of single table in the database:
-----------------------------------------------------------
SELECT ANCHOR_TABLE_NAME,
       PROJECTION_SCHEMA,
       (SUM(USED_BYTES))/1024/1024/1024 AS TOTAL_SIZE
FROM   PROJECTION_STORAGE
WHERE  ANCHOR_TABLE_NAME = 'TABLE_NAME'
AND    ANCHOR_TABLE_SCHEMA = 'TABLE_SCHEMA'
AND    PROJECTION_NAME LIKE '&PROJECTION_NAME'
GROUP BY PROJECTION_SCHEMA, ANCHOR_TABLE_NAME;



SELECT anchor_table_schema,
anchor_table_name,
SUM(used_bytes) / (1024*1024*1024) AS TABLE_SIZE_GB
FROM   v_monitor.column_storage
GROUP  BY anchor_table_schema,
anchor_table_name
order  by sum(used_bytes) desc;
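
The same system table can also give the total compressed size of the whole database; a one-line sketch:

SELECT SUM(used_bytes) / (1024*1024*1024) AS database_size_gb
FROM v_monitor.projection_storage;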

