
How to Enable/Disable a Scheduled Job in Oracle

Manually dealing with a large number of jobs is not an easy task. To address this, Oracle Database provides advanced job scheduling capabilities through the Oracle Scheduler. The DBMS_SCHEDULER package provides a collection of scheduling functions and procedures that are callable from any PL/SQL program.

Using the Scheduler, database administrators and application developers can easily control when and where various tasks take place in the database environment. These tasks can be time-consuming and complicated, so the Scheduler helps improve their management and planning.
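As a simple illustration, a maintenance job can be created and scheduled entirely from PL/SQL. The job name, repeat interval and job action below are placeholders only, a minimal sketch of the DBMS_SCHEDULER.CREATE_JOB call rather than a ready-made job:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'SCHEMA_MNTC_JOB',           -- example job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN NULL; END;',          -- replace with your maintenance code
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',      -- run once a day at 02:00
    enabled         => TRUE,
    comments        => 'Example schema maintenance job');
END;
/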


To disable a job that has been scheduled with DBMS_SCHEDULER, first identify the job name, its status and other related information.



To check the job status:
SQL> select job_name, owner, enabled from dba_scheduler_jobs;

SQL> select job_name, enabled from DBA_SCHEDULER_JOBS WHERE job_name = 'SCHEMA_MNTC_JOB';



To Disable a job:

SQL> execute dbms_scheduler.disable('owner.job');

SQL> exec dbms_scheduler.disable('SCHEMA_MNTC_JOB');

PL/SQL procedure successfully completed.



BEGIN

  DBMS_SCHEDULER.DISABLE('SCHEMA_MNTC_JOB');

END;

/



To enable a job:
SQL> exec dbms_scheduler.enable('SCHEMA_MNTC_JOB');

PL/SQL procedure successfully completed.




BEGIN

  DBMS_SCHEDULER.ENABLE('SCHEMA_MNTC_JOB');

END;

/



You can then check the job status again using the query below:
SQL> select job_name, enabled from DBA_SCHEDULER_JOBS WHERE job_name = 'SCHEMA_MNTC_JOB';



How to Lock/Unlock Table Statistics in Oracle

There are a number of cases where you may want to lock table statistics: for example, highly volatile or intermediate tables where the volume of data changes drastically over a relatively short period of time, or tables that you do not want analyzed by the automatic statistics job because you plan to analyze them later.

If a table is highly volatile, locking its statistics prevents the execution plan from changing and thus helps with plan stability for some period of time. That is why many DBAs prefer to unlock the stats, gather fresh stats, and then lock them again.
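A minimal sketch of that cycle, using the SANCS.ORDER example that appears later in this article:

SQL> exec dbms_stats.unlock_table_stats('SANCS', 'ORDER');
SQL> exec dbms_stats.gather_table_stats(ownname => 'SANCS', tabname => 'ORDER', estimate_percent => dbms_stats.auto_sample_size);
SQL> exec dbms_stats.lock_table_stats('SANCS', 'ORDER');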


How to check whether table statistics are locked:
SQL> SELECT stattype_locked FROM dba_tab_statistics WHERE table_name = '&TABLE_NAME' and owner = '&TABLE_OWNER';



If you try to gather statistics on a table whose stats are locked, you will get the error below:

SQL> EXEC dbms_stats.gather_table_stats(ownname => 'SANCS', tabname => 'ORDER' , estimate_percent => dbms_stats.auto_sample_size);

ERROR at line 1:
ORA-20005: object statistics are locked (stattype = ALL)
ORA-06512: at "SYS.DBMS_STATS", line 10640
ORA-06512: at "SYS.DBMS_STATS", line 10664
ORA-06512: at line 1


How to Lock Table Statistics?
SQL> exec dbms_stats.lock_table_stats('<schema>', '<table>');

 Example:
exec dbms_stats.lock_table_stats('SANCS', 'ORDER');

PL/SQL procedure successfully completed.



How to Unlock Table Statistics?
SQL> exec dbms_stats.unlock_table_stats('<schema>', '<Table>');



Example:

SQL> exec dbms_stats.unlock_table_stats('SANCS', 'ORDER');

PL/SQL procedure successfully completed.





Script to Start Oracle Database Automatically on Linux

From Oracle 10gR2 onward, the RAC Clusterware automatically starts and stops the ASM and Oracle database instances and listeners, so the following procedure is not necessary in a RAC environment. For a single instance where RAC is not being used, this script lets you automate the startup and shutdown of Oracle databases on Linux after a server reboot.

Two scripts, dbstart and dbshut, are already installed in $ORACLE_HOME/bin. However, they are not executed automatically after you reboot your server. Below I explain how to configure them so that the Oracle services start automatically after a Linux server reboot.


How to Configure Auto Startup Script?

Below are the changes you need to perform in order to automate this script.

STEP 1: Make sure that every database instance you want to autostart is set to "Y" in the /etc/oratab file, as shown below.

# This file is used by ORACLE utilities.  It is created by root.sh
# and updated by either Database Configuration Assistant while creating
# a database or ASM Configuration Assistant while creating ASM instance.
#
# Entries are of the form:
#   $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively.  The third field indicates
# to the dbstart utility that the database should, "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
ora11g:/u01/app/oracle/product/11.2.0/dbhome_1:Y
ora12c:/u01/app/oracle/product/11.2.0/dbhome_1:Y

The /etc/oratab file is normally created by running the root.sh script at the end of the installation. If you don’t have the file, you can always add it to your system by creating it manually (with user root!).

STEP 2: Add ORACLE_HOME and PATH entries to the oracle user's ~/.bash_profile, for example as shown below.
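For example, the entries might look like the following (the SID and paths below match the oratab example in STEP 1 and are only placeholders; adjust them to your environment):

export ORACLE_SID=ora11g
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH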


STEP 3: Next, create two scripts under /home/oracle/scripts: ora_start.sh and ora_stop.sh.

These scripts call dbstart and dbshut, and they also let us add further actions, for example starting Enterprise Manager Database Control or any other services you might have.
You can also create a separate directory for these scripts.

$ su - oracle
$ vi /home/oracle/scripts/ora_start.sh

#!/bin/bash


# script to start the Oracle database, listener and dbconsole


. ~/.bash_profile


# start the listener and the database

$ORACLE_HOME/bin/dbstart $ORACLE_HOME

# start the Enterprise Manager db console

#$ORACLE_HOME/bin/emctl start dbconsole

exit 0


$ vi /home/oracle/scripts/ora_stop.sh
#!/bin/bash

# script to stop the Oracle database, listener and dbconsole


. ~/.bash_profile


# stop the Enterprise Manager db console

#$ORACLE_HOME/bin/emctl stop dbconsole

# stop the listener and the database

$ORACLE_HOME/bin/dbshut $ORACLE_HOME

exit 0


You can see that inside the scripts we source the .bash_profile of the user "oracle". This is needed to set the ORACLE_HOME and PATH environment variables.



STEP 4: Give execute permission to the scripts:
$  chmod u+x ora_start.sh ora_stop.sh




STEP 5: We will now create a wrapper script that can be registered as a service.
With user root, create a file called “oracle” under /etc/init.d.

$ vi /etc/init.d/oracle
#!/bin/bash
# chkconfig: 345 99 10
# description: Oracle auto start-stop script.

# Set ORA_OWNER to the user id of the owner of the
# Oracle database in ORA_HOME.

ORA_OWNER=oracle
RETVAL=0

case "$1" in
    'start')
        # Start the Oracle databases:
        # The following command assumes that the oracle login
        # will not prompt the user for any values
        su - $ORA_OWNER -c "/home/oracle/scripts/ora_start.sh"
        touch /var/lock/subsys/oracle
        ;;
    'stop')
        # Stop the Oracle databases:
        # The following command assumes that the oracle login
        # will not prompt the user for any values
        su - $ORA_OWNER -c "/home/oracle/scripts/ora_stop.sh"
        rm -f /var/lock/subsys/oracle
        ;;
    *)
        echo $"Usage: $0 {start|stop}"
        RETVAL=1
esac
exit $RETVAL



STEP 6: Set the following permissions on the script.
$ chmod 750 /etc/init.d/oracle

Note: Add the oracle home paths in .bash_profile



STEP 7: To create a service of this script, run the following command:
$ chkconfig --add oracle



STEP 8: All done. Check the script and database status by running "service oracle stop" or "service oracle start" from the command line.

$ service oracle stop
Oracle Enterprise Manager 11g Database Control Release 11.2.0.3.0
Copyright (c) 1996, 2011 Oracle Corporation. All rights reserved.
Stopping Oracle Enterprise Manager 11g Database Control ...
... Stopped.
Processing Database instance "oratst": log file /u01/app/oracle/product/11.2.0/db_1/shutdown.log

$ service oracle start
Processing Database instance "oratst": log file /u01/app/oracle/product/11.2.0/db_1/startup.log
Oracle Enterprise Manager 11g Database Control Release 11.2.0.3.0
Copyright (c) 1996, 2011 Oracle Corporation. All rights reserved.
Starting Oracle Enterprise Manager 11g Database Control ...... started.

After this, it's time for the final test: reboot your server and check whether your Oracle database starts automatically after the reboot.


Whenever the database server reboots, the configured databases will be started automatically; there is no need to start and stop them manually. If you run into any issues, feel free to contact us for support. We will be pleased to assist you.


How to Gather Statistics on Large Partitioned Tables in Oracle

It is difficult to gather statistics on large partitioned tables, especially in domains such as telecom, where call detail records are kept in partitioned tables that grow very large.

For such tables, we gather statistics on one partition, which we call the source partition, and copy those statistics to the remaining partitions, which we call the destination partitions.

For example, if you have 366 partitions, you can gather stats for any one partition, say P185, and then copy those stats to the rest of the partitions. Please choose a partition that actually contains data. There is no need to gather stats for every partition, because Oracle distributes rows across partitions based on the partition key.


STEPS TO MAINTAIN STATISTICS ON LARGE PARTITION TABLES



STEP 1: Gather stats for any one partition say P185.

EXEC dbms_stats.gather_table_stats(ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME' , PARTNAME => 'P185', estimate_percent => 10, method_opt=> 'for all indexed columns size skewonly', granularity => 'ALL', degree => 8 ,cascade => true );

Note: Change table_name and table_owner. You can increase degree if free parallel servers are available.


STEP 2: Generate the script for the remaining partitions. The source partition is P185 and the destination partitions are all of the remaining partitions; a query like the one sketched below can generate the calls for you.
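Rather than writing the COPY_TABLE_STATS and LOCK_PARTITION_STATS calls by hand for every partition, a query such as the following can generate them from DBA_TAB_PARTITIONS. This is only a sketch; replace TABLE_OWNER, TABLE_NAME and the source partition P185 with your own values before running it:

set pagesize 0 linesize 300 trimspool on
spool copy_part_stats.sql
select 'exec DBMS_STATS.COPY_TABLE_STATS( ownname => ''TABLE_OWNER'', tabname => ''TABLE_NAME'', srcpartname => ''P185'', dstpartname => ''' || partition_name || ''', force => TRUE);' || chr(10) ||
       'exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => ''TABLE_OWNER'', tabname => ''TABLE_NAME'', partname => ''' || partition_name || ''');'
from dba_tab_partitions
where table_owner = 'TABLE_OWNER'
  and table_name = 'TABLE_NAME'
  and partition_name <> 'P185'
order by partition_name;
spool off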


STEP 3: After copying the statistics you can lock them. Using the format below, you can generate the script for all partitions after making the necessary changes.


exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P001', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P001');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P002', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P002');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P003', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P003');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P004', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P004');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P005', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P005');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P006', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P006');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P007', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P007');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P008', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P008');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P009', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P009');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P010', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P010');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P011', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P011');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P012', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P012');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P013', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P013');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P014', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P014');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P015', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P015');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P016', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P016');

exec DBMS_STATS.COPY_TABLE_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', srcpartname => 'P185', dstpartname => 'P017', force => TRUE);

exec DBMS_STATS.LOCK_PARTITION_STATS( ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', partname => 'P017');


Please feel free to contact us for support in case of any difficulties. We will be pleased to help with your queries.


Oracle Expdp/Impdp - Datapump Interview Questions/FAQs

Q1: How to export only the DDL/metadata of a table?
Ans: Use the CONTENT=METADATA_ONLY parameter during export.
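For instance (the schema name, directory object and file names here are placeholders only):

expdp system/<password> schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott_ddl.dmp logfile=scott_ddl.log content=METADATA_ONLY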


Q2: Which memory area is used by the Data Pump process?
Ans: The Streams pool (STREAMS_POOL_SIZE). If STREAMS_POOL_SIZE is 0, you will probably hit memory-related errors. Check this parameter and set it to a minimum of 96M.

show parameter STREAMS_POOL_SIZE

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------------
streams_pool_size                    big integer 96M



Q3: How to improve Data Pump performance so that export/import runs faster?
Ans:  
  • Allocate STREAMS_POOL_SIZE memory. The query below returns a recommended setting for this parameter.

select 'ALTER SYSTEM SET STREAMS_POOL_SIZE=' || (max(to_number(trim(c.ksppstvl))) + 67108864) || ' SCOPE=SPFILE;'
from sys.x$ksppi a, sys.x$ksppcv b, sys.x$ksppsv c
where a.indx = b.indx and a.indx = c.indx and lower(a.ksppinm) in ('__streams_pool_size','streams_pool_size');

ALTER SYSTEM SET STREAMS_POOL_SIZE=<recommended_value> SCOPE=SPFILE;

  • Use CLUSTER=N: in a RAC environment this can improve the speed of Data Pump operations.
  • Set PARALLEL_FORCE_LOCAL to TRUE, keeping in mind that this parameter has a wider scope of effect than just Data Pump operations.
  • Use EXCLUDE=STATISTICS: excluding the generation and export of statistics at export time shortens any export operation. The DBMS_STATS.GATHER_DATABASE_STATS procedure can then be run on the target database once the import has completed.
  • Use PARALLEL: if more than one CPU is available, the environment is not already CPU-, I/O-, or memory-bound, and multiple dump files (ideally on different spindles) are specified in the DUMPFILE parameter, then parallelism has the greatest potential to improve performance. A sample parameter file combining several of these options is sketched below.
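As an illustration only, a parameter file that combines several of these options might look like the one below. The directory object, dump file names and schema are placeholders, and PARALLEL should match the number of dump files and the CPU capacity actually available:

directory=DATA_PUMP_DIR
dumpfile=exp_orahow_%U.dmp
logfile=exp_orahow.log
schemas=ORAHOW
parallel=4
cluster=N
exclude=STATISTICS
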
Q4: How to monitor the status of export/import (Data Pump) operations/jobs?
Ans: You can easily monitor the status from DBA_DATAPUMP_JOBS using the queries below.


set linesize 200
set pagesize 200
col owner_name format a12
col job_name format a20
col operation format a12
col job_mode format a20
SELECT 
owner_name, 
job_name, 
operation, 
job_mode, 
state 
FROM 
dba_datapump_jobs
where 
state='EXECUTING';

SELECT   w.sid, w.event, w.seconds_in_wait
FROM   V$SESSION s, DBA_DATAPUMP_SESSIONS d, V$SESSION_WAIT w
WHERE   s.saddr = d.saddr AND s.sid = w.sid;

SELECT 
OPNAME, 
SID, 
SERIAL#, 
CONTEXT, 
SOFAR, 
TOTALWORK,
    ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE"
FROM 
V$SESSION_LONGOPS
WHERE 
OPNAME in
(
select 
d.job_name
from 
v$session s, 
v$process p, 
dba_datapump_sessions d
where 
p.addr=s.paddr 
and 
s.saddr=d.saddr
)
AND 
OPNAME NOT LIKE '%aggregate%'
AND 
TOTALWORK != 0
AND 
SOFAR <> TOTALWORK;


Q5: How to stop/start/kill Data Pump jobs?
Ans: Attach to the job and then use the interactive commands:

expdp \"/ as sysdba\" attach=<job_name>
Export> status
Export> stop_job
Export> start_job
Export> kill_job

You can also kill a job with the ALTER SYSTEM KILL SESSION command; the SID and SERIAL# are available from the monitoring queries above.

alter system kill session 'SID,SERIAL#' immediate;


Q6: How will you take a consistent export backup? What is the use of FLASHBACK_SCN?
Ans: To take a consistent export backup you can use the method below:

SQL:  
           select to_char(current_scn) from v$database;

Expdp parfile content:
---------------------------

directory=OH_EXP_DIR 
dumpfile=exporahow_4apr_<yyyymmdd>.dmp 
logfile=exporahow_4apr_<yyyymmdd>.log 
schemas=ORAHOW 
flashback_scn=<<current_scn>>


Q7: How to drop constraints before import?
Ans: 

set feedback off;
spool /oradba/orahow/drop_constraints.sql;

select 'alter table SCHEMA_NAME.' || table_name || ' drop constraint ' || constraint_name || ' cascade;'
from dba_constraints where owner = 'SCHEMA_NAME'
  and not (constraint_type = 'C')
  order by table_name,constraint_name;

spool off;
exit;

Q8: I exported a metadata/DDL-only dump file from production, but during import on the test machine it consumes a huge amount of space, and we may not have that much disk space available. Why does a DDL-only import consume so much space?
Ans: Below is a snippet of the DDL of one table extracted from production. As you can see, during table creation Oracle always allocates the initial extent defined in the STORAGE clause.

PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
  STORAGE(INITIAL 1342177280 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)

As you can see above, Oracle allocates an initial extent of 1342177280 bytes (1280 MB) for this table even if the row count is zero.
To avoid this you can set the DEFERRED_SEGMENT_CREATION parameter to TRUE (which is its default from 11.2 onward), so that no segment is allocated until the first row is inserted.
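A quick sketch of checking and changing the parameter (it is dynamic, so no instance restart is needed):

SQL> show parameter deferred_segment_creation
SQL> alter system set deferred_segment_creation=true scope=both;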


Q9: If you don't have sufficient disk space on the database server, how will you take the export? Or, how do you export without a dump file?
Ans: You can use a network link (database link) for the export. You can use NETWORK_LINK by following these simple steps (a sketch follows the list):

Create a TNS entry for the remote database in your tnsnames.ora file
Test it with tnsping
Create a database link to the remote database
Specify the database link as NETWORK_LINK in your expdp or impdp syntax
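A minimal sketch, assuming a TNS alias PRODDB for the source database and an existing directory object DATA_PUMP_DIR on the local server (all names are placeholders only):

SQL> create database link prod_link connect to orahow identified by <password> using 'PRODDB';

expdp system/<password> network_link=prod_link schemas=ORAHOW directory=DATA_PUMP_DIR dumpfile=orahow_nl.dmp logfile=orahow_nl.log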

Q10: What are some of the parameters you have used during export?
Ans:

CONTENT:         Specifies data to unload where the valid keywords are:
                             (ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY       Directory object to be used for dumpfiles and logfiles.
DUMPFILE         List of destination dump files (expdat.dmp),
                             e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE_ONLY         Calculate job estimates without performing the export.
EXCLUDE               Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE              Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN         SCN used to set session snapshot back to.
FULL                  Export entire database (N).
HELP                  Display Help messages (N).
INCLUDE               Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME              Name of export job to create.
LOGFILE               Log file name (export.log).
NETWORK_LINK          Name of remote database link to the source system.
NOLOGFILE             Do not write logfile (N).
PARALLEL              Change the number of active workers for current job.
PARFILE               Specify parameter file.
QUERY                 Predicate clause used to export a subset of a table.
SCHEMAS               List of schemas to export (login schema).
TABLES                Identifies a list of tables to export - one schema only.
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION               Version of objects to export where valid keywords are:
                      (COMPATIBLE), LATEST, or any valid database version.


Q11: You are getting an undo tablespace error during import; how will you avoid it?
Ans: We can use the COMMIT=Y option (note that COMMIT=Y belongs to the legacy imp utility; it makes the import commit after each array insert instead of after each table).


Q12: Can we import an 11g dump file into a 10g database using Data Pump?
Ans: Yes, we can import from 11g into 10g using the VERSION option, provided the dump file was exported with VERSION set to the target release (for example VERSION=10.2).
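For example, on the 11g source (directory object and file names are placeholders):

expdp system/<password> schemas=ORAHOW version=10.2 directory=DATA_PUMP_DIR dumpfile=orahow_v102.dmp logfile=orahow_v102.log

The resulting dump file can then be imported into the 10g database with impdp.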

How to Find and Remove Table Fragmentation in Oracle Database

Introduction

In this article I will discuss table fragmentation, which causes slowness and wastes space. Fragmentation is a common issue in Oracle databases that occurs due to excessive DML operations, such as inserts followed by updates and deletes.


Brief Explanation:

When rows are inserted into a table, the high water mark (the boundary of the used blocks) moves forward to accommodate the new rows. During delete operations, however, Oracle does not shift the high water mark backward to shrink the table and release the free space.

Ideally, once we delete data from the table the freed space should be released or reused, but instead Oracle acquires new blocks to accommodate newly inserted rows, which leaves holes in the table.

The table therefore contains data ranging from the low water mark to the high water mark (the used blocks), followed by free blocks. When rows are inserted, Oracle moves the high water mark forward, marking free blocks as used; after deletes the high water mark stays where it is, and only after a de-fragmentation activity is that space actually released.



How to remove table fragmentation in Oracle?
There are different ways to remove fragmentation, such as table export/import, moving the table to the same or a different tablespace, and recreating the table. Here we will discuss the easiest and most common ways of removing fragmentation.


Steps To Check and Remove Fragmentation:

STEP 1: First, gather table statistics
This lets you find the exact difference between the total size of the table (from DBA_SEGMENTS) and its actual size (from DBA_TABLES). You can also check the last analyzed date of the table; if it was analyzed recently and STALE_STATS is NO, you can skip this step.

select table_name,last_analyzed,stale_stats from user_tab_statistics where table_name='&TABLE_NAME';
EXEC dbms_stats.gather_table_stats(ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', method_opt=> 'for all indexed columns size skewonly', granularity => 'ALL', degree => 8 ,cascade => true,estimate_percent => 15);


STEP 2: Check the table size (in GB) from dba_segments
select sum(bytes)/1024/1024/1024 from dba_segments where segment_name='&TABLE_NAME';


STEP 3: Check actual table size, fragmented size and percentage of fragmentation in a table.
select table_name,avg_row_len,round(((blocks*16/1024)),2)||'MB' "TOTAL_SIZE",
round((num_rows*avg_row_len/1024/1024),2)||'Mb' "ACTUAL_SIZE",
round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2) ||'MB' "FRAGMENTED_SPACE",
(round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)/round(((blocks*16/1024)),2))*100 "percentage"
from all_tables WHERE table_name='&TABLE_NAME';

Note: The factor 16 in the query above assumes a 16 KB database block size; adjust it to match your DB_BLOCK_SIZE (for example, use 8 for an 8 KB block size). If you find more than 20% fragmentation, you can proceed with de-fragmentation. You can also decide based on the amount of space the query shows you would reclaim.


STEP 4:  Check the indexes on the table
select index_name from dba_indexes where table_name='&TABLE_NAME';


STEP 5: Remove fragmentation by moving the table within the same tablespace (or to a different tablespace).
alter table <table_name> move;
To move the table to a different tablespace:
alter table <table_name> enable row movement;
alter table <table_name> move tablespace <new_tablespace>;
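Moving a table leaves its indexes in an UNUSABLE state, which is why they must be rebuilt in the next step. A quick check against the standard DBA_INDEXES view:

select index_name, status from dba_indexes where table_name='&TABLE_NAME' and status='UNUSABLE';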


STEP 6: Rebuild the indexes.
alter index <index_name> rebuild online;


STEP 7: Gather the statistics again:
EXEC dbms_stats.gather_table_stats(ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', method_opt=> 'for all indexed columns size skewonly', granularity => 'ALL', degree => 8 ,cascade => true,estimate_percent => 15);


STEP 8: Finally check the table size and percentage of fragmentation:
select sum(bytes)/1024/1024/1024 from dba_segments where segment_name='&TABLE_NAME';

select table_name,avg_row_len,round(((blocks*16/1024)),2)||'MB' "TOTAL_SIZE",
round((num_rows*avg_row_len/1024/1024),2)||'Mb' "ACTUAL_SIZE",
round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2) ||'MB' "FRAGMENTED_SPACE",
(round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)/round(((blocks*16/1024)),2))*100 "percentage"
from all_tables WHERE table_name='&TABLE_NAME';


Please note that the steps above apply only to non-partitioned tables. For partitioned tables, you can move the table partition by partition, or, the easiest way, recreate the table.


