
ORA-1652: unable to extend temp segment by 128 in tablespace TEMP

ORA-1652: unable to extend temp segment by %s in tablespace TEMP


This error occurs when a session running a query cannot get enough free space in the temporary tablespace to complete its work.


SMON normally cleans up unused temporary segments automatically, so simply enlarging the temporary tablespace is not a good idea in a production environment. If this issue happens on a regular basis, you need to analyse the temporary tablespace and the sessions, along with the queries, holding the temp space.

It is better to analyse and tune the offending queries. Below are some troubleshooting steps you can use to find the root cause.



Query To Check TEMP Tablespace Usage:
SELECT A.tablespace_name tablespace, D.mb_total,
   SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
   D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
  FROM v$sort_segment A,
   (
  SELECT B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
   FROM v$tablespace B, v$tempfile C
    WHERE B.ts#= C.ts#
     GROUP BY B.name, C.block_size) D
   WHERE A.tablespace_name = D.name
   GROUP by A.tablespace_name, D.mb_total;

TABLESPACE                          MB_TOTAL      MB_USED      MB_FREE
------------------------------- ------------ ------------ ----------------------
TEMP                                  13107122       1293301         17411



The following query displays information about each statement using space in a sort segment, including the database session that issued the statement, the temporary tablespace, and the amount of sort space being used.

SELECT   S.sid || ',' || S.serial# sid_serial, S.username,T.blocks * TBS.block_size / 1024 / 1024 mb_used, T.tablespace,
T.sqladdr address, Q.hash_value, Q.sql_text
FROM     v$sort_usage T, v$session S, v$sqlarea Q, dba_tablespaces TBS
WHERE    T.session_addr = S.saddr
AND      T.sqladdr = Q.address
AND      T.tablespace = TBS.tablespace_name
ORDER BY S.sid;

Run the above query at regular intervals to know which statement is chewing up temp space.

To add datafiles to the temp tablespace:
select file_name, tablespace_name, bytes/1024/1024/1024, maxbytes/1024/1024/1024, autoextensible from dba_temp_files;

ALTER TABLESPACE TEMP ADD TEMPFILE 'LOCATION' SIZE 100M REUSE AUTOEXTEND ON NEXT 50M MAXSIZE 20G;


To avoid the ORA-1652 error, you can configure a tablespace usage alert on the temp tablespace and monitor the sessions and queries causing the temp pressure.
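As a sketch of such an alert, you can set server-generated thresholds on the tablespace with the DBMS_SERVER_ALERT package; the 80/95 percent values below are illustrative assumptions, not recommendations, and the alert appears in DBA_OUTSTANDING_ALERTS once triggered:

```sql
-- Hypothetical thresholds; adjust the warning/critical values
-- and the tablespace name to your environment.
BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '80',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '95',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'TEMP');
END;
/
```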

Queries to Monitor Expdp Datapump Jobs Status

As a DBA, you are often asked to run and monitor expdp/impdp (Data Pump) jobs on a regular basis. This post collects a variety of queries you can use to start, stop, resume, kill, and check the status of Data Pump jobs.

While an export or import job is in progress, you can press Ctrl+C to get to the respective Data Pump prompt, or you can attach to the running job and then issue the STATUS command:



To monitor executing jobs using dba_datapump_jobs view:
set linesize 200
set pagesize 200
col owner_name format a12
col job_name format a20
col operation format a12
col job_mode format a20
SELECT 
owner_name, 
job_name, 
operation, 
job_mode, 
state 
FROM 
dba_datapump_jobs
where 
state='EXECUTING';



To get detailed information such as the SID, SERIAL#, and percentage of completion:
SELECT 
OPNAME, 
SID, 
SERIAL#, 
CONTEXT, 
SOFAR, 
TOTALWORK,
    ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE"
FROM 
V$SESSION_LONGOPS
WHERE 
OPNAME in
(
select 
d.job_name
from 
v$session s, 
v$process p, 
dba_datapump_sessions d
where 
p.addr=s.paddr 
and 
s.saddr=d.saddr
)
AND 
OPNAME NOT LIKE '%aggregate%'
AND 
TOTALWORK != 0
AND 
SOFAR <> TOTALWORK;



To check the waiting status and the wait event the job is waiting on:
SELECT   w.sid, w.event, w.seconds_in_wait
   FROM   V$SESSION s, DBA_DATAPUMP_SESSIONS d, V$SESSION_WAIT w
    WHERE   s.saddr = d.saddr AND s.sid = w.sid;



To check the wait event and wait class for a particular SID:
SQL> select COMMAND,STATE,WAIT_CLASS,EVENT,SECONDS_IN_WAIT from v$session where sid=7248 and SERIAL#=56639;



To monitor and perform various operations from the expdp/impdp prompt:
[oracle@orahow ~]$ expdp attach=Job_name
expdp / as sysdba attach=job_name
export>status
export>stop_job
export>start_job
export>kill_job



To check for orphaned Data Pump jobs. For orphaned jobs the state will be NOT RUNNING.
SET lines 140
COL owner_name FORMAT a10;
COL job_name FORMAT a20
COL state FORMAT a12 
COL operation LIKE owner_name
COL job_mode LIKE owner_name
SELECT owner_name, job_name, operation, job_mode,
state, attached_sessions
FROM dba_datapump_jobs;
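Once you have identified an orphaned job (state NOT RUNNING with no attached sessions), a common cleanup is to drop its master table; the owner and job name below are placeholders for the values reported by the query above:

```sql
-- SCOTT and SYS_EXPORT_SCHEMA_01 are examples only; substitute the
-- owner_name and job_name returned by dba_datapump_jobs.
DROP TABLE scott.sys_export_schema_01;
```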



To check the alert log and query the DBA_RESUMABLE view.
select name, sql_text, error_msg from dba_resumable;



To kill the datapump jobs:
alter system kill session 'SID,SERIAL#' immediate;


That's all. If you have any suggestions or other queries to monitor Data Pump job status, please leave a comment; we will go through them and include them in this article.

How to Find Users having DBA Role in Oracle

The default DBA role is automatically created during Oracle Database installation. This role contains most database system privileges, so it should be granted only to actual database administrators. If you want to know which users have been granted the DBA role, query DBA_ROLE_PRIVS in the SYS schema.


The DBA role does not include the SYSDBA or SYSOPER system privileges. These are special administrative privileges that allow an administrator to perform basic database administration tasks, such as creating the database and instance startup and shutdown.

This view tells you the grantee, the granted role, whether the role was granted with the ADMIN option, and whether it is the grantee's default role:


DBA_ROLE_PRIVS describes the roles granted to all users and roles in the database.

USER_ROLE_PRIVS describes the roles granted to the current user.



SQL> desc dba_role_privs
Name         Null?    Type
------------ -------- ------------
GRANTEE               VARCHAR2(30)
GRANTED_ROLE NOT NULL VARCHAR2(30)
ADMIN_OPTION          VARCHAR2(3)
DEFAULT_ROLE          VARCHAR2(3)



GRANTEE: Name of the user or role receiving the grant
GRANTED_ROLE: Granted role name
ADMIN_OPTION: Indicates whether the grant was with the ADMIN OPTION (YES) or not (NO)
DEFAULT_ROLE: Indicates whether the role is designated as a DEFAULT ROLE for the user (YES) or not (NO)



Using the below query, you can find the users that have the DBA role:
SQL> select * from dba_role_privs where granted_role='DBA';
GRANTEE   GRANTED_ROLE ADM DEF
--------- ------------ --- ---
SYS       DBA          YES YES
SYSTEM    DBA          YES YES
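The query above shows only direct grants. Because DBA can also reach a user through another granted role, a hierarchical query (a sketch) can walk the role tree and list only users:

```sql
-- Users who hold DBA either directly or through nested role grants;
-- the MINUS removes intermediate roles, leaving only user accounts.
SELECT grantee
FROM   dba_role_privs
CONNECT BY PRIOR grantee = granted_role
START WITH granted_role = 'DBA'
MINUS
SELECT role FROM dba_roles;
```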


There are many situations where you may want to revoke the DBA role granted to a user for security reasons. The above query will help you find the users that have it.

dbca UnsatisfiedLinkError exception loading native library java.lang.UnsatisfiedLinkError

The below error appears while installing an Oracle 11g/12c database using DBCA.

UnsatisfiedLinkError exception loading native library: njni11
java.lang.UnsatisfiedLinkError: /home/u01/app/oracle/product/11.2.0/db_1/lib/libnjni11.so: libclntsh.so.11.1: cannot open shared object file: No such file or directory
Exception in thread "main" java.lang.UnsatisfiedLinkError: get
at oracle.net.common.NetGetEnv.get(Native Method)
at oracle.net.config.Config.getNetDir(Unknown Source)
at oracle.net.config.Config.initConfig(Unknown Source)
at oracle.net.config.Config.<init>(Unknown Source)
at oracle.sysman.assistants.util.NetworkUtils.<init>(NetworkUtils.java:269)
at oracle.sysman.assistants.util.step.StepContext.<init>(StepContext.java:317)
at oracle.sysman.assistants.dbca.backend.Host.<init>(Host.java:778)
at oracle.sysman.assistants.dbca.ui.UIHost.<init>(UIHost.java:257)
at oracle.sysman.assistants.dbca.ui.InteractiveHost.<init>(InteractiveHost.java:54)
at oracle.sysman.assistants.dbca.Dbca.getHost(Dbca.java:164)
at oracle.sysman.assistants.dbca.Dbca.execute(Dbca.java:112)
at oracle.sysman.assistants.dbca.Dbca.main(Dbca.java:184)


You will be able to install the Oracle binaries successfully, but during database (SID) creation you will get the above error.


Reason:

The prerequisites have not been met.


Resolution:


  1. Check that you downloaded the correct Oracle binaries for your OS version.
  2. sqlplus: error while loading shared libraries: libclntsh.so.11.1 — the prerequisites have not been met. Make sure the "gcc" package has been installed.


I suggest you download the recommended Oracle RPM packages and start the installation again.



List of supported Oracle 11G RPM packages:

rpm -Uvh binutils-2*x86_64*
rpm -Uvh glibc-2*x86_64* nss-softokn-freebl-3*x86_64*
rpm -Uvh glibc-2*i686* nss-softokn-freebl-3*i686*
rpm -Uvh compat-libstdc++-33*x86_64*
rpm -Uvh glibc-common-2*x86_64*
rpm -Uvh glibc-devel-2*x86_64*
rpm -Uvh glibc-devel-2*i686*
rpm -Uvh glibc-headers-2*x86_64*
rpm -Uvh elfutils-libelf-0*x86_64*
rpm -Uvh elfutils-libelf-devel-0*x86_64*
rpm -Uvh gcc-4*x86_64*
rpm -Uvh gcc-c++-4*x86_64*
rpm -Uvh ksh-*x86_64*
rpm -Uvh libaio-0*x86_64*
rpm -Uvh libaio-devel-0*x86_64*
rpm -Uvh libaio-0*i686*
rpm -Uvh libaio-devel-0*i686*
rpm -Uvh libgcc-4*x86_64*
rpm -Uvh libgcc-4*i686*
rpm -Uvh libstdc++-4*x86_64*
rpm -Uvh libstdc++-4*i686*
rpm -Uvh libstdc++-devel-4*x86_64*
rpm -Uvh make-3.81*x86_64*
rpm -Uvh numactl-devel-2*x86_64*
rpm -Uvh sysstat-9*x86_64*
rpm -Uvh compat-libstdc++-33*i686*
rpm -Uvh compat-libcap*


List of supported Oracle 12C RPM packages:

yum install binutils -y
yum install compat-libcap1 -y
yum install compat-libstdc++-33 -y
yum install compat-libstdc++-33.i686 -y
yum install gcc -y
yum install gcc-c++ -y
yum install glibc -y
yum install glibc.i686 -y
yum install glibc-devel -y
yum install glibc-devel.i686 -y
yum install ksh -y
yum install libgcc -y
yum install libgcc.i686 -y
yum install libstdc++ -y
yum install libstdc++.i686 -y
yum install libstdc++-devel -y
yum install libstdc++-devel.i686 -y
yum install libaio -y
yum install libaio.i686 -y
yum install libaio-devel -y
yum install libaio-devel.i686 -y
yum install libXext -y
yum install libXext.i686 -y
yum install libXtst -y
yum install libXtst.i686 -y
yum install libX11 -y
yum install libX11.i686 -y
yum install libXau -y
yum install libXau.i686 -y
yum install libxcb -y
yum install libxcb.i686 -y
yum install libXi -y
yum install libXi.i686 -y
yum install make -y
yum install sysstat -y
yum install unixODBC -y
yum install unixODBC-devel -y



How to convert scn to a timestamp in Oracle

Oracle has built-in features with which you can convert an SCN to a timestamp and a timestamp to an SCN. SCN_TO_TIMESTAMP takes as an argument a number that evaluates to a system change number (SCN) and returns the approximate timestamp associated with that SCN. This is useful any time you want to know the timestamp associated with an SCN. Oracle provides two functions for these conversions, SCN_TO_TIMESTAMP and TIMESTAMP_TO_SCN.




  • The SCN is very important, particularly when you are doing RMAN recovery.
  • It is used for archivelog gap resolution and recovery of a physical standby server.
  • It can be used in a Data Pump export parameter file via FLASHBACK_SCN to ensure a consistent copy of the database as of that point in time.


HOW TO CONVERT SCN TO TIMESTAMP AND TIMESTAMP INTO SCN



Check the current SCN of the database using the below query:
SQL>select current_scn from v$database;

current_scn
--------------
4426538972




To get the timestamp value from the current scn number.
SQL> select scn_to_timestamp(4426538972) as timestamp from dual;

timestamp
-------------------------------
23-SEP-18 03.22.42.000000000 PM




To get the scn number from the timestamp.
SQL> select timestamp_to_scn(to_timestamp('23/09/2018 15:22:44','DD/MM/YYYY HH24:MI:SS')) as scn from dual;

SCN
----------
4426538972
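A common use of this conversion is finding roughly when a row was last modified via its ORA_ROWSCN pseudocolumn. The table and predicate below are hypothetical:

```sql
-- emp is a placeholder table; note that ORA_ROWSCN is tracked
-- per block unless the table was created with ROWDEPENDENCIES,
-- so the timestamp is approximate.
SELECT ora_rowscn,
       scn_to_timestamp(ora_rowscn) AS approx_modified
FROM   emp
WHERE  empno = 7369;
```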


How to stop Materialized view Auto Refresh in Oracle

A materialized view is a database object that contains the results of a query. A materialized view created with automatic refresh cannot be altered to stop refreshing. To stop the refresh, you must break the dbms_job that was created to refresh the view.


Materialized views are local copies of data located remotely, or are used to create summary tables based on aggregations of a table's data. Materialized views that store data based on remote tables are also known as snapshots.


Steps to Disable Automatic Refresh of Materialized View.



STEP 1. Connect as mview owner and execute the below query.
select * from user_jobs where broken ='N';



STEP 2. In the WHAT column for the mview refresh job you will see:
dbms_refresh.refresh('"[OWNER]"."[MVIEW_NAME]"');



STEP 3. Take the JOB_ID from the JOB column and execute the below procedure; the mview will stop refreshing automatically:
begin
dbms_job.broken(JOB_ID,TRUE);
commit;
end;
/


Example:
begin
dbms_job.broken(25,TRUE);
commit;
end;
/



If you want the mview to start refreshing again just run the job.
begin
dbms_job.run(JOB_ID);
commit;
end;
/


Example:
begin
dbms_job.run(25);
commit;
end;
/
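In either case you can confirm the job's state afterwards; BROKEN = 'Y' means the automatic refresh is disabled:

```sql
-- Run as the mview owner; the refresh job appears in the WHAT column.
SELECT job, what, broken, next_date
FROM   user_jobs;
```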


That's all about disabling a materialized view. If you want to stop a materialized view from auto-refreshing, just run the above procedure and then check the status of the job.

How to Flush a Single SQL Statement from the Shared Pool

Sometimes it is necessary to flush a single SQL plan from the shared pool, and this is a better idea than flushing all SQL plans from memory. After baselining or fixing a bad plan, DBAs often want to kick the old plan out of memory to confirm that the optimizer picks the correct one.


Many DBAs simply flush the whole shared pool with ALTER SYSTEM FLUSH SHARED_POOL, which is not a good idea. If you are struggling to fix the bad plan of a single query, it is advisable to flush just that SQL_ID out of memory.


STEP 1: Find the ADDRESS and HASH_VALUE of the particular SQL_ID.
SQL> select ADDRESS, HASH_VALUE from V$SQLAREA where SQL_ID='251fjyn5fj84q';

ADDRESS HASH_VALUE
---------------- ------------------------

00000005DGEC9DE0 257655674



STEP 2: Purge sql plan from the shared pool by passing the above values.
SQL> exec DBMS_SHARED_POOL.PURGE ('00000005DGEC9DE0, 257655674', 'C');

PL/SQL procedure successfully completed.


Here, 'C' indicates that the object being purged is a cursor.

STEP 3: Check whether the plan still exists in memory. If no rows are selected, the plan has been flushed out of memory for that SQL_ID.

SQL> select ADDRESS, HASH_VALUE from V$SQLAREA where SQL_ID='251fjyn5fj84q';

no rows selected


In the above example, we used a V$ view to find cursor information. In a RAC environment, change the query to use GV$SQLAREA, in case the SQL statement you are looking for was executed on an instance other than the one you are currently logged into.
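For example, a RAC-aware variant of the step 1 lookup might look like this (same sample SQL_ID as above):

```sql
-- GV$SQLAREA adds INST_ID, so you can see which instance
-- is holding the cursor before purging it there.
SELECT inst_id, address, hash_value
FROM   gv$sqlarea
WHERE  sql_id = '251fjyn5fj84q';
```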


So this is all about flushing a SQL statement from the shared pool. If you are facing such performance issues, you can use the above method to flush the plan for a particular SQL_ID from memory.



Best ways to Describe Table in Vertica


There are various ways to describe a table in Vertica; some of them are listed below.


List table definition using \d:
=>  \d table_name;
=> \d schema_name.table_name;

It will list all the below details:
Schema
Table
Column
Type
Size
Default
Not Null
Primary Key
Foreign Key

oraadmin=> \d P1_orahow.u_acct_trans;
List of Fields by Tables            

  Schema  |        Table |         Column         |     Type    
----------+----------------------+------------------------+---------

 P1_orahow | u_acct_trans| act_id                 | numeric(19,0)
 P1_orahow | u_acct_trans| sfl_id                 | int          
 P1_orahow | u_acct_trans| sts_id                 | int          
 P1_orahow | u_acct_trans| act_record_number      | int          
 P1_orahow | u_acct_trans| act_record_address     | numeric(19,0)
 P1_orahow | u_acct_trans| act_record_length      | int          
 P1_orahow | u_acct_trans| act_record_type        | varchar(255)
 P1_orahow | u_acct_trans| act_duplicate_hashcode | int          
 P1_orahow | u_acct_trans| act_dup_fl             | char(1)      
 P1_orahow | u_acct_trans| act_filename           | varchar(255)
 P1_orahow | u_acct_trans| act_cust_id            | varchar(255)
 P1_orahow | u_acct_trans| act_acct_id            | varchar(255)
 P1_orahow | u_acct_trans| act_subs_id            | numeric(19,0)
 P1_orahow | u_acct_trans| act_re_name            | varchar(255)
 P1_orahow | u_acct_trans| act_item_type_name     | varchar(255)
 P1_orahow | u_acct_trans| act_item_type_code     | varchar(255)
 P1_orahow | u_acct_trans| act_event_time         | timestamp    
 P1_orahow | u_acct_trans| act_charge             | numeric(19,0)
 P1_orahow | u_acct_trans| act_price_plan_code    | varchar(255)
 P1_orahow | u_acct_trans| act_price_plan_name    | varchar(255)
 P1_orahow | u_acct_trans| act_event_inst_id      | varchar(255)
 P1_orahow | u_acct_trans| act_msisdn             | varchar(255)



List high level Table definition:
oraadmin=> \dt u_acct_trans_log;

                          List of tables

    Schema     |         Name         | Kind  |  Owner   | Comment
---------------+----------------------+-------+----------+---------
 P1_orahow      | u_acct_trans | table | oraadmin  |
 P1_orahow_test | u_acct_trans | table | ora_test |

(2 rows)



Describe from v_catalog.columns definition:
SELECT *
FROM   v_catalog.columns
WHERE  table_schema='P1_orahow'
       AND table_name='u_acct_trans'
ORDER  BY ordinal_position;



List the table definition using EXPORT_TABLES:
oraadmin=> SELECT EXPORT_TABLES('', 'schema_name.table_name');
oraadmin=> SELECT EXPORT_TABLES('', 'P1_orahow.u_acct_trans');



List the table definition using EXPORT_OBJECTS, which gives you the full CREATE statement, including projections:
oraadmin=> SELECT export_objects('', 'schema_name.table_name');
oraadmin=> SELECT export_objects('', 'P1_orahow.u_acct_trans');



List all tables in Public Schema:
oraadmin-> \dt public.*;
                        List of tables

 Schema |           Name           | Kind  |  Owner  | Comment

--------+--------------------------+-------+---------+---------

 public | abc                      | table | oraadmin |
 public | cdr_offpeak_ts_temp      | table | oraadmin |
 public | cdr_percall_ts_temp      | table | oraadmin |
 public | gprs_ts_temp             | table | oraadmin |
 public | ipdr_ts_temp             | table | oraadmin |
 public | nik_cdr_offpeak_summary  | table | oraadmin |
 public | nik_cdr_per_call_summary | table | oraadmin |
 public | nik_gprs_cdr_summary     | table | oraadmin |
 public | nik_ipdr_summary         | table | oraadmin |
 public | nik_recharge_log_summary | table | oraadmin |
 public | recharge_log_ts_temp     | table | oraadmin |

(11 rows)



Script to Monitor RMAN Backup Status and Timings

As a DBA, you are often asked to check the status and details of RMAN backup jobs. There are many ways to monitor the progress of a backup, but you can use the scripts below to check the status of RMAN jobs such as full, incremental, and archivelog backups.

You can use views like v$rman_backup_job_details and V$SESSION_LONGOPS to monitor the current executing RMAN jobs and the status of the previously completed backups.
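As a quick first check of whether RMAN is active at all, you can look for RMAN channel sessions in v$session (a sketch; the PROGRAM filter may need adjusting for your client):

```sql
-- RMAN channel sessions typically report a program name
-- starting with 'rman'; adjust the pattern if yours differ.
SELECT sid, serial#, program, status
FROM   v$session
WHERE  program LIKE 'rman%';
```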



SQL> desc v$rman_backup_job_details
 Name                           Null?    Type
 ------------------------------ -------- --------------------
 SESSION_KEY                             NUMBER
 SESSION_RECID                           NUMBER
 SESSION_STAMP                           NUMBER
 COMMAND_ID                              VARCHAR2(33)
 START_TIME                              DATE
 END_TIME                                DATE
 INPUT_BYTES                             NUMBER
 OUTPUT_BYTES                            NUMBER
 STATUS_WEIGHT                           NUMBER
 OPTIMIZED_WEIGHT                        NUMBER
 OBJECT_TYPE_WEIGHT                      NUMBER
 OUTPUT_DEVICE_TYPE                      VARCHAR2(17)
 AUTOBACKUP_COUNT                        NUMBER
 BACKED_BY_OSB                           VARCHAR2(9)
 AUTOBACKUP_DONE                         VARCHAR2(9)
 STATUS                                  VARCHAR2(69)
 INPUT_TYPE                              VARCHAR2(39)
 OPTIMIZED                               VARCHAR2(9)
 ELAPSED_SECONDS                         NUMBER
 COMPRESSION_RATIO                       NUMBER
 INPUT_BYTES_PER_SEC                     NUMBER
 OUTPUT_BYTES_PER_SEC                    NUMBER
 INPUT_BYTES_DISPLAY                     VARCHAR2(4000)
 OUTPUT_BYTES_DISPLAY                    VARCHAR2(4000)
 INPUT_BYTES_PER_SEC_DISPLAY             VARCHAR2(4000)
 OUTPUT_BYTES_PER_SEC_DISPLAY            VARCHAR2(4000)
 TIME_TAKEN_DISPLAY                      VARCHAR2(4000)


This script will report status of current as well as completed backup details like full, incremental and archivelog backups:

col STATUS format a9
col hrs format 999.99
select
SESSION_KEY, INPUT_TYPE, STATUS,
to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
to_char(END_TIME,'mm/dd/yy hh24:mi')   end_time,
elapsed_seconds/3600                   hrs
from V$RMAN_BACKUP_JOB_DETAILS
order by session_key;


SESSION_KEY INPUT_TYPE    STATUS    START_TIME     END_TIME           HRS
----------- ------------- --------- -------------- -------------- -------
        585 ARCHIVELOG    COMPLETED 01/08/15 06:00 01/08/15 06:02     .03
        591 ARCHIVELOG    COMPLETED 01/08/15 12:00 01/08/15 12:01     .02
        596 ARCHIVELOG    COMPLETED 01/08/15 18:01 01/08/15 18:02     .03
        601 DB INCR       FAILED    01/08/15 20:00 01/09/15 01:47    5.79
        603 ARCHIVELOG    COMPLETED 01/09/15 06:00 01/09/15 06:07     .12
        608 ARCHIVELOG    COMPLETED 01/09/15 12:00 01/09/15 12:09     .16
        613 ARCHIVELOG    COMPLETED 01/09/15 15:07 01/09/15 15:25     .29




The below script reports the percentage of completion along with the SID and SERIAL#.

SELECT SID, SERIAL#, CONTEXT, SOFAR, TOTALWORK,
ROUND (SOFAR/TOTALWORK*100, 2) "% COMPLETE"
FROM V$SESSION_LONGOPS
WHERE OPNAME LIKE 'RMAN%' AND OPNAME NOT LIKE '%aggregate%'
AND TOTALWORK != 0 AND SOFAR <> TOTALWORK;


 SID       SERIAL#    CONTEXT    SOFAR      TOTALWORK  %COMPLETE
---------- ---------- ---------- ---------- ---------- ----------
 22        31         1          8225460    18357770   45.21


Best Way to move all objects from one Tablespace to another in Oracle

As a DBA, there are many occasions where you need to move all database objects from one tablespace to another to reorganize them. Too many tablespaces consume a lot of space, are difficult to manage, and cause extra overhead for Oracle. In these situations you need to move tables, indexes, and other database objects to the newly created tablespace.

Recently, as part of a maintenance activity, we observed that the USERS tablespace was consuming 363 GB of disk space, but when we checked dba_segments, the actual size of the objects was only 3 GB. We tried to resize the datafiles, but got the below error.

ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value


Oracle throws this error because the datafile has filled up to its high water mark (HWM): some objects may reside near the end of the file, and you cannot shrink a datafile below the HWM.

For example, suppose you created a tablespace of 120 GB and created two tables inside it, each 60 GB. The overall size of the tablespace is now 120 GB, and the tablespace has reached its high water mark because it contains no data beyond the HWM (120 GB).

What happens if you drop table1 and then try to resize the datafile?

In spite of 60 GB of free space inside the tablespace, the database does not allow you to resize the datafile, because the tablespace has reached its maximum size and still contains data (table2) sitting near the HWM; it throws the error "file contains used data beyond requested RESIZE value".


Before dropping table1 (file used up to the HWM at 120 GB):

<====== table1 ======|====== table2 ======>
0                  60GB                 120GB

After dropping table1 (free space below, table2 still sitting near the HWM):

<==== free space ====|====== table2 ======>
0                  60GB                 120GB


As we can see, the actual size of the data in the USERS tablespace is only 3.5 GB, but it occupies 363 GB of disk space. When we tried to resize the datafiles, Oracle threw the ORA-03297 error because of the HWM. To overcome this, you must move all objects from the USERS tablespace to a new tablespace.






You can move all objects to a different tablespace in many ways: for example, using the ALTER TABLE ... MOVE command or the DBMS_REDEFINITION package. But the safest and easiest way is to use REMAP_TABLESPACE with expdp/impdp. If the tablespace contains clustered objects, you cannot use ALTER TABLE ... MOVE or other scripts, so the only option is expdp.


Steps to Move objects into different tablespace using EXPDP:

STEP 1: Create a directory for the export dumpfile:
SQL> create or replace directory test_dir as '/data/oracle';
Directory created.


STEP2: Grant read, write on the newly created directory.
SQL> grant read, write on directory test_dir to username;
Grant succeeded.


STEP 3: Export all tablespace objects using expdp.
nohup expdp \"/ as sysdba\" DIRECTORY=test_dir DUMPFILE=users.dmp LOGFILE=users.log TABLESPACES=USERS &


STEP 4: Import the objects into the newly created tablespace using REMAP_TABLESPACE.
Note that you must use TABLE_EXISTS_ACTION=REPLACE; otherwise the database will throw an "object already exists" error and skip each object, because the default TABLE_EXISTS_ACTION is SKIP.
nohup impdp \"/ as sysdba\" DIRECTORY=test_dir DUMPFILE=users.dmp table_exists_action=replace remap_tablespace=USERS:MTNGB1 LOGFILE=users.log &

Finally, verify the objects in both tablespaces and drop the tablespace that was consuming the huge space.
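To verify the move, a simple segment count per tablespace can be compared before and after the import; the schema name below is a placeholder:

```sql
-- SCOTT is a placeholder for the schema whose objects were moved.
SELECT tablespace_name, segment_type, COUNT(*) AS segments
FROM   dba_segments
WHERE  owner = 'SCOTT'
GROUP  BY tablespace_name, segment_type
ORDER  BY tablespace_name, segment_type;
```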


How to Fix AHM not Advancing to Last Good Epoch in Vertica

Vertica advances the AHM at an interval of 5 minutes to keep it equal to the Last Good Epoch (LGE). If projections are left unrefreshed for some reason, the AHM does not advance. The AHM is never greater than the LGE.



Definitions and concepts you should know before troubleshooting an AHM lagging issue:


Ancient History Mark -AHM:
The ancient history mark - AHM is the epoch prior to which historical data can be purged from physical storage.


Epoch:
An epoch is a 64-bit number that represents a logical time stamp for the data in Vertica. Every row has an implicitly stored column that records its committed epoch.

The epoch advances when the data is committed with a DML operation (INSERT, UPDATE, MERGE, COPY, or DELETE). The EPOCHS system table contains the date and time of each closed epoch and the corresponding epoch number of the closed epoch.

Using below query, you can check which time periods pertain to which epochs:
oradmin=> SELECT * FROM epochs;



Current Epoch (CE):
The current epoch is the open epoch that becomes the last epoch (LE) after a COMMIT operation. The current_epoch at the time of the COMMIT is the epoch for that DML.
oradmin=> SELECT CURRENT_EPOCH FROM SYSTEM;

CURRENT_EPOCH
---------------
629415



Latest Epoch (LE):
The latest epoch is the most recently closed epoch. The current epoch after the COMMIT operation becomes the latest epoch.


Checkpoint Epoch (CPE):
The checkpoint epoch per projection is the latest epoch for which there is no data in the WOS. It is the point at which the projection can be recovered. The Tuple Mover moveout operation advances the projection CPE while moving the data from WOS to the ROS. You can see the projection checkpoint epochs in the PROJECTION_CHECKPOINT_EPOCHS system table.


Last Good Epoch (LGE):
The minimum checkpoint epoch across all the nodes is known as the last good epoch. The last good epoch refers to the most recent epoch that can be recovered in a manual recovery. The LGE consists of a snapshot of all the data on the disk. If the cluster shuts down abnormally, the data after the LGE is lost.
The Tuple Mover advances the CPE and sets a new LGE. If the Tuple Mover fails, the data does not move from the WOS to the ROS. Hence, the data does not advance the CPE and the LGE.


To see the cluster last good epoch, you can use the following command:
oradmin=> SELECT GET_LAST_GOOD_EPOCH();



Last Good Epoch Does Not Advance
There are certain situations when the last good epoch does not advance. If the LGE advances, you see the following result. When the Tuple Mover moves the data from the WOS to the ROS, the LGE advances:

oradmin=>SELECT CURRENT_EPOCH, LAST_GOOD_EPOCH FROM SYSTEM ;

 CURRENT_EPOCH   |   LAST_GOOD_EPOCH

---------------+-----------------

        731384   |       721381



If you do not see the LGE advance, check if there is data in the WOS:
oradmin=>SELECT sum(wos_used_bytes) from projection_storage ;


If there is data in the WOS, force a moveout operation:
oradmin=> SELECT do_tm_task('moveout');


Ancient History Mark Does Not Advance
If the difference between the Last Good Epoch (LGE) and the Ancient History Mark (AHM) is huge, you need to close the gap, because a large gap can make data recovery take hours.
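Since unrefreshed projections are a common cause of a lagging AHM, a useful first check is to look for projections that are not up to date. This is a sketch; the flag column name (is_up_to_date here) can vary slightly across Vertica versions:

```sql
-- Projections that have not caught up; refreshing or dropping
-- them is typically what lets the AHM advance again.
SELECT projection_schema, projection_name
FROM   v_catalog.projections
WHERE  NOT is_up_to_date;
```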


You can check the LGE and AHM difference using below query:
oradmin=> select get_current_epoch(),get_last_good_epoch(),get_ahm_epoch(),(get_current_epoch()- get_last_good_epoch()) LGECEDiff,(get_last_good_epoch()-get_ahm_epoch()) LGEAHMDiff, get_ahm_time();


oradmin=> select get_current_epoch() CE,get_last_good_epoch() LGE,get_ahm_epoch () AHM,(get_current_epoch()- get_last_good_epoch()) CeLGDiff,(get_last_good_epoch()-get_ahm_epoch())LgeAHmDiff,get_expected_recovery_epoch();



INFO 4544:  Recovery Epoch Computation:

Node Dependencies:
011 - cnt: 9448
101 - cnt: 9448
110 - cnt: 9448
111 - cnt: 5
Nodes certainly in the cluster:

        Node 1(v_usagedb_node0002), epoch 1164674

        Node 2(v_usagedb_node0003), epoch 1164674

Filling more nodes to satisfy node dependencies:

Data dependencies fulfilled, remaining nodes LGEs don't matter:

        Node 0(v_usagedb_node0001), epoch 1164669

   CE    |   LGE   |  AHM   | CeLGDiff | LgeAHmDiff | get_expected_recovery_epoch

---------+---------+--------+----------+------------+-----------------------------

 1164675 | 1164669 | 797307 |        6 |     367362 |                     1164674



To sync AHM with LGE, execute the below command:
oradmin=> SELECT MAKE_AHM_NOW();

The above command performs the below operations:
  • Advances the epoch
  • Performs a moveout operation on all projections.

How to Check the Size of Tables in Vertica

Vertica stores table data in a compressed format. To get the size of a table in Vertica, you can use the queries below; the column_storage and projection_storage system tables report the size of the table as stored (compressed). The relevant column definitions from v_monitor.projection_storage are listed below.


COLUMN DEFINITION:
----------------------------------


ANCHOR_TABLE_NAME: VARCHAR
The associated table name for which information is listed.


ANCHOR_TABLE_SCHEMA: VARCHAR
The associated table schema for which information is listed.

USED_BYTES: INTEGER
The number of bytes of disk storage used by the projection.



SELECT anchor_table_schema,
       anchor_table_name,
       SUM(used_bytes) / 1024 / 1024 / 1024 AS table_size_gb
FROM   v_monitor.projection_storage
GROUP  BY anchor_table_schema,
          anchor_table_name
ORDER  BY SUM(used_bytes) DESC;
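A note on the unit conversion in queries like the one above: to turn used_bytes into GB you must divide by 1024 three times. Writing the divisor as a bracketed chain such as (1024/1024/1024) actually divides by a fraction and inflates the figure. A minimal Python sketch (the byte count is a made-up example):

```python
# Minimal sketch of the bytes-to-GB conversion; the byte count is a
# made-up example value.
used_bytes = 120_600_480

# Correct: divide by 1024 three times (bytes -> KB -> MB -> GB).
size_gb = used_bytes / 1024 / 1024 / 1024

# Wrong: (1024/1024/1024) evaluates to 1/1024, so dividing by it
# multiplies the byte count by 1024 instead of shrinking it.
inflated = used_bytes / (1024 / 1024 / 1024)

print(round(size_gb, 4))      # 0.1123
print(inflated > used_bytes)  # True
```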


To find number of rows and bytes occupied by each table in the database
--------------------------------------------------------------------------------------------
SELECT t.table_name AS table_name,
SUM(ps.wos_row_count + ps.ros_row_count) AS row_count,
SUM(ps.wos_used_bytes + ps.ros_used_bytes) AS byte_count
FROM tables t
JOIN projections p ON t.table_id = p.anchor_table_id
JOIN projection_storage ps on p.projection_name = ps.projection_name
WHERE (ps.wos_used_bytes + ps.ros_used_bytes) > 500000
--and t.table_name='table_name'
GROUP BY t.table_name
ORDER BY byte_count DESC;


To find the size of single table in the database:
-----------------------------------------------------------
SELECT anchor_table_name,
       projection_schema,
       SUM(used_bytes) / 1024 / 1024 / 1024 AS total_size
FROM   projection_storage
WHERE  anchor_table_name = 'TABLE_NAME'
  AND  anchor_table_schema = 'TABLE_SCHEMA'
  AND  projection_name LIKE '&PROJECTION_NAME'
GROUP  BY projection_schema, anchor_table_name;



SELECT anchor_table_schema,
       anchor_table_name,
       SUM(used_bytes) / 1024 / 1024 / 1024 AS table_size_gb
FROM   v_monitor.column_storage
GROUP  BY anchor_table_schema,
          anchor_table_name
ORDER  BY SUM(used_bytes) DESC;


How to Change MySQL Data Directory to a New Location on Linux

In this article we will explain how to change the default MySQL data directory to a different mount point or location on a CentOS/RHEL 7 server.

Sometimes there is a requirement to change the default MySQL data directory (/var/lib/mysql) to a new location based on the expected use of the database server. The default /var location may not be able to hold the incoming data, and over time the database can run into I/O contention or a space crunch.

Thus it is good practice to relocate MySQL's default data directory to a new location.

In this example, we are moving the data directory to the mount point /DBdata. You can also create and use a separate directory for this purpose.



Check the mount point's free space and create the new data directory.
In this example we use /DBdata as the new directory location.

[orahow@orahowdb ~]$ df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/cl_clunode3-DBdata  500G   33M  500G   1% /DBdata
# mkdir directory_name
# chown -R mysql:mysql directory_name


STEP 1:  Check the current MySQL Data Directory Location
To identify the current data directory, log in to the MySQL server and run the command below.
[orahow@orahowdb ~]$ mysql -u root -p
Enter password:


mysql> select @@datadir;
+-----------------+
| @@datadir       |
+-----------------+
| /var/lib/mysql/ |
+-----------------+
1 row in set (0.00 sec)


You can also run the following command to see the current data directory location:

mysql> show variables like 'datadir';



STEP 2: Check the Current Status of MySQL Server:
[orahow@orahowdb ~]$ service mysql status
Redirecting to /bin/systemctl status mysql.service
● mysqld.service - MySQL Community Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-05-23 19:33:55 IST; 764ms ago
  Process: 13660 ExecStartPost=/usr/bin/mysql-systemd-start post (code=exited, status=0/SUCCESS)
  Process: 13644 ExecStartPre=/usr/bin/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
 Main PID: 13659 (mysqld_safe)
    Tasks: 12
   CGroup: /system.slice/mysqld.service
           ├─13659 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
           └─13825 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mysqld.log --pid-file=/var/run/...


STEP 3: To ensure data integrity, shut down the MySQL server before making changes to the data directory.
systemctl doesn't display the outcome of all service management commands, so if you want to be sure you've succeeded, use the following command:
[orahow@orahowdb ~]$ sudo systemctl stop mysqld
[orahow@orahowdb ~]$ service mysql status
[orahow@orahowdb ~]$ sudo systemctl status mysqld
.......
.......
May 23 19:34:11 orahow.oradb systemd[1]: Stopping MySQL Community Server...
May 23 19:35:34 orahow.oradb systemd[1]: Stopped MySQL Community Server.



STEP 4: Copy and Synchronize the Default Directory (/var/lib/mysql) to the New Location.
To copy the existing database directory to the new location, we will use rsync (remote sync), one of the most commonly used commands for copying and synchronizing files and directories, both remotely and locally, on Linux/Unix systems.
[orahow@orahowdb ~]$ cd /var/lib/mysql
[orahow@orahowdb mysql]$ ls -lrt
total 116804
drwx------. 2 mysql mysql     4096 May 17 10:35 mysql
drwx------. 2 mysql mysql     4096 May 17 10:35 performance_schema
-rw-rw----. 1 mysql mysql       52 May 17 10:35 aria_log_control
-rw-rw----. 1 mysql mysql    16384 May 17 10:35 aria_log.00000001
-rw-r--r--  1 mysql mysql      276 May 23 15:33 RPM_UPGRADE_MARKER-LAST
-rw-r--r--  1 root  root       276 May 23 15:33 RPM_UPGRADE_HISTORY
-rw-rw----  1 mysql mysql 50331648 May 23 15:54 ib_logfile1
-rw-rw----  1 mysql mysql       56 May 23 15:54 auto.cnf
srwxrwxrwx  1 mysql mysql        0 May 23 16:01 mysql.sock
-rw-rw----. 1 mysql mysql 18874368 May 23 16:01 ibdata1
-rw-rw----  1 mysql mysql 50331648 May 23 16:01 ib_logfile0
-rw-r-----  1 mysql mysql        0 May 23 16:13 binlog.index


We have already shut down the MySQL server; now we will copy the existing database directory to the new location with the rsync command. We will use the -a flag, which preserves permissions and other directory properties, while -v provides verbose output so that you can follow the progress.

Note: Don't use a trailing slash on the source directory; otherwise rsync will dump the contents of the directory into the mount point instead of transferring them into a containing mysql directory.

[orahow@orahowdb DBdata]$ sudo rsync -av /var/lib/mysql /DBdata
sending incremental file list
mysql/
mysql/RPM_UPGRADE_HISTORY
mysql/RPM_UPGRADE_MARKER-LAST
mysql/aria_log.00000001
mysql/aria_log_control
mysql/auto.cnf
mysql/performance_schema/threads.frm

sent 120,636,427 bytes  received 1,962 bytes  80,425,592.67 bytes/sec
total size is 120,600,480  speedup is 1.00


Once the synchronization is complete, rename the current folder with a .bak extension and keep it until we’ve confirmed the move was successful.
[orahow@orahowdb ~]$ sudo mv /var/lib/mysql /var/lib/mysql_23rd_may_2018.bak


STEP 5: Configure and Point the Existing Data Directory to the New Location
By default, datadir is set to /var/lib/mysql in the /etc/my.cnf file. Edit this configuration file to point at the new data directory.
The necessary changes are marked below. Add a [client] block if it doesn't exist, and point its socket file to the new location.

[orahow@orahowdb ~]$ sudo vi /etc/my.cnf
# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html

[mysqld]
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
#datadir=/var/lib/mysql
datadir=/DBdata/mysql
socket=/DBdata/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Recommended in standard MySQL setup
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[client]
port=3306
socket=/DBdata/mysql/mysql.sock
Once done, save the configuration file and exit (press Escape, then type :wq!).


STEP 6: Finally, Restart the MySQL Server
We have updated the configuration file to use the new location; now we are ready to start the MySQL server and verify the new data directory.
[orahow@orahowdb ~]$ sudo systemctl start mysqld
[orahow@orahowdb ~]$ sudo systemctl status mysqld


To make sure that the new data directory is indeed in use, start the MySQL monitor. Look at the value for the data directory again:

[orahow@orahowdb ~]$ mysql -u root -p
Enter password:
mysql> select @@datadir;
Output
+----------------+
| @@datadir      |
+----------------+
| /DBdata/mysql/ |
+----------------+
1 row in set (0.01 sec)

Now that you’ve restarted MySQL and confirmed that it’s using the new location, take the opportunity to ensure that your database is fully functional. Once you’ve verified the integrity of any existing data, you can remove the backup data directory with sudo rm -Rf /var/lib/mysql_23rd_may_2018.bak.

Conclusion:
In this tutorial, we moved MySQL's data directory to a new location. Although we have updated the configuration and restarted the service, sometimes there might be a port conflict. So check whether the MySQL port is already in use. If it is, find the process that is listening on the port, kill that process, and restart the MySQL server.
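For the port-conflict check mentioned above, a simple TCP connect test is enough to see whether something is already listening on the MySQL port. A minimal sketch; the host and port (3306, the MySQL default) are assumptions for illustration:

```python
# Minimal sketch: check whether anything is listening on a TCP port.
# Host and port here are assumptions; adjust for your environment.
import socket

def port_in_use(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    print(port_in_use("127.0.0.1", 3306))
```

If the port is taken, on most Linux systems ss -lntp or lsof -i :3306 will show which process owns it.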

Hidden SQL- Why v$sql is not displaying sql_fulltext in Oracle

Recently I came across a situation where I was not able to see the SQL text for my sql_id. The developers and application team were struggling to fix a backlog piling up in the system during data loading. They had configured a piece of code called dbwritter (not the database writer process). This process picks a file from the app server, loads it into a temporary table, and finally exchanges the partition with the main table.

For faster access, it was caching the temporary table into the KEEP cache before exchanging the partition with the main table, but at one point the code was getting stuck, and we were struggling to find the SQL text that was blocked. Every time, the code got stuck at the same place and made no progress at all.


We killed and restarted the whole process again and again, but every time the code got stuck at the same place; it displayed the sql_id but not the sql_fulltext.

We tried many views, like v$sql, v$session, v$sqlarea and dba_hist_sqltext, but with no luck.

It was showing the wait event "enq: TX - contention" but no sql_text.


    SID Serial# A W Sec in Wait Event                SQL               SQL_ID
------- ------- - - ----------- -------------------- ----------------- -------------
   1637   28617 Y Y      159205 enq: TX - contention - Not Available - 5dxybryysj4g7

Finally, after struggling through the code, I tried to fetch all the active sessions and their SQL text. From the active-session output, we observed that one of the insert statements was using a hint, and because of that it was blocking the sessions.

We fetched the sql_fulltext from v$open_cursor, which v$session had not been showing earlier. This sql_id was putting the temporary table into the KEEP buffer cache.

Finally, we killed the session that was using the hint for the insert statement, and after that everything moved along fine.

SQL> select * from v$open_cursor where sql_id like '5dxybryysj4g7';
alter TABLE XXXX storage (buffer_pool keep);


How insert statement works internally in oracle

In this post, we will see the flow and sequence of steps that Oracle follows internally for the execution of an insert statement. Below are the flowchart and execution steps within the Oracle architecture.





How does the insert query execution occur?


  1. SQL*PLUS checks the syntax on client side.

  2. If the syntax is correct, the query is stamped as a valid sql statement, encoded into oci (oracle call interface) packets, and sent via lan using tcp to the server.

  3. Once the packets reach the server the server process will rebuild the query and again perform a syntax check on server side.

  4. Then if syntax is correct server process will continue execution of the query.

  5. The server process will go to the library cache. The library cache keeps the recently executed sql statements along with their execution plan.

  6. In the library cache the server process will search from the mru (most recently used) end to the lru (least recently used) end for a match for the sql statement. It does this by using a hash algorithm that returns a hash value. If the hash value of the query we have written matches with that of the query in library cache then server process need not generate an execution plan (soft parsing) but if no match is found then server process has to proceed with the generation of execution plan (hard parsing).

  7. Parsing is the process undertaken by oracle to generate an execution plan.

  8. The first step in parsing involves performing a semantic check. This is nothing but a check for the existence of the object and its structure in the database.

  9. This check is done by server process in the data dictionary cache. Here server process will ask for the definition of the object, if already available within the data dictionary cache, server process will process the check. If not available then server process will retrieve the required information from the system tablespace.

  10. After this in case of hard parsing the server process will approach the optimizer, who will read the sql statement and generate the execution plan of the query. The optimizer generates multiple execution plans during parsing.

  11. After generation of the e-plan's by the optimizer the server process will pick the best possible and cost effective e-plan and go to the library cache.

  12. In the library cache the server process will keep the e-plan in the library cache along with the original sql text.

  13. At this point in time the parsing ends and the execution of the sql statement will begin.

  14. After generation of e-plan server process will keep the plan in the library cache on the mru end.

  15. Thereafter the plan is picked up and execution of the insert operation will begin.

  16. Server process will bring empty blocks from the specific datafile of the tablespace in which the table will exist , into which rows must be inserted.

  17. The blocks will be brought into database block buffers(or database buffer cache).

  18. The blocks will be containing no data.

  19. Then the server process will bring an equal number of empty blocks from the rollback/undo tablespace. They will also be brought into the database block buffers.

  20. Server process will copy the address of the original data blocks of the userdata datafiles into the empty rollback/undo blocks.

  21. Then server process will bring a set of userdata blocks into the pga and the data will be added from the insert sql statement into user data blocks.

  22. After the insert operation is complete in the database buffer cache then dbwriter will write the data back to the respective datafiles after a certain time gap.
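The library-cache lookup in steps 5-6 and 10-14 can be pictured as a hash-keyed cache: hash the statement text, reuse the plan on a hit (soft parse), generate and store a plan on a miss (hard parse). A toy Python illustration of that idea, not Oracle's actual implementation:

```python
# Toy model of the library-cache lookup described in steps 5-6 and 10-14.
# This is an illustration of the idea only, not Oracle's implementation.
import hashlib

library_cache = {}  # hash value -> cached execution plan

def parse(sql_text):
    """Return (plan, parse_kind) for a statement."""
    h = hashlib.sha256(sql_text.encode()).hexdigest()
    if h in library_cache:              # hash match: reuse the cached plan
        return library_cache[h], "soft parse"
    plan = "PLAN for: " + sql_text      # stand-in for optimizer output
    library_cache[h] = plan             # keep the plan for next time
    return plan, "hard parse"

print(parse("INSERT INTO t VALUES (1)")[1])  # hard parse (first execution)
print(parse("INSERT INTO t VALUES (1)")[1])  # soft parse (cache hit)
```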

How update statement works internally in oracle

In this post, we will see the flow and sequence of steps which oracle follow internally for the execution of update statement.





How does the update query execution occur?


  1. SQL*PLUS checks the syntax on client side.

  2. If the syntax is correct, the query is stamped as a valid sql statement, encoded into oci (oracle call interface) packets, and sent via lan using tcp to the server.

  3. Once the packets reach the server the server process will rebuild the query and again perform a syntax check on server side.

  4. Then if syntax is correct server process will continue execution of the query.

  5. The server process will go to the library cache. The library cache keeps the recently executed sql statements along with their execution plan.

  6. In the library cache the server process will search from the mru (most recently used) end to the lru (least recently used) end for a match for the sql statement. It does this by using a hash algorithm that returns a hash value. If the hash value of the query we have written matches with that of the query in library cache then server process need not generate an execution plan (soft parsing) but if no match is found then server process has to proceed with the generation of execution plan (hard parsing).

  7. Parsing is the process undertaken by oracle to generate an execution plan.

  8. The first step in parsing involves performing a semantic check. This is nothing but a check for the existence of the object and its structure in the database.

  9. This check is done by server process in the data dictionary cache. Here server process will ask for the definition of the object, if already available within the data dictionary cache, server process will process the check. If not available then server process will retrieve the required information from the system tablespace.

  10. After this in case of hard parsing the server process will approach the optimizer, who will read the sql statement and generate the execution plan of the query. The optimizer generates multiple execution plans during parsing.

  11. After generation of the e-plan's by the optimizer the server process will pick the best possible and cost effective e-plan and go to the library cache.
  12. In the library cache the server process will keep the e-plan in the library cache along with the original sql text.

  13. At this point in time the parsing ends and the execution of the sql statement will begin.

  14. After generation of e-plan server process will keep the plan in the library cache on the mru end.

  15. Thereafter the plan is picked up by the server process and execution of the update will begin.

  16. Server process will bring the required blocks from the specific datafile of the table which has to be updated.

  17. The blocks will be brought into database block buffers(or database buffer cache).

  18. The blocks will be containing the original data of the table.

  19. Then the server process will bring an equal number of empty blocks from the undo tablespace, and they will also be brought into the database block buffers (or database buffer cache).

  20. Server process will copy the original data from the userdata blocks into the empty rollback/undo blocks and create a before image.

  21. Then server process will bring a set of userdata blocks into the pga (program global area) and after performing filter operations the selected rows will be updated with new content.

  22. The above update process will continue until all the userdata blocks have been checked and updated.

  23. After the update operation is complete then dbwriter will write the data back to the respective datafiles after a certain time gap.



Network Wait: SQL*Net more data from client in awr report

At one of the sites, I came across a performance issue where a data-loading task was taking longer than usual. After analysing the AWR report, we observed that network waits were high and the task was waiting on the event SQL*Net more data from client.

As you can see, below are AWR screenshots from two consecutive days, and in both there were backlogs. But in the second AWR, where the average wait was 158 ms, processing was a little faster than in the first, where the average wait was 293 ms.







As observed in the AWR, the network wait occurs because the shadow process has received part of a call from the client process (for example, SQL*Plus, Pro*C, or JDBC) in the first network packet and is waiting for the remaining data for the call to be complete. Examples are large SQL or PL/SQL blocks and insert statements with large amounts of data.

Possible causes include network latency problems, tcp_no_delay configuration issues, and large array inserts.

How select statement works internally in oracle

In this post we will discuss the order, or flow, of execution of a select statement in Oracle.

To write a select query against an Oracle database, we require an Oracle client installation on the client system. The Oracle client is nothing but Oracle SQL*Plus. Whenever we give the username, password, and host string to the SQL*Plus client, it takes the host string and looks it up in a file known as tnsnames.ora (Transparent Network Substrate).

This file is located at $ORACLE_HOME/network/admin/tnsnames.ora.
The Oracle client is also installed when we install Developer Forms or JDeveloper. The tnsnames file keeps the host string, or alias, which points to a connect descriptor. The descriptor keeps the IP address of the Oracle server, the port number of the listener, and the SID of the database. Using these details, SQL*Plus dispatches the given username and password to that address. The database authenticates the user and, if successful, a server process is initiated on the server side and a user process on the client side. After this, a valid session is established between the client and the server. The user then types a query at the SQL prompt.
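For reference, a typical tnsnames.ora entry has this shape; the alias, host address, port, and SID below are hypothetical placeholders:

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.10)(PORT = 1521))
    (CONNECT_DATA =
      (SID = orcl)
    )
  )
```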






Below is the select query execution flow in oracle:


  1.  SQL*PLUS checks the syntax on client side.

  2.  If the syntax is correct, the query is stamped as a valid sql statement, encoded into oci (oracle call interface) packets, and sent via lan using tcp to the server.

  3.  Once the packets reach the server the server process will rebuild the query and again perform a syntax check on server side.

  4. Then if syntax is correct server process will continue execution of the query.

  5. The server process will go to the library cache. The library cache will keep the recently executed sql statements along with their execution plan.

  6.  In the library cache the server process will search from the mru (most recently used) end to the lru (least recently used) end for a match for the sql statement. It does this by using a hash algorithm that returns a hash value. If the hash value of the query we have written matches with that of the query in library cache then server process need not generate an execution plan (soft parsing) but if no match is found then server process has to proceed with the generation of execution plan (hard parsing).

  7.  Parsing is the process undertaken by oracle to generate an execution plan.

  8.  The first step in parsing involves performing a semantic check. This is nothing but check for the existence of the object and its structure in the database.

  9.  This check is done by server process in the data dictionary cache. Here server process will ask for the definition of the object, if already available within the data dictionary cache, server process will process the check. If not available then server process will retrieve the required information from the system tablespace.

  10. After this in case of hard parsing the server process will approach the optimizer, who will read the sql statement and generate the execution plan of the query. the optimizer generates multiple execution plans during parsing.

  11. After generation of the e-plan's by the optimizer the server process will pick the best possible and cost effective e-plan and go to the library cache.

  12. In the library cache the server process will keep the e-plan in the library cache along with the original sql text.

  13. At this point in time the parsing ends and the execution of the sql statement will begin.

  14. Server process will then go to the database buffer cache and check whether the data required by the query is already available in the cache.

  15. If available that data can be returned to the client else it brings the data from the database files.

  16. If sorting and filtering is required  by the query then the pga is utilized along with the temporary tablespace for performing sort run.

  17. After sort run the data is returned to the client and sql*plus client will convert the given data to ascii format and display the data in a tabular format to the users.



How to Enable/Disable ARCHIVELOG Mode in Oracle 11g/12c

When you run the database in NOARCHIVELOG mode, you disable the archiving of the redo log. If you want to take a backup of the database using RMAN, then your database must be in ARCHIVELOG mode.

A database backup, together with online and archived redo log files, guarantees that you can recover all committed transactions in the event of an OS or disk failure.


Below are the steps required to enable archive log mode on an Oracle 10g/11g or 12c database.


Verify the database log mode.

[oracle@orahow ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat Feb 03 04:05:02 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options


SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     25
Current log sequence           27


From the above output you can see that your database is in No Archive Mode. Note that the archive destination is USE_DB_RECOVERY_FILE_DEST. You can determine the actual path by looking at the parameter DB_RECOVERY_FILE_DEST.


SQL> show parameter recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /u01/app/oracle/flash_recovery
                                                 _area
db_recovery_file_dest_size           big integer 2728M


By default, archive logs are written to the flash recovery area, also called the FRA. If you don't want to write archive logs to the FRA, you can set the parameter LOG_ARCHIVE_DEST_n to the new location where you wish to write them.


To set a new archive log destination, you can use the following command.

SQL> alter system set log_archive_dest_1='LOCATION=/u01/app/oracle/oradata/orahow/arch' scope = both;

System altered.

SQL> archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /u01/app/oracle/oradata/orahow/arch
Oldest online log sequence     26
Current log sequence           28
SQL>


Now we shut down the database and bring it back up in mount mode.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area  606806016 bytes
Fixed Size                  1376268 bytes
Variable Size             394268660 bytes
Database Buffers          205520896 bytes
Redo Buffers                5640192 bytes
Database mounted.
SQL>

Now set the database to archive log mode.

SQL> alter database archivelog;

Database altered.

Finally open the database.
SQL> alter database open;

Database altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/orahow/arch
Oldest online log sequence     26
Next log sequence to archive   28
Current log sequence           28


From the above output you can see that database is in archive log mode and automatic archival is also enabled.

To confirm, you can switch the log file and verify that an archive is written to the archive log location.


SQL> alter system switch logfile;

System altered.

SQL> host
[oracle@orahow ~]$ ls /u01/app/oracle/oradata/orahow/arch
1_28_812359664.dbf
[oracle@orahow ~]$ exit
exit


Disabling Archive Log Mode


The following are the steps required to disable archive log mode on an Oracle 10g/11g or 12c database.

Verify the database log mode.

[oracle@orahow ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat Feb 03 04:05:02 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options


SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/orahow/arch
Oldest online log sequence     26
Next log sequence to archive   28
Current log sequence           28


SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.


SQL> startup mount
ORACLE instance started.

Total System Global Area  606806016 bytes
Fixed Size                  1376268 bytes
Variable Size             394268660 bytes
Database Buffers          205520896 bytes
Redo Buffers                5640192 bytes
Database mounted.
SQL>


To disable archive log mode use the below command.

SQL> alter database noarchivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /u01/app/oracle/oradata/orahow/arch
Oldest online log sequence     26
Current log sequence           28
SQL>
As you can see, ARCHIVELOG mode has been disabled.

