Monday, September 28, 2020

Managing Tablespaces and Data Files

Tablespaces and Data Files:

Oracle stores data logically in tablespaces and physically in data files.

  • Tablespaces :

- Can belong to only one database at a time

- Consist of one or more data files

- Are further divided into logical units of storage

  • Data Files :

- Can belong to only one tablespace and one database

- Are a repository for schema object data


Databases, tablespaces, and data files are closely related, but they have important differences:

  • An Oracle Database consists of one or more logical storage units called tablespaces,  which collectively store all of the database's data.
  • Each tablespace in an Oracle Database consists of one or more files called data files, which are physical structures that conform to the operating system on which Oracle is running.
  • A database's data is collectively stored in the data files that constitute each tablespace of the database. 
For example, the simplest Oracle Database would have one tablespace and one data file. Another database can have more tablespaces with more data files.


Types of tablespaces :

  • SYSTEM  Tablespace

- Created with the database

- Required in all databases  

- Contains the data dictionary, including stored program units

- Contains the SYSTEM undo segments

- Should not contain user data, although it is allowed. 

  • Non-SYSTEM tablespace

- Separate segments

- Separate undo, temporary, application data, and application index segments

- Separate data by backup requirements

- Separate dynamic and static data

- Ease space administration

- Control the amount of space allocated to a user

The DBA creates tablespaces for increased control and ease of maintenance.


Creating Tablespaces:

A tablespace is created with the CREATE TABLESPACE command.

Example:

CREATE TABLESPACE userdata DATAFILE

'/u01/oradata/userdata01.dbf' SIZE 100M;

Tablespace creation syntax with more option details:

Syntax: 

CREATE TABLESPACE tablespace_name

[DATAFILE  clause] [MINIMUM EXTENT integer [K|M]]

[BLOCKSIZE integer [K]] [LOGGING | NOLOGGING]

[DEFAULT storage_clause] [ONLINE | OFFLINE ]

[PERMANENT | TEMPORARY] 

[extent_management_clause] [ segment_management_clause]

Where :

tablespace_name: This is the name of the tablespace to be created.

DATAFILE: This specifies the name of the data file or data files that make up the tablespace.

MINIMUM EXTENT: This ensures that every used extent size in the tablespace is a multiple of the integer. Use K or M to specify this size in Kilobytes or Megabytes.

BLOCKSIZE: Specifies a nonstandard block size for the tablespace. In order to specify this clause, you must have the DB_CACHE_SIZE parameter and at least one DB_nK_CACHE_SIZE parameter set, and the integer you specify in this clause must correspond to the setting of one DB_nK_CACHE_SIZE parameter.
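As a rough sketch of the BLOCKSIZE clause (assuming the database's standard block size is not 16 KB; the tablespace name and file path below are made up for illustration), a 16 KB buffer cache is configured first and then the tablespace is created with the matching block size:

SQL> ALTER SYSTEM SET DB_16K_CACHE_SIZE = 16M;

SQL> CREATE TABLESPACE tbs_16k DATAFILE
'/u01/oradata/tbs_16k01.dbf' SIZE 100M
BLOCKSIZE 16K;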

LOGGING: This specifies that, by default, all tables, indexes, and partitions within the tablespace have all changes written to the online redo log files. LOGGING is the default.

NOLOGGING: This specifies that, by default, all tables, indexes, and partitions within the tablespace do not have all changes written to the online redo log files. NOLOGGING affects only certain DML and DDL commands, for example, direct loads.

DEFAULT: This clause specifies the default storage parameters for all objects created in the tablespace.

OFFLINE: This clause makes the tablespace unavailable immediately after creation.

PERMANENT: This specifies that the tablespace can be used to hold permanent objects.

TEMPORARY: This specifies that the tablespace can be used to hold temporary objects, for example, segments used by implicit sorts caused by an ORDER BY clause. You cannot specify EXTENT MANAGEMENT LOCAL or the BLOCKSIZE clause together with TEMPORARY.

extent_management_clause: This clause specifies how the extents of the tablespace are managed.

segment_management_clause: This is relevant only for permanent, locally managed tablespaces. It lets you specify whether Oracle should track the used and free space in the segments in the tablespace using free lists or bitmaps.
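To illustrate the extent_management_clause and segment_management_clause together, here is a rough sketch of a locally managed tablespace (the tablespace name, data file path, and sizes are only examples, not a prescribed configuration):

SQL> CREATE TABLESPACE appdata DATAFILE
'/u01/oradata/appdata01.dbf' SIZE 200M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
SEGMENT SPACE MANAGEMENT AUTO;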

  Review the data file clause syntax below:

datafile_clause := 'filename' [SIZE integer [K|M]] [REUSE] [autoextend_clause]

autoextend_clause := [AUTOEXTEND {OFF | ON [NEXT integer [K|M]] [MAXSIZE {UNLIMITED | integer [K|M]}]}]

filename: This is the name of a data file in the tablespace.

SIZE: This specifies the size of the file. Use K or M to specify the size in kilobytes or megabytes.

REUSE: This allows the Oracle server to reuse an existing file.

autoextend_clause: This clause enables or disables automatic extension of the data file.

NEXT: This specifies the size in bytes of the next increment of disk space to be allocated automatically when more extents are required.

Where:

MAXSIZE: This specifies the maximum disk space allowed for automatic extension of the data file.

UNLIMITED: This specifies that the disk space that can be allocated to the data file or temp file is not limited.
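The following is a small sketch of the data file clause with automatic extension enabled; the path and sizes are only examples. The file starts at 100 MB, grows in 10 MB increments, and is capped at 500 MB:

SQL> CREATE TABLESPACE userdata DATAFILE
'/u01/oradata/userdata01.dbf' SIZE 100M REUSE
AUTOEXTEND ON NEXT 10M MAXSIZE 500M;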


Thank you !! 

Wednesday, September 23, 2020

Archived Redo log File

 Archived Redo Log File :

  • Filled online redo log files can be archived.
  • There are two advantages in running the database in ARCHIVELOG mode and archiving online redo log files:

- Recovery: A database backup together with online and archived redo log files can guarantee recovery of all committed transactions.

- Backup: This can be performed while the database is open

  • By default, the database is created in NOARCHIVELOG mode.

One of the important decisions that a database administrator has to make is whether the database is configured to operate in ARCHIVELOG mode or in NOARCHIVELOG mode.


NOARCHIVELOG MODE :

In NOARCHIVELOG mode, each online redo log file is overwritten after it has been filled and a log switch has occurred. LGWR does not overwrite an online redo log file group until the checkpoint for that group is complete.


ARCHIVELOG MODE :

If the database is configured to run in ARCHIVELOG mode, inactive groups of filled online redo log files must be archived before they can be reused. Because all changes made to the database are recorded in the online redo log files, the database administrator should take physical backups of the archived redo log files. Backups of the archived redo log files are used to recover the database without losing any committed data when database recovery is required.
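To check which mode a database is currently running in, the LOG_MODE column of V$DATABASE can be queried; the output below is only an example:

SQL> SELECT log_mode FROM v$database;

LOG_MODE
------------
ARCHIVELOG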

There are two ways in which online redo log files can be archived:

  • Manually
  • Automatically ( recommended method )
The LOG_ARCHIVE_START initialization parameter indicates whether archiving should be automatic or manual when the instance starts up.

  • TRUE: TRUE indicates that archiving is automatic. ARCn initiates the archiving of the filled log group at every log switch.
  • FALSE: The default value is FALSE, and it indicates that the DBA archives the filled online redo log files manually. The DBA must manually execute a command each time an online redo log file is to be archived. All or specific online redo log files can be archived manually.
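As a rough sketch (the exact steps can vary by Oracle version), the database is switched to ARCHIVELOG mode while it is mounted, and filled log groups can also be archived manually with the ALTER SYSTEM ARCHIVE LOG command:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

SQL> ALTER SYSTEM ARCHIVE LOG ALL;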

Please review some important facts about archived redo log files:

  • Archiving is accomplished automatically by ARCn.
  • Archiving is accomplished manually through SQL statements.
  • When an online redo log file is successfully archived, the following events occur:

- An entry in the control file is made.

- Records the archived log name, log sequence number, and high and low system change numbers (SCN)

  • Filled online redo log files cannot be reused until: 

- A checkpoint has taken place

- File has been archived by ARCn

  • Can be multiplexed
  • Maintained by the DBA


Information about the archiver can be obtained from the V$INSTANCE view.

SQL> SELECT archiver FROM V$INSTANCE;

ARCHIVE

-------

STARTED


The ARCHIVER column can have one of the following statuses:

STOPPED: This status shows that archiving is stopped.

STARTED: This status shows that archiving is enabled and is done by the Oracle background process ARCn.

FAILED: This status shows that the archiver failed to archive a log the last time it ran but will try again within 5 minutes.


Thank you !!



Managing Online Redo Log Files With OMF.

Managing Online Redo Log Files With OMF:

  • Define the DB_CREATE_ONLINE_LOG_DEST_n parameter :
DB_CREATE_ONLINE_LOG_DEST_1

DB_CREATE_ONLINE_LOG_DEST_2

  • A group can be added with no file specification.

ALTER DATABASE ADD LOGFILE;

  • Dropping a Group

ALTER DATABASE DROP LOGFILE GROUP 3;


Define the DB_CREATE_ONLINE_LOG_DEST_n  parameter: 

To create online redo log files to be managed by OMF, the DB_CREATE_ONLINE_LOG_DEST_n parameter must be defined. The parameter must be set for each multiplexed copy identified by the n value.

In the example above, two groups have been created with two members each. The names are generated automatically (such as "ora_1_wo94n2xi.log") and displayed in the alertSID.log. The default size is 100 MB.
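As a rough sketch, the destinations can be defined with ALTER SYSTEM and a group can then be added without any file specification; the directory paths below are only examples:

SQL> ALTER SYSTEM SET DB_CREATE_ONLINE_LOG_DEST_1 = '/u01/oradata/ORCL';
SQL> ALTER SYSTEM SET DB_CREATE_ONLINE_LOG_DEST_2 = '/u02/oradata/ORCL';
SQL> ALTER DATABASE ADD LOGFILE;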


Adding Log File Group:

To create a new group of online redo log files, the DBA uses the ALTER DATABASE ADD LOGFILE command. The command has been modified so that the file specification is not necessary.

In the example above, a log file group with two members is added: one member in the location defined by DB_CREATE_ONLINE_LOG_DEST_1 and one in the location defined by DB_CREATE_ONLINE_LOG_DEST_2.

Unique file names for the log file members are generated automatically and displayed in the alertSID.log. The default size is 100 MB.


Dropping a Group :

The example above drops log file group 3 along with the operating system files associated with each OMF log file member in the group.


Archived Redo Log Files and OMF :

Archived redo log files cannot be OMF.


Obtaining Group and Member Information:

Information about a group and its members can be obtained by querying the following views. 

  • V$LOG
  • V$LOGFILE
V$LOG  view :

The following query returns information about the online redo log files from the control file:

SQL> SELECT group#, sequence#, bytes, members, status FROM v$log;

    GROUP#  SEQUENCE#      BYTES    MEMBERS STATUS

-------- ---------- ---------- ---------- ----------------

       1     403637  209715200          2 CURRENT

       2     403634  209715200          2 INACTIVE

       3     403635  209715200          2 INACTIVE

       4     403636  209715200          2 INACTIVE

       5     403632  209715200          2 INACTIVE

       6     403633  209715200          2 INACTIVE

6 rows selected.


The following items are the most common values for the Status column : 

  • UNUSED: This status indicates that the online redo log file group has never been written to. This is the state of the online redo log file group that was just added.
  • CURRENT: This status indicates the current online redo log file group. This implies that the online redo log file group is active.
  • ACTIVE: This status indicates that the online redo log file group is active but it is not the current online redo log file group. It is needed for crash recovery. It may be in use for block recovery. It may or may not be archived.
  • CLEARING: This status indicates that the log is being re-created as an empty log after an ALTER DATABASE CLEAR LOGFILE command. After the log is cleared, the status changes to UNUSED.
  • CLEARING_CURRENT: This status indicates that the current log file is being cleared of a closed thread. The log can stay in this status if there is some failure in the switch, such as an input/output (I/O) error writing the new log header.
  • INACTIVE: This status indicates that the online redo log file group is no longer needed for instance recovery. It may or may not be archived.

V$LOGFILE  View :

To obtain the names of all the members of a group, query the V$LOGFILE view.

SQL> SELECT * FROM V$LOGFILE;

GROUP# STATUS  TYPE    MEMBER        IS_RECOVERY_DEST_FILE

------- ------- ------- ----------------- ---------------

2            ONLINE  /u01/oradata/ORCL/log02a.rdo       NO

2            ONLINE  /u01/oradata/ORCL/log02b.rdo       NO

1            ONLINE  /u01/oradata/ORCL/log01a.rdo       NO

1            ONLINE  /u01/oradata/ORCL/log01b.rdo       NO


4 rows selected.


 The value of the STATUS column could be one of the following.

  • INVALID: This status indicates that the file is inaccessible.
  • STALE: This status indicates that the contents of the file are incomplete.
  • DELETED: This status indicates that the file is no longer used.
  • Blank indicates that the file is in use. 

Thank You !!

Monday, September 14, 2020

Adding, Dropping, Clearing, and Relocating or Renaming Online Redo Log Files and Redo Log File Members

Adding Online Redo Log File Groups: 

 

Generally, we use the command below to add redo log file groups. Suppose you have two redo log groups in your environment and, due to heavy transaction volume, database performance is suffering. In this scenario, you can add online redo log file groups to keep the database running smoothly, ensure database availability, and reduce the load on the Oracle processes.

Command Used for Online Redo log file Group Creation :

SQL> ALTER DATABASE ADD LOGFILE GROUP 3

('$HOME/ORADATA/u01/log3a.log',

'$HOME/ORADATA/u02/log3b.log') SIZE 1M;

In some cases, you might need to create additional log file groups. For example, adding groups can solve availability problems. To create a new group of online redo log files, use the SQL command above with paths appropriate for your environment.

Redo Log File Group creation syntax is as below:

SQL> ALTER DATABASE ADD LOGFILE [GROUP integer] filespec

[, [GROUP integer] filespec] ... ;

You specify the name and location of the members with the file specification. The value of the GROUP parameter can be selected for each online redo log file group. If you omit this parameter, the Oracle server generates its value automatically.


Adding Online Redo Log File Members :

You can add new members to existing online redo log file groups using the following command.

SQL> ALTER DATABASE ADD LOGFILE MEMBER

'$HOME/ORADATA/u04/log1c.log' TO GROUP 1,

'$HOME/ORADATA/u04/log2c.log' TO GROUP 2,

'$HOME/ORADATA/u04/log3c.log' TO GROUP 3;

Use the fully specified name of the log file members; otherwise, the files are created in a default directory of the database server.

If the file already exists, it must have the same size, and you must specify the REUSE option. You can identify the target group either by specifying one or more members of the group or by specifying the group number.
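For example, a sketch of adding a member that reuses an existing file (the file name below is only an example):

SQL> ALTER DATABASE ADD LOGFILE MEMBER
'$HOME/ORADATA/u04/log1d.log' REUSE TO GROUP 1;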


Dropping Online Redo Log File Groups :

Generally, we use the command below to drop an online redo log file group:

SQL> ALTER DATABASE DROP LOGFILE GROUP 3;

To increase or decrease the size of the online redo log file groups, add a new online redo log file group (with the new size) and then drop the old one.
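A rough sketch of that add-then-drop sequence follows; the group numbers, paths, and size are only examples, and the old group must not be CURRENT or ACTIVE when it is dropped:

SQL> ALTER DATABASE ADD LOGFILE GROUP 4
('$HOME/ORADATA/u01/log4a.log',
'$HOME/ORADATA/u02/log4b.log') SIZE 200M;

SQL> ALTER SYSTEM SWITCH LOGFILE;

SQL> ALTER DATABASE DROP LOGFILE GROUP 1;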

An entire online redo log file group can be dropped with the ALTER DATABASE DROP LOGFILE command. The syntax is as follows:

ALTER DATABASE DROP LOGFILE {GROUP integer | ('filename'[, 'filename']...)};

 

Restrictions

  • An instance requires at least two groups of online redo log files.
  • An active or current group cannot be dropped.
  • When an online redo log file Group is dropped, the operating system files are not deleted.


Dropping Online Redo Log File Members :

We use the command below to drop an online redo log file member:

SQL>ALTER DATABASE DROP LOGFILE MEMBER 

'$HOME/ORADATA/u04/log3c.log';

You may want to drop an online redo log file member because it is invalid. Use the following ALTER DATABASE DROP LOGFILE MEMBER command if you want to drop one or more specific online redo log file members.

Command Syntax :

ALTER DATABASE DROP LOGFILE MEMBER 'filename' ;


Restrictions:

  • If the member you want to drop is the last valid member of the group, you cannot drop that member.
  • If the group is current, you must force a log switch before you can drop the member (see the sketch after this list).
  • If the database is running in ARCHIVELOG mode and the log file group to which the member belongs is not archived, then the member cannot be dropped.
  • When an online redo log file member is dropped, the operating system file is not deleted if you are not using the OMF feature.
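As referenced in the restrictions above, here is a small sketch of forcing a log switch before dropping a member of the current group (the file name is only an example):

SQL> ALTER SYSTEM SWITCH LOGFILE;

SQL> ALTER DATABASE DROP LOGFILE MEMBER
'$HOME/ORADATA/u04/log3c.log';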

Relocating or Renaming the Online Redo Log Files :

Relocate or rename online redo log files in one of the following two ways:

  • ALTER DATABASE RENAME FILE command

-  Shut down the database.

-  Copy the online redo log files to the new location.

-  Place the database in MOUNT mode.

-  Execute the command:

SQL> ALTER DATABASE RENAME FILE

'$HOME/ORADATA/u01/log2a.log' TO '$HOME/ORADATA/u02/log1c.log';

- Open the database for normal operation.

  • Add new members and drop old members.


The location of online redo log files can be changed by renaming them. Before renaming the online redo log files, ensure that the new online redo log files exist. The Oracle server changes only the pointers in the control file; it does not physically rename or create any operating system files.


Steps for Relocating or Renaming Online Redo Log Members:

1. Shut down the database 

SQL>  SHUTDOWN IMMEDIATE

2. Copy the online redo log files to the new location.

3. Start up the database in MOUNT mode, but do not open it.

 SQL>  CONNECT / AS SYSDBA

 SQL> STARTUP MOUNT

4. Rename the online redo log members using the ALTER DATABASE RENAME FILE command.

SQL> ALTER DATABASE RENAME FILE 

'$HOME/ORADATA/u01/log2a.log' TO '$HOME/ORADATA/u02/log1c.log';

5. Open the database for normal operation.

SQL> ALTER DATABASE OPEN;

 

Clearing Online Redo Log Files:

Online redo log files can be cleared in the following ways:

  • ALTER DATABASE CLEAR LOGFILE command can be used to reinitialize an online redo log file.
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 2 ;

  • Use the UNARCHIVED keyword to avoid archiving the corrupted online redo log file.

SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2 ;

 

An online redo log file might become corrupted while the database is open and ultimately stop database activity because archiving cannot continue. In this situation, the ALTER DATABASE CLEAR LOGFILE command can be used to reinitialize the online redo log file without shutting down the database.

This command can overcome two situations where dropping online redo log files is not possible :
  • If there are only two log groups.
  • The corrupt online redo log file belongs to the current group
If the corrupt online redo log file has not been archived, use the UNARCHIVED keyword in the command to clear the corrupted online redo log files and avoid archiving them. The cleared online redo log files are available for use even though they were not archived.

If you clear a log file that is needed for recovery of a backup, then you can no longer recover from that backup. Oracle writes a message in the alert log describing the backups from which you cannot recover, so you should perform a fresh backup of the database if this situation arises.

 
Online Redo Log File Configuration:

To determine the appropriate number of online redo log files for a database instance, you must test different configurations.

In some cases, a database instance may require only two groups. In another situation, a database instance may require additional groups to guarantee that the groups are always available to LGWR. 

For example, if the messages in the LGWR trace file or in the alert log file indicate that LGWR frequently must wait for a group because a checkpoint has not completed or a group has not been archived, you should add groups.


Location of Online Redo Log File:

When you multiplex the online redo log files, place members of a group on different disks. That way, even if one member is unavailable, the instance does not shut down as long as the other members remain available.

Separate archive log files and online redo log files on different disks to reduce contention between the ARCn and LGWR background processes.

Data files and online redo log files should be placed on different disks to reduce LGWR and DBWn contention and to reduce the risk of losing both data files and online redo log files in the event of media failure.


Sizing Online Redo Log Files :

The minimum size of an online redo log file is 50 KB, and the maximum size is specific to the operating system. Members of different groups can have different sizes; however, there is no benefit in having different-sized groups.

Different-sized groups should exist only temporarily, for example while you are changing the size of the members of the online redo log file groups. In that case, you create new online redo log file groups with the new size and then remove the old groups.

The following situations might influence the configuration of the online redo log files: 

  • Number of  log switches and checkpoints
  • Number and amount of redo entries
  • Amount of space on the storage medium (for example, on tape if archiving is enabled)


Thank you !!    

Tuesday, September 8, 2020

Online Redo Log Files

Online Redo Log Files:

Online Redo log files have the following characteristics : 

  • Record all changes made to data
  • Provide a Recovery mechanism
  • Can be organized into groups
  • At least two groups required 

Online redo log files provide the means to redo transactions in the event of a database failure. Every transaction is written synchronously to the redo log buffer and then flushed to the online redo log files in order to provide a recovery mechanism in case of media failure (with exceptions such as direct-load inserts into objects with the NOLOGGING clause enabled).

This includes transactions that have not yet been committed, undo segment information, and schema and object management statements. Online redo log files are used in situations such as an instance failure to recover committed data that has not been written to the data files. The online redo log files are used only for recovery.

 

Structure of the Online Redo Log Files:


[Figure: Online Redo Log File Structure]

The DBA can set up the Oracle database to maintain copies of online redo log files to avoid losing database information due to a single point of failure.

Online Redo Log File Groups :

  • A set of identical copies of online redo log files is called an online redo log file group.
  • The LGWR background process concurrently writes the same information to all online redo log files in a group.
  • The Oracle Server needs a minimum of two online redo log file groups for the normal operation of a  database.

Online Redo Log File Members :

  • Each online redo log file in a group is called a member.
  • Each member in a group has the same log sequence number and is the same size. The log sequence number is assigned each time that the Oracle server writes to a log group to uniquely identify each online redo log file.
  • The current log sequence number is stored in the control file and in the header of all the data files.  

Creating Initial Online Redo Log Files :

The initial set of online redo log file groups and members is created during database creation.

The following parameters limit the number of online redo log files:

  • The MAXLOGFILES parameter in the CREATE DATABASE command specifies the absolute maximum of online redo log file groups.
  • The maximum and the default value for MAXLOGFILES are dependent on your operating system.
  • The MAXLOGMEMBERS parameter used in the CREATE DATABASE command determines the maximum number of members per group. The maximum and default values for MAXLOGMEMBERS are dependent on your operating system. A sketch of both parameters in a CREATE DATABASE command follows this list.
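As mentioned in the list above, here is a partial sketch of how MAXLOGFILES and MAXLOGMEMBERS appear in a CREATE DATABASE command. Most other clauses are omitted, and the database name, paths, and limits are only examples:

SQL> CREATE DATABASE testdb
MAXLOGFILES 16
MAXLOGMEMBERS 3
LOGFILE
GROUP 1 ('/u01/oradata/testdb/redo01a.log',
'/u02/oradata/testdb/redo01b.log') SIZE 100M,
GROUP 2 ('/u01/oradata/testdb/redo02a.log',
'/u02/oradata/testdb/redo02b.log') SIZE 100M
DATAFILE '/u01/oradata/testdb/system01.dbf' SIZE 500M;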


How Online Redo Log Files Work:

  • Online Redo Log  Files are used in a cyclic fashion.
  • When an online redo log file group is full, LGWR (a background process) moves to the next log group.

- This is called a log switch.

- Checkpoint operations also occur.

- Information is written to the control file.


The Oracle server sequentially records all changes made to the database in the redo log buffer. The redo entries are written from the redo log buffer to the current online redo log file group by the LGWR process. LGWR writes in the following situations:

  • When a transaction commits
  • When the redo log buffer is one-third full
  • When there is more than a megabyte of changed records in the redo log buffer
  • Before DBWn writes modified blocks in the database buffer cache to the data files
Online redo log files are used in a cyclic fashion. Each online redo log file group is identified by a log sequence number that is overwritten each time the log is reused.

Log Switches : 

LGWR writes to the online redo log files sequentially. When the current online redo log file group is filled, LGWR begins writing to the next group. This is called a log switch. 

When the last available online redo log file group is filled, LGWR returns to the first online redo log file group and starts writing again.

Checkpoints :

During a checkpoint the following activities happen; these can be considered the characteristics of a checkpoint:

  • DBWn writes a number of dirty database buffers, which are covered by the log that is being checkpointed, to the data files.
  • The checkpoint background process (CKPT) updates the control file to record that it has completed a checkpoint successfully. If the checkpoint is caused by a log switch, CKPT also updates the headers of the data files.
Checkpoints can occur for all data files in the database or only for specific data files.

Checkpoints occur in the following situations, for example:

  • At every log switch
  • When an instance has been shut down with the NORMAL, TRANSACTIONAL, or IMMEDIATE option
  • When forced by setting the FAST_START_MTTR_TARGET initialization parameter
  • When manually requested by the database administrator
  • When the ALTER TABLESPACE [OFFLINE NORMAL | READ ONLY | BEGIN BACKUP] command causes checkpoints on specific data files
Information about each checkpoint is recorded in the alert_SID.log file if the LOG_CHECKPOINTS_TO_ALERT initialization parameter is set to TRUE. The default value of FALSE for this parameter does not log checkpoints.
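As a small sketch, this checkpoint logging can be turned on dynamically:

SQL> ALTER SYSTEM SET LOG_CHECKPOINTS_TO_ALERT = TRUE;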


Forcing Log Switches and Checkpoints :

  • Forcing a log switch by using a command:

 SQL> ALTER SYSTEM SWITCH LOGFILE;

  • A checkpoint can be forced by:

- Setting the FAST_START_MTTR_TARGET parameter:

FAST_START_MTTR_TARGET = 600

- Issuing the ALTER SYSTEM CHECKPOINT command:

SQL> ALTER SYSTEM CHECKPOINT;


Log switches and checkpoints are performed automatically at certain points in the operation of the database, as identified previously. However, a DBA can force a log switch or a checkpoint to occur.

Forcing Checkpoint :

The FAST_START_MTTR_TARGET parameter replaces the deprecated parameters:

  • FAST_START_IO_TARGET
  • LOG_CHECKPOINT_TIMEOUT
These deprecated parameters must not be used if the FAST_START_MTTR_TARGET parameter is used.

In the example above, the FAST_START_MTTR_TARGET parameter has been set to 600 seconds, so instance recovery should not take more than 600 seconds. The database will adjust the other parameters to meet this goal.
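If an SPFILE is in use, a sketch of setting the target from the example above dynamically would look like this:

SQL> ALTER SYSTEM SET FAST_START_MTTR_TARGET = 600 SCOPE=BOTH;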


Thank you !!

Friday, September 4, 2020

Control File, Multiplexing of Control File

Hey Folks,

We shall take a deep dive into the control file: what it does, why it is such an important component, and how it is used. We shall also cover the control file multiplexing method and its steps.

 

Control File :

Control file characteristics are as follows:

  • A small binary file
  • Defines the current state of the physical database
  • Maintains the integrity of the database
  • Required :

-  At MOUNT state during database startup

-  To operate the database

  • Linked to a single database
  • Loss may require recovery
  • Sized initially by CREATE DATABASE

The control file is a small binary file that is necessary for the database to start and operate successfully. Each control file is associated with only one Oracle Database. Before a database is opened, the control file is read to determine whether the database is in a valid state to use.

The control file is updated continuously by the Oracle server while the database is in use, so it must be available for writing whenever the database is open. The information in the control file can be modified only by the Oracle server; no database administrator or end user can edit the control file.

If for some reason the control file is not accessible, the database does not function properly. If all copies of a Database's control file are lost, the database must be recovered before it can be opened.

 

Control File Contents : 

A control file contains the following entries:

  • Database name and identifier
  • Time stamp of the Database Creation
  • Tablespace names
  • Names and locations of data files and online redo log files
  • Current Online redo log file sequence number
  • Checkpoint information
  • Begin and End of UNDO segments
  • Redo log Archive information
  • Backup Information

The information in the control file is recorded or updated as follows:

- The database name is taken from either the name specified by the initialization parameter DB_NAME or the name used in the CREATE DATABASE statement.

- A database identifier is recorded when the database is created.

- Timestamp of database creation is also recorded at the time of database creation.

- Name and location of Associated data files and online redo log files are updated when a data file or online redo log file is added to, renamed in, or dropped from the database.

- Tablespace information is updated as tablespaces are added or dropped.  

- Online redo log file history is recorded during log switches.

- Location and status of archive logs are recorded when archiving occurs.

- Location and status for backups are recorded by the Recovery Manager (RMAN) utility.

- Current log sequence number is recorded when log switches occur.

- Checkpoint Information is recorded as checkpoints are made. 

 

The control file consists of two types of sections:

  • Reusable
  • Not reusable

Reusable sections store RMAN information, such as backup data file names and backup online redo log file names. They are used in a circular manner and can be reused only by RMAN.


 Multiplexing the Control File:

To safeguard against a single point of failure, it is strongly recommended that the control file be multiplexed, storing each copy on a different physical disk. If a control file is lost, a multiplexed copy of the control file can be used to restart the instance without database recovery.

Control files can be multiplexed up to eight times by:

  • Creating multiple control files when the database is created, by including the control file names and full paths in the initialization parameter file:

CONTROL_FILES = $HOME/ORADATA/u01/ctrl01.ctl,

                                                  $HOME/ORADATA/u02/ctrl02.ctl

  • Adding a control file after the database is created.

 

Backing Up the Control File :

Because the control file records the physical structure of the database, you should immediately make a backup of your control file after making changes to the physical structure of the Database.

 

Multiplexing the Control File When Using SPFILE :

For Multiplexing the control file using SPFILE, Below are the steps have to follow :

  1. Alter the SPFILE : 

Using the ALTER SYSTEM SET command, alter the SPFILE to include a list of all the control files to be used: the main control file and the multiplexed copies. The command is as follows:

SQL> ALTER SYSTEM SET CONTROL_FILES =

'$HOME/ORADATA/u01/ctrl01.ctl', '$HOME/ORADATA/u02/ctrl02.ctl' SCOPE=SPFILE;

      2. Shut Down the Database : 

Shut down the database in order to create the additional control files on the operating system.

     SQL>  shutdown immediate

      3. Create Additional control files : 

Create the additional control files as required using the operating system copy command, and verify that the files have been created in the appropriate directories.

      cp $HOME/ORADATA/u01/ctrl01.ctl $HOME/ORADATA/u02/ctrl02.ctl

      4. Start The Database: 

When the database is started, the SPFILE is read and the Oracle server maintains all the control files listed in the CONTROL_FILES parameter.

      SQL> startup


Multiplexing the Control File When Using PFILE:

  1. Shut Down the Database:

Shut down the database in order to create additional control files on the operating system.

SQL> shutdown immediate

       2. Create Additional control files:

Create the additional control files as required using the operating system copy command, and verify that the files have been created in the appropriate directories.

cp $HOME/ORADATA/u01/ctrl01.ctl $HOME/ORADATA/u02/ctrl02.ctl

       3. Add control file names to PFILE : 

Alter the PFILE to add the names of all the control files.

CONTROL_FILES = $HOME/ORADATA/u01/ctrl01.ctl,

                                       $HOME/ORADATA/u02/ctrl02.ctl  

       4. Start the database :

When the database is started, the PFILE is read and the Oracle server maintains all the control files listed in the CONTROL_FILES parameter.

SQL> startup

 

Managing Control File with OMF :

  • OMF (Oracle Managed Files) control files are created if the CONTROL_FILES parameter is not specified.
  • Locations are defined by  DB_CREATE_ONLINE_LOG_DEST_n.
  • Names are uniquely generated and displayed in the  alertSID.log.

Control files are automatically created as OMF at database creation time if the CONTROL_FILES parameter is not specified in the initialization parameter file.

If an init.ora file is used, the CONTROL_FILES parameter must then be set to the OMF-generated names, which can be found by selecting from V$CONTROLFILE or from the alertSID.log. If an SPFILE is used, the CONTROL_FILES parameter is automatically set and saved when the database is created.

The locations of the control files are determined by the DB_CREATE_ONLINE_LOG_DEST_n parameter. If this parameter is not set, the control file location is defined by the DB_CREATE_FILE_DEST parameter. If neither of these parameters is set, the control files are not Oracle managed. If the control files are not Oracle managed, then the CONTROL_FILES parameter must be set in the initialization parameter file or an error will be received.
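As a rough sketch, the relevant init.ora entries before database creation might look like the following (the directory paths are only examples); with these set and CONTROL_FILES omitted, the control files are created as Oracle managed files:

DB_CREATE_ONLINE_LOG_DEST_1 = /u01/oradata/ORCL
DB_CREATE_ONLINE_LOG_DEST_2 = /u02/oradata/ORCL
DB_CREATE_FILE_DEST = /u01/oradata/ORCL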

The control file names are uniquely generated (for example, ora_cmr7t30p.ctl) and displayed in the alertSID.log when the files are created.


Obtaining Control File Information :

Information about control file status and locations can be retrieved by querying the following views:

  • V$CONTROLFILE: Lists the name and status of all the control files associated with the instance.
  • V$PARAMETER: Lists status and location of all parameters.
  • V$CONTROLFILE_RECORD_SECTION: Provides information about the control file record sections
  • SHOW PARAMETER CONTROL_FILES: Lists the Name, Status, and Location of the control files.

See below a few examples of obtaining control file information:

  •  To obtain the locations and names of the control files, query the V$CONTROLFILE view:

 SQL> SELECT NAME FROM V$CONTROLFILE; 

NAME

----------------------------------------

/u02/oraprod/proddata/cntrl01.dbf

/u02/oraprod/proddata/cntrl02.dbf

/u02/oraprod/proddata/cntrl03.dbf

 

  • You can also query the V$PARAMETER view.

         SQL> SELECT name, value from V$PARAMETER where name =

'control_files';

NAME            VALUE

--------------- --------------------------------------

control_files   /u02/oraprod/proddata/cntrl01.dbf,

                    /u02/oraprod/proddata/cntrl02.dbf, 

                    /u02/oraprod/proddata/cntrl03.dbf                                                                           

 

  • To obtain information about the different sections of the control file, query the V$CONTROLFILE_RECORD_SECTION view:

SQL> SELECT type, record_size, records_total, records_used FROM v$controlfile_record_section WHERE TYPE='DATAFILE';

TYPE              RECORD_SIZE RECORDS_TOTAL RECORDS_USED

----------------- ----------- ------------- ------------

DATAFILE             520           512           62

 

The RECORDS_TOTAL column specifies the number of records allocated for a specific section.


  •  The SHOW PARAMETER command can also be used to find the location of the control files:

 SQL> SHOW PARAMETER control_file;

NAME                       TYPE        VALUE

-------------------------- ----------- ------------------------

control_file_record_keep_time  integer     7

control_files     string      /u02/oraprod/proddata/cntrl01.dbf,

                                  /u02/oraprod/proddata/cntrl02.dbf,                                   /u02/oraprod/proddata/cntrl03.dbf                                 

Information in several dynamic performance views is obtained from the control file. See the list below for examples:

  • V$BACKUP
  • V$DATAFILE
  • V$TEMPFILE
  • V$TABLESPACE
  • V$ARCHIVE
  • V$LOG
  • V$LOGFILE
  • V$LOGHIST
  • V$ARCHIVED_LOG
  • V$DATABASE

 

Thank you !!