Tuesday, December 4, 2018

Switchover and Switchback between Primary RAC and Physical Standby (Single Instance)




Check the instances in Primary RAC
=====================================


[oracle@rac-p1 ~]$ ps -ef | grep pmon
oracle    4770     1  0 13:16 ?          00:00:00 asm_pmon_+ASM1
oracle    7833     1  0 13:18 ?          00:00:00 ora_pmon_racp1
oracle   22071 21019  0 15:13 pts/1     00:00:00 grep pmon
[oracle@rac-p1 ~]$

[oracle@rac-p1 ~]$ srvctl status database -d racp
Instance racp1 is running on node rac-p1
Instance racp2 is running on node rac-p2
[oracle@rac-p1 ~]$

Shut down the 2nd instance of Primary RAC
============================================
Note: For a manual (SQL*Plus-based) switchover, all primary instances except one must be shut down first.


[oracle@rac-p1 ~]$ srvctl stop instance -d racp -i racp2
[oracle@rac-p1 ~]$

[oracle@rac-p1 ~]$ srvctl status database -d racp
Instance racp1 is running on node rac-p1
Instance racp2 is not running on node rac-p2
[oracle@rac-p1 ~]$

Login to Primary RAC and start the "switchover" process
=====================================================


[oracle@rac-p1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Dec 4 15:14:27 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> select open_mode,database_role from gv$database;

OPEN_MODE            DATABASE_ROLE
-------------------- ----------------
READ WRITE           PRIMARY

SQL> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
NOT ALLOWED

SQL> show parameter log_archive_dest_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string

SQL> alter system set log_archive_dest_2='SERVICE="racs" LGWR ASYNC NOAFFIRM 
VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) 
delay=0 optional compression=disable max_failure=0 
max_connections=1 reopen=300 db_unique_name="racs" net_timeout=30' scope=both sid='*';

System altered.
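
Before committing to the switchover, it is worth confirming that the destination just configured is valid; a minimal check, assuming dest_id 2 is the standby destination set above (STATUS should be VALID and the ERROR column empty):

SQL> select dest_id, status, error from v$archive_dest_status where dest_id = 2;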

Note-1: If the DG broker is in place, some parameters must be set on both the Primary RAC and the Physical Standby first, and the switchover itself is best driven through the broker (see the sketch below).
Note-2: If SWITCHOVER_STATUS shows "SESSIONS ACTIVE", use: alter database commit to switchover to physical standby with session shutdown;
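
For Note-1, a rough outline of the same switchover driven through DGMGRL; this is only a sketch and assumes a broker configuration already exists with the two databases registered as racp and racs (supply your own SYS credentials when prompted):

[oracle@rac-p1 ~]$ dgmgrl sys@racp
DGMGRL> show configuration;
DGMGRL> switchover to racs;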

SQL> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
TO STANDBY

SQL> alter database commit to switchover to physical standby;
Database altered.

SQL> shut immediate;
ORA-01012: not logged on

[oracle@rac-p1 ~]$ . oraenv
ORACLE_SID = [racp1] ?
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@rac-p1 ~]$

[oracle@rac-p1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Tue Dec 4 15:21:25 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area  943718400 bytes
Fixed Size                  2931136 bytes
Variable Size             411043392 bytes
Database Buffers          524288000 bytes
Redo Buffers                5455872 bytes
Database mounted.

SQL> select db_unique_name,open_mode,database_role from gv$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
racp                           MOUNTED              PHYSICAL STANDBY

SQL> alter database open;
Database altered.

SQL> alter database recover managed standby database disconnect from session;
Database altered.

SQL> select db_unique_name,open_mode,database_role from gv$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
racp                           READ ONLY WITH APPLY PHYSICAL STANDBY
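
To confirm that managed recovery is actually running on the new standby, the MRP process can be checked; a small sketch (status and sequence values are environment-specific):

SQL> select process, status, sequence# from v$managed_standby where process like 'MRP%';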

Switchover Single Instance from Physical Standby to Primary role
====================================================================


SQL> select db_unique_name,open_mode,database_role from gv$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
racs                           READ ONLY WITH APPLY PHYSICAL STANDBY

SQL> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
TO PRIMARY

SQL> alter database commit to switchover to primary;
Database altered.

SQL> shut immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

SQL> startup mount;
ORACLE instance started.

Total System Global Area  3338665984 bytes
Fixed Size                   2929888 bytes
Variable Size              805309216 bytes
Database Buffers          2516582400 bytes
Redo Buffers                13844480 bytes
Database mounted.

SQL> select db_unique_name,open_mode,database_role from gv$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
racs                           MOUNTED              PRIMARY

SQL> alter database open;
Database altered.

SQL> select db_unique_name,open_mode,database_role from gv$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
racs                           READ WRITE           PRIMARY
 

Checking the transactions from Single Instance (Primary Database) to RAC (Physical Standby Database)
=====================================================================================================


SQL> insert into scott.dept values (90,'ORACLE','US');
1 row created.

SQL> commit;
Commit complete.

SQL> alter system switch logfile;
System altered.

SQL> alter system switch logfile;
System altered.

SQL> show parameter log_archive_dest_2

NAME                                  TYPE        VALUE
------------------------------------  ----------- ------------------------------
log_archive_dest_2                    string

SQL> alter system set log_archive_dest_2='SERVICE="racp" LGWR ASYNC NOAFFIRM 
VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) 
delay=0 optional compression=disable max_failure=0 max_connections=1 
reopen=300 db_unique_name="racp" net_timeout=30' scope=both;
System altered.

SQL> alter system switch logfile;
System altered.

SQL> alter system switch logfile;
System altered.

Check the transactions in RAC (Physical Standby Database)
============================================================
SQL> select * from scott.dept where deptno=90;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        90 ORACLE         US
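
Beyond spot-checking a row, redo shipping can be verified end to end by comparing the highest sequence received with the highest sequence applied on the standby; a hedged sketch:

SQL> select max(sequence#) "RECEIVED" from v$archived_log;
SQL> select max(sequence#) "APPLIED" from v$archived_log where applied = 'YES';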


Switchback Single Instance (racs) from Primary role to Physical Standby Database role
========================================================================================


SQL> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
TO STANDBY

SQL> select db_unique_name,open_mode,database_role from v$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
racs                           READ WRITE           PRIMARY

SQL> alter database commit to switchover to physical standby;
Database altered.

SQL> shutdown immediate;
ORA-01012: not logged on

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@rac-stdby ~]$

[oracle@rac-stdby ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Dec 4 15:33:46 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area  3338665984 bytes
Fixed Size                   2929888 bytes
Variable Size              805309216 bytes
Database Buffers          2516582400 bytes
Redo Buffers                13844480 bytes
Database mounted.

SQL> alter database open;
Database altered.

SQL> select db_unique_name,open_mode,database_role from v$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
racs                           READ ONLY            PHYSICAL STANDBY

SQL> alter database recover managed standby database disconnect from session;
Database altered.

SQL> select db_unique_name,open_mode,database_role from v$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
racs                           READ ONLY WITH APPLY PHYSICAL STANDBY

Switchback Primary RAC from Physical Standby Database role to Primary Database role
======================================================================================


[oracle@rac-p1 ~]$ srvctl status database -d racp
Instance racp1 is running on node rac-p1
Instance racp2 is not running on node rac-p2
[oracle@rac-p1 ~]$ exit

SQL> select db_unique_name,open_mode,database_role from gv$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
racp                           READ ONLY WITH APPLY PHYSICAL STANDBY

SQL> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
TO PRIMARY

SQL> alter database commit to switchover to primary;
Database altered.

SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

SQL> startup mount;
ORACLE instance started.

Total System Global Area   943718400 bytes
Fixed Size                   2931136 bytes
Variable Size              411043392 bytes
Database Buffers           524288000 bytes
Redo Buffers                 5455872 bytes
Database mounted.

SQL> select db_unique_name,open_mode,database_role from gv$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
racp                           MOUNTED              PRIMARY

SQL> alter database open;
Database altered.

SQL> select db_unique_name,open_mode,database_role from gv$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
racp                           READ WRITE           PRIMARY

SQL> !

Note: Once the switchover and switchback are done, start the second instance of the Primary RAC.

[oracle@rac-p1 ~]$ srvctl status database -d racp
Instance racp1 is running on node rac-p1
Instance racp2 is not running on node rac-p2
[oracle@rac-p1 ~]$

[oracle@rac-p1 ~]$ srvctl start instance  -d racp -i racp2
[oracle@rac-p1 ~]$

[oracle@rac-p1 ~]$ srvctl status database -d racp
Instance racp1 is running on node rac-p1
Instance racp2 is running on node rac-p2
[oracle@rac-p1 ~]$

Check the transactions from Primary RAC to Physical Standby (Single Instance)
================================================================================
SQL> select db_unique_name,open_mode,database_role from gv$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
racp                           READ WRITE           PRIMARY
racp                           READ WRITE           PRIMARY

SQL> insert into scott.dept values (92,'ORACLE','US');
1 row created.

SQL> commit;
Commit complete.

SQL> alter system switch all logfile;
System altered.

SQL> alter system switch all logfile;
System altered.

Login to Single Instance (Physical Standby Database)
======================================================= 

SQL> select db_unique_name,open_mode,database_role from v$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
racs                           READ ONLY WITH APPLY PHYSICAL STANDBY

SQL> select * from scott.dept where deptno=92 order by deptno;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        92 ORACLE         US

SQL>
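
As a final sanity check on the standby, the transport and apply lag should both be close to zero; a minimal query (output depends on the environment):

SQL> select name, value, time_computed from v$dataguard_stats where name in ('transport lag', 'apply lag');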

Hope it helps. I will try the same process using 'dgmgrl' and share the results.

Comments are welcome; I will update the post accordingly.

Monday, August 6, 2018

Upgrading Grid Infrastructure and RDBMS from Oracle 12cR2 (12.2.0) to Oracle 18c on Oracle Exadata X7-2


I have covered the major steps here.

Please refer to the following for more details and prerequisites:
https://mikedietrichde.com/2018/06/18/upgrade-oracle-12-2-0-1-to-oracle-database-18c-on-premises/

Patches to apply before upgrading Oracle GI and DB to 18c or downgrading to previous release (Doc ID 2414935.1)

1. If you have applied the Jan 2018 PSU patches below on Oracle 12cR2 (12.2.0), you are good to go; they include patch "27006180".
 a. p27100009_122010_Linux-x86-64.zip (GI patch)
 b. p27105253_122010_Linux-x86-64.zip (RDBMS patch)

[grid@rac1-12cr2 OPatch]$ ./opatch lsinventory | grep 27006180
     27068526, 27006180, 26755613, 26433979, 25698636, 25644425, 26664476

Note: If the above patches are not applied, the upgrade prerequisite screen will require you to apply the patch below:
 a. p27006180_122010_Linux-x86-64.zip
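
Before launching the 18c installer, the presence of fix 27006180 can be confirmed in both homes; a sketch, assuming the 12.2 GI home path shown later in this post and the DB home taken from $ORACLE_HOME:

[grid@rac1-12cr2 ~]$ /u01/app/12.2.0/grid/OPatch/opatch lsinventory | grep 27006180
[oracle@rac1-12cr2 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory | grep 27006180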



After Jan 2018 PSU Patch Installation

Node-1: rac1-12cr2
=====================
[root@rac1-12cr2 ~]# sh /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac1-12cr2 ~]#
[root@rac1-12cr2 ~]# exit

Node-2: rac2-12cr2
=====================
[root@rac2-12cr2 ~]# sh /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac2-12cr2 ~]#

Node-1: rac1-12cr2 (rootupgrade.sh)
=====================================
[root@rac1-12cr2 ~]# sh /u01/app/18.0.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/18.0.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/18.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac1-12cr2/crsconfig/rootcrs_rac1-12cr2_2018-08-06_10-30-08PM.log
2018/08/06 22:30:32 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2018/08/06 22:30:32 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/08/06 22:32:14 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/08/06 22:32:14 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2018/08/06 22:32:26 CLSRSC-595: Executing upgrade step 3 of 19: 'GetOldConfig'.
2018/08/06 22:32:26 CLSRSC-464: Starting retrieval of the cluster configuration data
2018/08/06 22:32:40 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minutes.
2018/08/06 22:34:19 CLSRSC-693: CRS entities validation completed successfully.
2018/08/06 22:34:26 CLSRSC-515: Starting OCR manual backup.
2018/08/06 22:34:38 CLSRSC-516: OCR manual backup successful.
2018/08/06 22:34:47 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2018/08/06 22:34:48 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.
2018/08/06 22:34:48 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2018/08/06 22:34:48 CLSRSC-615:
 3. The last node to downgrade cannot be a Leaf node.
2018/08/06 22:34:59 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2018/08/06 22:34:59 CLSRSC-595: Executing upgrade step 4 of 19: 'GenSiteGUIDs'.
2018/08/06 22:35:02 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2018/08/06 22:35:08 CLSRSC-363: User ignored prerequisites during installation
2018/08/06 22:35:21 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2018/08/06 22:35:35 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2018/08/06 22:35:35 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2018/08/06 22:37:17 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2018/08/06 22:37:17 CLSRSC-482: Running command: '/u01/app/12.2.0/grid/bin/crsctl start rollingupgrade 18.0.0.0.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2018/08/06 22:37:21 CLSRSC-482: Running command: '/u01/app/18.0.0/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/12.2.0/grid 
-oldCRSVersion 12.2.0.1.0 -firstNode true -startRolling false '

ASM configuration upgraded in local node successfully.

2018/08/06 22:37:54 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2018/08/06 22:38:02 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2018/08/06 22:38:56 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2018/08/06 22:39:05 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2018/08/06 22:39:06 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2018/08/06 22:39:25 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2018/08/06 22:39:25 CLSRSC-595: Executing upgrade step 12 of 19: 'UpgradeAFD'.
2018/08/06 22:41:17 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2018/08/06 22:41:34 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2018/08/06 22:41:34 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2018/08/06 22:42:30 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1-12cr2'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1-12cr2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/08/06 22:43:39 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2018/08/06 22:43:57 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1-12cr2'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1-12cr2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on 'rac1-12cr2'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1-12cr2'
CRS-2676: Start of 'ora.evmd' on 'rac1-12cr2' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1-12cr2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1-12cr2'
CRS-2676: Start of 'ora.gpnpd' on 'rac1-12cr2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1-12cr2'
CRS-2676: Start of 'ora.gipcd' on 'rac1-12cr2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1-12cr2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1-12cr2' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac1-12cr2'
CRS-2672: Attempting to start 'ora.cssd' on 'rac1-12cr2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1-12cr2'
CRS-2676: Start of 'ora.diskmon' on 'rac1-12cr2' succeeded
CRS-2676: Start of 'ora.crf' on 'rac1-12cr2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1-12cr2' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1-12cr2'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1-12cr2'
CRS-2676: Start of 'ora.ctssd' on 'rac1-12cr2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1-12cr2'
CRS-2676: Start of 'ora.crsd' on 'rac1-12cr2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1-12cr2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1-12cr2'
CRS-2676: Start of 'ora.asm' on 'rac1-12cr2' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: rac1-12cr2
CRS-2673: Attempting to stop 'ora.rac1-12cr2.vip' on 'rac2-12cr2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac2-12cr2'
CRS-2672: Attempting to start 'ora.ons' on 'rac1-12cr2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac2-12cr2' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac2-12cr2'
CRS-2677: Stop of 'ora.rac1-12cr2.vip' on 'rac2-12cr2' succeeded
CRS-2672: Attempting to start 'ora.rac1-12cr2.vip' on 'rac1-12cr2'
CRS-2677: Stop of 'ora.scan1.vip' on 'rac2-12cr2' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac1-12cr2'
CRS-2676: Start of 'ora.scan1.vip' on 'rac1-12cr2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac1-12cr2'
CRS-2676: Start of 'ora.ons' on 'rac1-12cr2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac1-12cr2' succeeded
CRS-2676: Start of 'ora.rac1-12cr2.vip' on 'rac1-12cr2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'rac1-12cr2'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'rac1-12cr2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1-12cr2'
CRS-2676: Start of 'ora.asm' on 'rac1-12cr2' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac1-12cr2'
CRS-2676: Start of 'ora.DATA.dg' on 'rac1-12cr2' succeeded
CRS-2672: Attempting to start 'ora.orcl.db' on 'rac1-12cr2'
CRS-2674: Start of 'ora.orcl.db' on 'rac1-12cr2' failed
CRS-2679: Attempting to clean 'ora.orcl.db' on 'rac1-12cr2'
CRS-2678: 'ora.orcl.db' on 'rac1-12cr2' has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.
CRS-5802: Unable to start the agent process
CRS-6016: Resource auto-start has completed for server rac1-12cr2
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/08/06 22:57:33 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2018/08/06 22:57:58 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
2018/08/06 22:58:04 CLSRSC-474: Initiating upgrade of resource types
2018/08/06 22:58:34 CLSRSC-475: Upgrade of resource types successfully initiated.
2018/08/06 22:58:53 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2018/08/06 22:59:02 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1-12cr2 ~]#

Node-2: rac2-12cr2 (rootupgrade.sh)
======================================

[root@rac2-12cr2 ~]# sh /u01/app/18.0.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/18.0.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/18.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac2-12cr2/crsconfig/rootcrs_rac2-12cr2_2018-08-06_11-01-13PM.log
2018/08/06 23:01:21 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2018/08/06 23:01:21 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/08/06 23:03:02 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/08/06 23:03:02 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2018/08/06 23:03:04 CLSRSC-595: Executing upgrade step 3 of 19: 'GetOldConfig'.
2018/08/06 23:03:05 CLSRSC-464: Starting retrieval of the cluster configuration data
2018/08/06 23:04:34 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2018/08/06 23:04:34 CLSRSC-595: Executing upgrade step 4 of 19: 'GenSiteGUIDs'.
2018/08/06 23:04:34 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2018/08/06 23:04:35 CLSRSC-363: User ignored prerequisites during installation
2018/08/06 23:04:37 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2018/08/06 23:04:39 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2018/08/06 23:04:40 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.

ASM configuration upgraded in local node successfully.

2018/08/06 23:05:11 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2018/08/06 23:06:10 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2018/08/06 23:07:29 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2018/08/06 23:07:30 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2018/08/06 23:07:34 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2018/08/06 23:07:34 CLSRSC-595: Executing upgrade step 12 of 19: 'UpgradeAFD'.
2018/08/06 23:09:10 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2018/08/06 23:09:11 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2018/08/06 23:09:11 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2018/08/06 23:09:40 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2-12cr2'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2-12cr2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/08/06 23:10:34 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2018/08/06 23:10:35 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2-12cr2'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2-12cr2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on 'rac2-12cr2'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2-12cr2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2-12cr2' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac2-12cr2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2-12cr2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2-12cr2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2-12cr2'
CRS-2676: Start of 'ora.gipcd' on 'rac2-12cr2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2-12cr2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2-12cr2' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac2-12cr2'
CRS-2672: Attempting to start 'ora.cssd' on 'rac2-12cr2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2-12cr2'
CRS-2676: Start of 'ora.diskmon' on 'rac2-12cr2' succeeded
CRS-2676: Start of 'ora.crf' on 'rac2-12cr2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2-12cr2' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac2-12cr2'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2-12cr2'
CRS-2676: Start of 'ora.ctssd' on 'rac2-12cr2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2-12cr2'
CRS-2676: Start of 'ora.crsd' on 'rac2-12cr2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac2-12cr2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2-12cr2'
CRS-2676: Start of 'ora.asm' on 'rac2-12cr2' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: rac2-12cr2
CRS-2673: Attempting to stop 'ora.rac2-12cr2.vip' on 'rac1-12cr2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac1-12cr2'
CRS-2672: Attempting to start 'ora.ons' on 'rac2-12cr2'
CRS-2677: Stop of 'ora.rac2-12cr2.vip' on 'rac1-12cr2' succeeded
CRS-2672: Attempting to start 'ora.rac2-12cr2.vip' on 'rac2-12cr2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac1-12cr2' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac1-12cr2'
CRS-2677: Stop of 'ora.scan1.vip' on 'rac1-12cr2' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac2-12cr2'
CRS-2676: Start of 'ora.scan1.vip' on 'rac2-12cr2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac2-12cr2'
CRS-2676: Start of 'ora.ons' on 'rac2-12cr2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac2-12cr2' succeeded
CRS-2676: Start of 'ora.rac2-12cr2.vip' on 'rac2-12cr2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'rac2-12cr2'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'rac2-12cr2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2-12cr2'
CRS-2676: Start of 'ora.asm' on 'rac2-12cr2' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac2-12cr2'
CRS-2676: Start of 'ora.DATA.dg' on 'rac2-12cr2' succeeded
CRS-2672: Attempting to start 'ora.orcl.db' on 'rac1-12cr2'
CRS-2672: Attempting to start 'ora.orcl.db' on 'rac2-12cr2'
CRS-2674: Start of 'ora.orcl.db' on 'rac1-12cr2' failed
CRS-2679: Attempting to clean 'ora.orcl.db' on 'rac1-12cr2'
CRS-2674: Start of 'ora.orcl.db' on 'rac2-12cr2' failed
CRS-2679: Attempting to clean 'ora.orcl.db' on 'rac2-12cr2'
CRS-2678: 'ora.orcl.db' on 'rac1-12cr2' has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.
CRS-2678: 'ora.orcl.db' on 'rac2-12cr2' has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.
CRS-5802: Unable to start the agent process
===== Summary of resource auto-start failures follows =====
CRS-2807: Resource 'ora.orcl.db' failed to start automatically.
CRS-6016: Resource auto-start has completed for server rac2-12cr2
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/08/06 23:24:09 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2018/08/06 23:25:43 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
Start upgrade invoked..
2018/08/06 23:25:52 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2018/08/06 23:25:52 CLSRSC-482: Running command: '/u01/app/18.0.0/grid/bin/crsctl set crs activeversion'
Started to upgrade the active version of Oracle Clusterware. This operation may take a few minutes.
Started to upgrade CSS.
CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade CRS.
CRS was successfully upgraded.
Successfully upgraded the active version of Oracle Clusterware.
Oracle Clusterware active version was successfully set to 18.0.0.0.0.
2018/08/06 23:26:59 CLSRSC-479: Successfully set Oracle Clusterware active version
2018/08/06 23:26:59 CLSRSC-476: Finishing upgrade of resource types
2018/08/06 23:27:00 CLSRSC-477: Successfully completed upgrade of resource types
2018/08/06 23:28:57 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2018/08/06 23:29:21 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac2-12cr2 ~]#
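
Note: during both rootupgrade.sh runs the ora.orcl.db resource failed to auto-start (CRS-2674 / CRS-0267 above). Once the clusterware stack is up on all nodes, the usual first step is to check the resource state and attempt a manual start; a sketch, assuming the database resource is named orcl:

[grid@rac1-12cr2 ~]$ crsctl stat res ora.orcl.db -t
[oracle@rac1-12cr2 ~]$ srvctl start database -d orcl
[oracle@rac1-12cr2 ~]$ srvctl status database -d orcl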

Check the Version
===================
[grid@rac1-12cr2 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
[grid@rac1-12cr2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rac1-12cr2] is [18.0.0.0.0]
[grid@rac1-12cr2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]
[grid@rac1-12cr2 ~]$

[grid@rac2-12cr2 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM2
ORACLE_HOME = [/home/oracle] ? /u01/app/18.0.0/grid
The Oracle base has been set to /u01/app/grid
[grid@rac2-12cr2 ~]$
[grid@rac2-12cr2 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
[grid@rac2-12cr2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rac2-12cr2] is [18.0.0.0.0]
[grid@rac2-12cr2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]
[grid@rac2-12cr2 ~]$
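
A quick cross-node health check is also worthwhile at this point; a single call reports the CRS, CSS, and Event Manager status of every node:

[grid@rac1-12cr2 ~]$ crsctl check cluster -all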


I will post the major RDBMS upgrade steps shortly.



Sunday, August 5, 2018

Extending the ROOT (/) filesystem with an added disk in Oracle Linux 7.5 (xfs_growfs)



[root@rac1-12cr2 ~]# fdisk -l

Disk /dev/sdh: 19.3 GB, 19327352832 bytes, 37748736 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x843d33d1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    20971519    10484736   83  Linux

Disk /dev/sdf: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x459c80a0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1            2048    41943039    20970496   83  Linux

Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x28354c5a

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    20971519    10484736   83  Linux

Disk /dev/sde: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xb12a1689

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1            2048    41943039    20970496   83  Linux

Disk /dev/sdd: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xc2ab2871

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048    41943039    20970496   83  Linux

Disk /dev/sdg: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x5d17435b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1            2048    41943039    20970496   83  Linux

Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b547b

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   209715199   103808000   8e  Linux LVM

Disk /dev/mapper/ol-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/ol-swap: 5301 MB, 5301600256 bytes, 10354688 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/ol-home: 47.3 GB, 47303360512 bytes, 92389376 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@rac1-12cr2 ~]# fdisk /dev/sdh
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xe0dc5856.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-37748735, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-37748735, default 37748735):
Using default value 37748735
Partition 1 of type Linux and of size 18 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@rac1-12cr2 ~]# pvcreate /dev/sdh1
  Physical volume "/dev/sdh1" successfully created.

[root@rac1-12cr2 ~]# vgdisplay
  --- Volume group ---
  VG Name               ol
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <99.00 GiB
  PE Size               4.00 MiB
  Total PE              25343
  Alloc PE / Size       25342 / 98.99 GiB
  Free  PE / Size       1 / 4.00 MiB
  VG UUID               K1vOu8-ISS5-lq6T-FNCn-FzYP-gPgp-qeuFdL

[root@rac1-12cr2 ~]# vgextend ol /dev/sdh1
  Volume group "ol" successfully extended

[root@rac1-12cr2 ~]# vgdisplay
  --- Volume group ---
  VG Name               ol
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               116.99 GiB
  PE Size               4.00 MiB
  Total PE              29950
  Alloc PE / Size       25342 / 98.99 GiB
  Free  PE / Size       4608 / 18.00 GiB
  VG UUID               K1vOu8-ISS5-lq6T-FNCn-FzYP-gPgp-qeuFdL

[root@rac1-12cr2 ~]# pvscan
  PV /dev/sda2   VG ol              lvm2 [<99.00 GiB / 4.00 MiB free]
  PV /dev/sdh1   VG ol              lvm2 [<18.00 GiB / <18.00 GiB free]
  Total: 2 [116.99 GiB] / in use: 2 [116.99 GiB] / in no VG: 0 [0   ]


[root@rac1-12cr2 ~]# lvextend /dev/ol/root /dev/sdh1
  Size of logical volume ol/root changed from 50.00 GiB (12800 extents) to <68.00 GiB (17407 extents).
  Logical volume ol/root successfully resized.

[root@rac1-12cr2 ~]# resize2fs /dev/ol/root
resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/ol/root
Couldn't find valid filesystem superblock.

Note: resize2fs works only on ext2/ext3/ext4 filesystems. The root filesystem here is XFS (confirmed by the mount output below), so it must be grown with xfs_growfs instead.

[root@rac1-12cr2 ~]# mount | grep ol-root
/dev/mapper/ol-root on / type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

[root@rac1-12cr2 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             5.6G   24K  5.6G   1% /dev
tmpfs                5.7G  640M  5.0G  12% /dev/shm
tmpfs                5.7G  9.9M  5.6G   1% /run
tmpfs                5.7G     0  5.7G   0% /sys/fs/cgroup
/dev/mapper/ol-root   50G   23G   28G  45% /
/dev/sda1           1014M  217M  798M  22% /boot
/dev/mapper/ol-home   45G   42M   44G   1% /home
12.2                 448G  435G   13G  98% /media/sf_12.2
tmpfs                1.2G  4.0K  1.2G   1% /run/user/42
tmpfs                1.2G   28K  1.2G   1% /run/user/54321
/dev/sr0              56M   56M     0 100% /run/media/oracle/VBox_GAs_5.2.16
tmpfs                1.2G     0  1.2G   0% /run/user/0

[root@rac1-12cr2 ~]# xfs_growfs /dev/mapper/ol-root
meta-data=/dev/mapper/ol-root    isize=256    agcount=4, agsize=3276800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=13107200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 13107200 to 17824768

[root@rac1-12cr2 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             5.6G   24K  5.6G   1% /dev
tmpfs                5.7G  640M  5.0G  12% /dev/shm
tmpfs                5.7G  9.9M  5.6G   1% /run
tmpfs                5.7G     0  5.7G   0% /sys/fs/cgroup
/dev/mapper/ol-root   68G   23G   46G  33% /
/dev/sda1           1014M  217M  798M  22% /boot
/dev/mapper/ol-home   45G   42M   44G   1% /home
12.2                 448G  435G   13G  98% /media/sf_12.2
tmpfs                1.2G  4.0K  1.2G   1% /run/user/42
tmpfs                1.2G   28K  1.2G   1% /run/user/54321
/dev/sr0              56M   56M     0 100% /run/media/oracle/VBox_GAs_5.2.16
tmpfs                1.2G     0  1.2G   0% /run/user/0
[root@rac1-12cr2 ~]#
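
In short, the whole extension boils down to five commands; a condensed recap as root, assuming /dev/sdh is the newly added disk and ol/root is the logical volume behind / (exactly as above):

fdisk /dev/sdh                     # create partition /dev/sdh1 (n, p, defaults, w)
pvcreate /dev/sdh1                 # register the partition as an LVM physical volume
vgextend ol /dev/sdh1              # add it to the existing volume group "ol"
lvextend /dev/ol/root /dev/sdh1    # grow the root LV into the new free extents
xfs_growfs /dev/mapper/ol-root     # grow the mounted XFS filesystem online (not resize2fs)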

Friday, July 27, 2018

Installed Oracle 18c with Grid Infrastructure (+ASM) with Multi-Tenant Database


Check the "lsinvetory" from Oracle Home
=========================================
[oracle@sharddb3 OPatch]$ ./opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.14
Copyright (c) 2018, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle/product/18.0.0/dbhome_1
Central Inventory : /u01/app/oraInventory
        from           : /u01/app/oracle/product/18.0.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.14
OUI version       : 12.2.0.4.0
Log file location : /u01/app/oracle/product/18.0.0/dbhome_1/cfgtoollogs/opatch/opatch2018-07-27_15-17-37PM_1.log
Lsinventory Output file location : /u01/app/oracle/product/18.0.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2018-07-27_15-17-37PM.txt
--------------------------------------------------------------------------------
Local Machine Information::
Hostname: sharddb3
ARU platform id: 226
ARU platform description:: Linux x86-64
Installed Top-level Products (1):

Oracle Database 18c                                                  18.0.0.0.0
There are 1 products installed in this Oracle Home.

Interim patches (4) :

Patch  27908644     : applied on Wed Jul 18 13:44:11 EDT 2018
Unique Patch ID:  22153180
Patch description:  "UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171"
   Created on 4 May 2018, 01:21:02 hrs PST8PDT
   Bugs fixed:
     27908644

Patch  27923415     : applied on Wed Jul 18 13:41:38 EDT 2018
Unique Patch ID:  22239273
Patch description:  "OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)"
   Created on 15 Jul 2018, 10:33:22 hrs PST8PDT
   Bugs fixed:
     27304131, 27539876, 27952586, 27642235, 27636900, 27461740

Patch  28090553     : applied on Wed Jul 18 13:40:01 EDT 2018
Unique Patch ID:  22256940
Patch description:  "OCW RELEASE UPDATE 18.3.0.0.0 (28090553)"
   Created on 11 Jul 2018, 19:20:31 hrs PST8PDT
   Bugs fixed:
     12816839, 18701017, 22734786, 23698980, 23840305, 25709124, 25724089
     26299684, 26313403, 26433972, 26527054, 26586174, 26587652, 26647619
     26827699, 26860285, 26882126, 26882316, 26943660, 26996813, 27012915
     27018734, 27032726, 27034318, 27040560, 27080748, 27086406, 27092991
     27098733, 27106915, 27114112, 27121566, 27133637, 27144533, 27153755
     27166715, 27174938, 27174948, 27177551, 27177852, 27182006, 27182064
     27184253, 27204476, 27212837, 27213140, 27220610, 27222423, 27222938
     27238077, 27238258, 27249544, 27252023, 27257509, 27263677, 27265816
     27267992, 27271876, 27274143, 27285557, 27299455, 27300007, 27302415
     27309182, 27314512, 27315159, 27320985, 27334353, 27338838, 27346984
     27358232, 27362190, 27370933, 27377219, 27378959, 27379846, 27379956
     27393421, 27398223, 27399499, 27399762, 27399985, 27401618, 27403244
     27404599, 27426277, 27428790, 27430219, 27430254, 27433163, 27452897
     27458829, 27465480, 27475272, 27481406, 27481765, 27492916, 27496806
     27503318, 27503413, 27508936, 27508984, 27513114, 27519708, 27526362
     27528204, 27532009, 27534289, 27560562, 27560735, 27573154, 27573408
     27574335, 27577122, 27579969, 27581484, 27593587, 27595801, 27600706
     27609819, 27625010, 27625050, 27627992, 27654039, 27657467, 27657920
     27668379, 27682288, 27691717, 27702244, 27703242, 27708711, 27714373
     27725967, 27731346, 27734470, 27735534, 27739957, 27740854, 27747407
     27748321, 27757979, 27766679, 27768034, 27778433, 27782464, 27783059
     27786669, 27786699, 27801774, 27811439, 27839732, 27850736, 27862636
     27864737, 27865439, 27889841, 27896388, 27897639, 27906509, 27931506
     27935826, 27941514, 27957892, 27978668, 27984314, 27993298, 28023410
     28025398, 28032758, 28039471, 28039953, 28045209, 28099592, 28109698
     28174926, 28182503, 28204423, 28240153

Patch  28090523     : applied on Wed Jul 18 13:39:24 EDT 2018
Unique Patch ID:  22329768
Patch description:  "Database Release Update : 18.3.0.0.180717 (28090523)"
   Created on 14 Jul 2018, 00:03:50 hrs PST8PDT
   Bugs fixed:
     9062315, 13554903, 21547051, 21766220, 21806121, 23003564, 23310101
     24489904, 24689376, 24737581, 24925863, 25035594, 25035599, 25287072
     25348956, 25634405, 25726981, 25743479, 25824236, 25943740, 26226953
     26336101, 26423085, 26427905, 26450454, 26476244, 26598422, 26615291
     26646549, 26654411, 26731697, 26785169, 26792891, 26818960, 26822620
     26843558, 26843664, 26846077, 26894737, 26898279, 26928317, 26933599
     26956033, 26961415, 26966120, 26986173, 26992964, 27005278, 27026401
     27028251, 27030974, 27036408, 27038986, 27041253, 27044575, 27047831
     27053044, 27058530, 27060167, 27060859, 27061736, 27066451, 27066519
     27073066, 27086821, 27090765, 27101527, 27101652, 27110878, 27112686
     27119621, 27126666, 27128580, 27135647, 27143756, 27143882, 27147979
     27153641, 27155549, 27156355, 27163928, 27169796, 27181521, 27181537
     27189611, 27190851, 27193810, 27199245, 27208953, 27210038, 27210872
     27214085, 27215007, 27216224, 27221900, 27222121, 27222626, 27224987
     27226913, 27232983, 27233563, 27236052, 27236110, 27240246, 27240570
     27241221, 27241247, 27244337, 27244785, 27249215, 27250547, 27254851
     27258578, 27259386, 27259983, 27262650, 27262945, 27263276, 27263996
     27270197, 27274456, 27274536, 27275136, 27275776, 27282707, 27283029
     27283960, 27284499, 27285244, 27288230, 27292213, 27294480, 27301308
     27301568, 27302594, 27302681, 27302695, 27302711, 27302730, 27302777
     27302800, 27302960, 27304410, 27304936, 27305318, 27307868, 27310092
     27313687, 27314206, 27314390, 27318869, 27321179, 27321834, 27326204
     27329812, 27330158, 27330161, 27333658, 27333664, 27333693, 27334316
     27334648, 27335682, 27338912, 27338946, 27339115, 27339396, 27339483
     27339495, 27341036, 27345190, 27345231, 27345450, 27345498, 27346329
     27346644, 27346709, 27346949, 27347126, 27348081, 27348707, 27349393
     27352600, 27354783, 27356373, 27357773, 27358241, 27359178, 27359368
     27360126, 27364891, 27364916, 27364947, 27365139, 27365702, 27365993
     27367194, 27368850, 27372756, 27375260, 27375542, 27376871, 27378103
     27379233, 27381383, 27381656, 27384222, 27389352, 27392187, 27395404
     27395416, 27395794, 27396357, 27396365, 27396377, 27396624, 27396666
     27396672, 27396813, 27398080, 27398660, 27401637, 27405242, 27405696
     27410300, 27410595, 27412805, 27417186, 27420715, 27421101, 27422874
     27423251, 27425507, 27425622, 27426363, 27427805, 27430802, 27432338
     27432355, 27433870, 27434050, 27434193, 27434486, 27434974, 27435537
     27439835, 27441326, 27442041, 27444727, 27445330, 27445462, 27447452
     27447687, 27448162, 27450355, 27450400, 27450783, 27451049, 27451182
     27451187, 27451531, 27452760, 27453225, 27457666, 27457891, 27458164
     27459909, 27460675, 27467543, 27469329, 27471876, 27472969, 27473800
     27479358, 27483974, 27484556, 27486253, 27487795, 27489719, 27496224
     27496308, 27497950, 27498477, 27501327, 27501413, 27501465, 27502420
     27504190, 27505603, 27506774, 27508985, 27511196, 27512439, 27517818
     27518227, 27518310, 27520070, 27520900, 27522245, 27523368, 27523800
     27525909, 27532375, 27533819, 27534509, 27537472, 27544030, 27545630
     27547732, 27550341, 27551855, 27558557, 27558559, 27558861, 27560702
     27563629, 27563767, 27570318, 27577758, 27579353, 27580996, 27585755
     27585800, 27586810, 27586895, 27587672, 27591842, 27592466, 27593389
     27595973, 27599689, 27602091, 27602488, 27603841, 27604293, 27607805
     27608669, 27610269, 27613080, 27613247, 27615608, 27616657, 27617522
     27625274, 27625620, 27631506, 27634676, 27635508, 27644757, 27649707
     27652302, 27663370, 27664702, 27679488, 27679664, 27679806, 27679961
     27680162, 27680509, 27682151, 27688099, 27688692, 27690578, 27691809
     27692215, 27693713, 27697092, 27701795, 27705761, 27707544, 27709046
     27718914, 27719187, 27723002, 27726269, 27726780, 27732323, 27739006
     27740844, 27744211, 27745220, 27747869, 27748954, 27751006, 27753336
     27757567, 27772815, 27773602, 27774320, 27774539, 27779886, 27780562
     27782339, 27783289, 27786772, 27791223, 27797290, 27803665, 27807441
     27812560, 27812593, 27813267, 27815347, 27818871, 27832643, 27833369
     27834984, 27840386, 27847259, 27851757, 27861909, 27869339, 27873643
     27882176, 27892488, 27924147, 27926113, 27930478, 27934468, 27941896
     27945870, 27950708, 27952762, 27961746, 27964051, 27970265, 27971575
     27984028, 27989849, 27993289, 27994333, 27997875, 27999597, 28021205
     28022847, 28033429, 28057267, 28059199, 28072130, 28098865, 28106402
     28132287, 28169711, 28174827, 28184554, 28188330, 25929650, 28264172
--------------------------------------------------------------------------------
OPatch succeeded.

Check "lspatches" from Oracle Home
==================================
[oracle@sharddb3 OPatch]$ ./opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)
OPatch succeeded.
[oracle@sharddb3 OPatch]$ cd

Check the "Patch Storage" as a "grid" user
==========================================
[grid@sharddb3 ~]$ du -sh /u01/app/18.0.0/grid/.patch_storage/
2.5G    /u01/app/18.0.0/grid/.patch_storage/

[grid@sharddb3 ~]$ ls -lrth /u01/app/18.0.0/grid/.patch_storage/
total 52K
drwxr-xr-x.  4 grid oinstall 4.0K Jul 18 14:07 28090523_Jul_14_2018_00_03_50
drwxr-xr-x.  4 grid oinstall 4.0K Jul 18 14:08 28090553_Jul_11_2018_19_20_31
drwxr-xr-x.  4 grid oinstall 4.0K Jul 18 14:09 28090557_Jun_25_2018_00_35_26
drwxr-xr-x.  4 grid oinstall 4.0K Jul 18 14:09 28090564_Jul_4_2018_23_13_47
drwxr-xr-x.  4 grid oinstall 4.0K Jul 18 14:09 28256701_Jun_29_2018_03_28_30
drwxr-xr-x.  4 grid oinstall 4.0K Jul 18 14:11 27923415_Jul_15_2018_10_33_22
drwxr-xr-x. 14 grid oinstall 4.0K Jul 18 14:11 NApply
-rw-r--r--.  1 grid oinstall 6.1K Jul 18 14:12 record_inventory.txt
-rw-r--r--.  1 grid oinstall   92 Jul 18 14:12 LatestOPatchSession.properties
-rw-r--r--.  1 grid oinstall 6.3K Jul 18 14:12 interim_inventory.txt
drwxr-xr-x.  4 grid oinstall 4.0K Jul 18 14:12 27908644_May_4_2018_01_21_02
[grid@sharddb3 ~]$ exit

Check the "Patch Storage" as a "oracle" user
============================================
[oracle@sharddb3 ~]$ du -sh /u01/app/oracle/product/18.0.0/dbhome_1/.patch_storage/
1.4G    /u01/app/oracle/product/18.0.0/dbhome_1/.patch_storage/

[oracle@sharddb3 ~]$ ls -lrth /u01/app/oracle/product/18.0.0/dbhome_1/.patch_storage/
total 40K
drwxr-xr-x. 4 oracle oinstall 4.0K Jul 18 13:39 28090523_Jul_14_2018_00_03_50
drwxr-xr-x. 4 oracle oinstall 4.0K Jul 18 13:40 28090553_Jul_11_2018_19_20_31
drwxr-xr-x. 4 oracle oinstall 4.0K Jul 18 13:41 27923415_Jul_15_2018_10_33_22
drwxr-xr-x. 8 oracle oinstall 4.0K Jul 18 13:43 NApply
-rw-r--r--. 1 oracle oinstall 5.7K Jul 18 13:43 record_inventory.txt
-rw-r--r--. 1 oracle oinstall   92 Jul 18 13:43 LatestOPatchSession.properties
-rw-r--r--. 1 oracle oinstall 5.8K Jul 18 13:43 interim_inventory.txt
drwxr-xr-x. 4 oracle oinstall 4.0K Jul 18 13:44 27908644_May_4_2018_01_21_02
[oracle@sharddb3 ~]$

Check the "software version/release version"  using "crsctl"
===========================================================
[root@sharddb3 ~]# crsctl query has softwareversion
Oracle High Availability Services version on the local node is [18.0.0.0.0]

[root@sharddb3 ~]# crsctl query has releasepatch
Oracle Clusterware release patch level is [70732493] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28090564 28256701 ] have been applied on the local node. The release patch string is [18.3.0.0.0].

[root@sharddb3 ~]# crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]

[root@sharddb3 ~]# crsctl query has softwarepatch
Oracle Clusterware patch level on node sharddb3 is [70732493].

Check the database status using "srvctl" 
========================================
[oracle@sharddb3 ~]$ srvctl status database -d orcl
Database is running.

[oracle@sharddb3 ~]$ srvctl config database -d orcl
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/18.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/ORCL/PARAMETERFILE/spfile.270.982592731
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA
Services:
OSDBA group:
OSOPER group:
Database instance: orcl
[oracle@sharddb3 ~]$

Check the Databases including Pluggable Databases
==================================================
[oracle@sharddb3 ~]$ sqlplus /nolog
SQL*Plus: Release 18.0.0.0.0 - Production on Fri Jul 27 15:26:39 2018
Version 18.3.0.0.0
Copyright (c) 1982, 2018, Oracle.  All rights reserved.

SQL> connect sys/oracle@orcl as sysdba
Connected.

SQL> archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /u01/app/oracle/product/18.0.0/dbhome_1/dbs/arch
Oldest online log sequence     2
Current log sequence           4

SQL> show pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORCLPDB                        READ WRITE NO

SQL> alter session set container = ORCLPDB;
Session altered.

SQL> show pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 ORCLPDB                        READ WRITE NO

SQL> connect sys/oracle@192.168.2.60:1521/orclpdb as sysdba
Connected.

SQL> connect sys/oracle@orcl as sysdba
Connected.

Oracle 18c includes Oracle Trace File Analyzer (TFA)
====================================================
[root@sharddb3 ~]# sh /u01/app/oracle/product/18.0.0/dbhome_1/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/18.0.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Do you want to setup Oracle Trace File Analyzer (TFA) now ? yes|[no] :
yes
Installing Oracle Trace File Analyzer (TFA).
Log File: /u01/app/oracle/product/18.0.0/dbhome_1/install/
root_sharddb3_2018-07-27_14-06-45-665345917.log

Finished installing Oracle Trace File Analyzer (TFA)
[root@sharddb3 ~]#
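
Once TFA is installed, you can verify it and collect diagnostics with "tfactl". A quick sketch
(these commands were not run in this session, and the tfactl path may differ in your install):

[root@sharddb3 ~]# /u01/app/oracle/product/18.0.0/dbhome_1/bin/tfactl print status
[root@sharddb3 ~]# /u01/app/oracle/product/18.0.0/dbhome_1/bin/tfactl diagcollect -since 1h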

Saturday, July 14, 2018

Oracle has released a BUG fix for "Oracle Exadata Storage Server Patch (12.2.1.1.7) encountered bugs - 'CELLSRV' Restarting On Exadata x4-2 and Exadata x6-2"


Two weeks back we encountered a BUG after applying the APR 2018 PSU, which included the Oracle Exadata Storage Server 12.2.1.1.7 version.
CELLSRV then started giving some issues (the CELLSRV process was restarting frequently). I have posted a blog entry regarding the issue.

Oracle Exadata Storage Server Patch (12.2.1.1.7) encountered bugs - 'CELLSRV' Restarting On Exadata x4-2 and Exadata x6-2
http://yvrk1973.blogspot.com/2018/06/oracle-exadata-storage-server-patch.html

Thanks to the Oracle Exadata development team, who recently developed an interim patch containing the fix for
Oracle Exadata Storage Server version 12.2.1.1.7.


Patch: p28181789_122117_Linux-x86-64.zip

The zip file contains two rpms and this README.txt.

a.1) cell-12.2.1.1.7_LINUX.X64_180506-1-rpm.bin - 
     This is the base release rpm. This is included in case a rollback is needed.

a.2) cell-12.2.1.1.7.28181789V1_LINUX.X64_180706-1-rpm.bin - 
     This is the interim patch that contains the fix for the bug listed below.  
     This patch will replace existing storage server software.

a.3) ========== Bug fixes or Diagnostics included in this ONEOFF ===========

Bug Fixes: 28181789    ORA-07445: [0000000000000000+0] AFTER UPGRADING CELL TO 12.2.1.1.7


Non-Rolling
============================
Copy the Cell Oneoff RPM File(s) to Target Cell Nodes

   1.a) If the patch has already been downloaded to a database server and unzipped under /tmp, move on to 1.b.
        Otherwise, download and unzip p28181789_122117_Linux-x86-64.zip under /tmp/ on one of the database servers.
       # unzip p28181789_122117_Linux-x86-64.zip -d /tmp

   1.b) Change directory to the location where oneoff cell rpm is located
       # cd /tmp/RPM_patch_12.2.1.1.7

   1.c) Create temporary working directory /var/log/exadatatmp/SAVE_patch_28181789 on cells
       # dcli -l root -g cell_group mkdir -p /var/log/exadatatmp/SAVE_patch_28181789

   1.d) Copy new cell RPM bin file to /var/log/exadatatmp/SAVE_patch_28181789 on cells
       # dcli -l root -g cell_group -f /tmp/RPM_patch_12.2.1.1.7/cell-12.2.1.1.7.28181789V1_LINUX.X64_180706-1-rpm.bin 
        -d /var/log/exadatatmp/SAVE_patch_28181789
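
   As a quick sanity check (not part of the README), you can confirm the RPM bin file landed on
   all cells before proceeding:

       # dcli -l root -g cell_group ls -l /var/log/exadatatmp/SAVE_patch_28181789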

2. Shut Down the Cluster
   ------------------------
   If you want to stop the cluster from one node, execute the following command from that node.
   [root@dbnode]# crsctl stop cluster -all

   If you want to stop CRS on each node, execute the following command on each database node.
   [root@dbnode]# crsctl stop crs

3. Install the RPM on Each Cell
------------------------------

   3.a) Check if the cell-12.2.1.1.7.28181789V1_LINUX.X64_180706-1-rpm.bin has execute permissions and enable them 
        if needed.

   3.b) Run the following command to apply the cell interim patch.
   
        [root@cell ~]# /var/log/exadatatmp/SAVE_patch_28181789/cell-12.2.1.1.7.28181789V1_LINUX.X64_180706-1-rpm.bin 
                       --doall --force
        ...
        [INFO] Upgrade was successful.
        "Upgrade was successful" is displayed on success.

Note: Refer to section (f) for any known issues during the installation

   3.c) Run the following command to verify the installation.

        [root@cell ~]# rpm -qa | grep ^cell-
        cell-12.2.1.1.7_LINUX.X64_180706-1.x86_64

   3.d) Log out and log back in to the shell to pick up the new command path.

       Then, run the following command to verify all services are running.
       [root@cell ~]# cellcli -e list cell attributes cellsrvStatus,msStatus,rsStatus
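
       To check all cells in one shot from a database server (a convenience sketch, not part of
       the README):

       [root@dbnode ~]# dcli -l root -g cell_group "cellcli -e list cell attributes name,cellsrvStatus,msStatus,rsStatus"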

4. Restart the Cluster
----------------------
   If you want to start the cluster from one node, execute the following command from that node.
   [root@dbnode]# crsctl start cluster -all

   If you want to start CRS on each node, execute the following command on each node.
   [root@dbnode]# crsctl start crs


5. Remove the Patch File
   -------------------------
   After successful RPM installation, you can remove the temporary patch staging directory to save disk space.

[root@dbnode ~]# dcli -l root -g cell_group rm -rf /var/log/exadatatmp/SAVE_patch_28181789

==============================================================
e) Rolling Back This Interim Patch on Exadata Storage Servers
==============================================================
   To roll back, follow either the rolling or non-rolling steps above
   to install cell-12.2.1.1.7_LINUX.X64_180506-1-rpm.bin.
   This rpm is included in the zip file for this patch.

Note: The patch can also be applied in a "ROLLING" manner. For more details, please refer to MOS with the BUG number.
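
To confirm the cell software version across all storage servers after patching (a sketch, assuming
the same cell_group file used above):

# dcli -l root -g cell_group imageinfo -ver
# dcli -l root -g cell_group "rpm -qa | grep ^cell-"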

Saturday, July 7, 2018

Downgrading Grid Infrastructure from V12.2.0.1.0 to V12.1.0.2.0

Downgrading Grid Infrastructure from V12.2.0.1.0 to V12.1.0.2.0


We recently upgraded Grid Infrastructure from 12.1.0.2.0 to 12.2.0.1.0 on Oracle Exadata x7-2 and, for
some reason, planned to downgrade Grid Infrastructure from 12.2.0.1.0 back to 12.1.0.2.0.
It is a straightforward procedure, and the Oracle documentation covers it well.

Delete the Oracle Grid Infrastructure 12c Release 2 (12.2) Management Database

==================================================================================
dbca -silent -deleteDatabase -sourceDB -MGMTDB

[oracle@rac2-12c ~]$ ps -ef | grep pmon
oracle    5679     1  0 07:09 ?        00:00:00 asm_pmon_+ASM2
oracle    6478     1  0 07:09 ?        00:00:00 apx_pmon_+APX2
oracle    6755     1  0 07:10 ?        00:00:00 mdb_pmon_-MGMTDB
oracle   16873     1  0 07:14 ?        00:00:00 ora_pmon_contdb2
oracle   26100 25673  0 07:25 pts/1    00:00:00 grep pmon
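
After the dbca delete completes, you can confirm the Management Database is gone (a sketch; the
output was not captured in this session):

[oracle@rac2-12c ~]$ srvctl status mgmtdb
[oracle@rac2-12c ~]$ ps -ef | grep pmon | grep MGMTDB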

Run "rootcrs.sh -downgrade" to downgrade Oracle Grid Infrastructure on all nodes except the first node.
========================================================================================================
[oracle@rac2-12c ~]$ su - root
Password:
[root@rac2-12c ~]# . oraenv
ORACLE_SID = [root] ? +ASM2
The Oracle base has been set to /u01/app/oracle

[root@rac2-12c ~]# /u01/app/12.2.0/grid/crs/install/rootcrs.sh -downgrade

Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/crsdata/rac2-12c/crsconfig/crsdowngrade_rac2-12c_2018-07-03_07-27-42AM.log
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2-12c'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'rac2-12c'
CRS-2673: Attempting to stop 'ora.chad' on 'rac1-12c'
CRS-2673: Attempting to stop 'ora.chad' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.acfs_dg.vol1.acfs' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.contdb.serv2.svc' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.qosmserver' on 'rac2-12c'
CRS-2677: Stop of 'ora.contdb.serv2.svc' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.contdb.db' on 'rac2-12c'
CRS-2677: Stop of 'ora.acfs_dg.vol1.acfs' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.ACFS_DG.VOL1.advm' on 'rac2-12c'
CRS-2677: Stop of 'ora.ACFS_DG.VOL1.advm' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'rac2-12c'
CRS-2677: Stop of 'ora.contdb.db' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.ACFS_DG.dg' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac2-12c'
CRS-2673: Attempting to stop 'xag.gg_1-vip.vip' on 'rac2-12c'
CRS-2677: Stop of 'ora.ACFS_DG.dg' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.rac2-12c.vip' on 'rac2-12c'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2-12c'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'rac2-12c'
CRS-2677: Stop of 'ora.cvu' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac2-12c'
CRS-2677: Stop of 'xag.gg_1-vip.vip' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.chad' on 'rac1-12c' succeeded
CRS-2677: Stop of 'ora.chad' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'rac2-12c'
CRS-2677: Stop of 'ora.scan2.vip' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.qosmserver' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.rac2-12c.vip' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.mgmtdb' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'rac2-12c'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.proxy_advm' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.MGMTLSNR' on 'rac2-12c' succeeded
CRS-2672: Attempting to start 'ora.MGMTLSNR' on 'rac1-12c'
CRS-2672: Attempting to start 'ora.qosmserver' on 'rac1-12c'
CRS-2672: Attempting to start 'ora.scan2.vip' on 'rac1-12c'
CRS-2672: Attempting to start 'ora.scan3.vip' on 'rac1-12c'
CRS-2672: Attempting to start 'ora.cvu' on 'rac1-12c'
CRS-2672: Attempting to start 'ora.rac2-12c.vip' on 'rac1-12c'
CRS-2672: Attempting to start 'xag.gg_1-vip.vip' on 'rac1-12c'
CRS-2676: Start of 'ora.cvu' on 'rac1-12c' succeeded
CRS-2676: Start of 'ora.scan2.vip' on 'rac1-12c' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'rac1-12c'
CRS-2676: Start of 'ora.scan3.vip' on 'rac1-12c' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'rac1-12c'
CRS-2676: Start of 'ora.rac2-12c.vip' on 'rac1-12c' succeeded
CRS-2676: Start of 'xag.gg_1-vip.vip' on 'rac1-12c' succeeded
CRS-2676: Start of 'ora.MGMTLSNR' on 'rac1-12c' succeeded
CRS-2672: Attempting to start 'ora.mgmtdb' on 'rac1-12c'
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'rac1-12c' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'rac1-12c' succeeded
CRS-2676: Start of 'ora.qosmserver' on 'rac1-12c' succeeded
CRS-2676: Start of 'ora.mgmtdb' on 'rac1-12c' succeeded
CRS-2672: Attempting to start 'ora.chad' on 'rac1-12c'
CRS-2676: Start of 'ora.chad' on 'rac1-12c' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac2-12c'
CRS-2677: Stop of 'ora.ons' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac2-12c'
CRS-2677: Stop of 'ora.net1.network' on 'rac2-12c' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2-12c' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.crf' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2-12c'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2-12c'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2-12c'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2-12c'
CRS-2677: Stop of 'ora.ctssd' on 'rac2-12c' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2-12c'
CRS-2677: Stop of 'ora.cssd' on 'rac2-12c' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2-12c'
CRS-2677: Stop of 'ora.gipcd' on 'rac2-12c' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2-12c' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/07/03 07:30:20 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2018/07/03 07:30:47 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2018/07/03 07:30:47 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2018/07/03 07:31:12 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2018/07/03 07:31:12 CLSRSC-591: successfully downgraded Oracle Clusterware stack on this node
[root@rac2-12c ~]#

Execute the same command on the other node, i.e. rac1-12c
==================================================
[oracle@rac1-12c ~]$ su - root
Password:
[root@rac1-12c ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[root@rac1-12c ~]# /u01/app/12.2.0/grid/crs/install/rootcrs.sh -downgrade

Set Oracle Grid Infrastructure 12c Release 1 (12.1) Grid home as the active Oracle Clusterware home
======================================================================================================
$ cd /u01/app/12.1.0.2/grid/oui/bin
$./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true 
ORACLE_HOME=/u01/app/12.1.0.2/grid "CLUSTER_NODES=rac1-12c,rac2-12c"

Start the 12.1 Oracle Clusterware stack on all nodes.
====================================================
[oracle@rac1-12c ~]$ su - root
Password:
[root@rac1-12c ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle

[root@rac1-12c ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

[root@rac1-12c ~]# ps -ef | grep pmon
oracle   10415     1  0 07:49 ?        00:00:00 asm_pmon_+ASM1
root     16414  4942  0 07:52 pts/1    00:00:00 grep pmon

[root@rac2-12c bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

[root@rac2-12c bin]# ps -ef | grep pmon
oracle    5059     1  0 07:50 ?        00:00:00 asm_pmon_+ASM2
root      9921 26564  0 07:51 pts/1    00:00:00 grep pmon

On any node, remove the MGMTDB resource as follows:
===================================================
[oracle@rac1-12c ~]$ . oraenv
ORACLE_SID = [contdb2] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@rac1-12c ~]$
[oracle@rac1-12c ~]$
[oracle@rac1-12c ~]$ srvctl remove mgmtdb
Remove the database _mgmtdb? (y/[n]) y
[oracle@rac1-12c ~]$

[oracle@rac1-12c templates]$ ls
MGMTSeed_Database.ctl    MGMTSeed_Database.dfb  mgmtseed_pdb.xml  pdbseed.dfb
MGMTSeed_Database.dbc  mgmtseed_pdb.dfb       New_Database.dbt  pdbseed.xml
[oracle@rac1-12c templates]$ pwd
/u01/app/12.1.0.2/grid/assistants/dbca/templates
[oracle@rac1-12c templates]$
[oracle@rac1-12c templates]$ cd ../../..
[oracle@rac1-12c grid]$ cd bin/

Create the MGMTDB in silent mode using the templates
====================================================
[oracle@rac1-12c bin]$ ./dbca -silent -createDatabase -sid -MGMTDB -createAsContainerDatabase true
-templateName MGMTSeed_Database.dbc 
-gdbName _mgmtdb -storageType ASM -diskGroupName +DATA 
-datafileJarLocation /u01/app/12.1.0.2/grid/assistants/dbca/templates 
-characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck

Registering database with Oracle Grid Infrastructure
5% complete
Copying database files
7% complete
9% complete
16% complete
23% complete
30% complete
37% complete
41% complete
Creating and starting Oracle instance
43% complete
48% complete
49% complete
50% complete
55% complete
60% complete
61% complete
64% complete
Completing Database Creation
68% complete
79% complete
89% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb/_mgmtdb1.log" for further details.
[oracle@rac1-12c bin]$

Check the log File:
=====================
[oracle@rac1-12c ~]$ cat /u01/app/oracle/cfgtoollogs/dbca/_mgmtdb/_mgmtdb1.log

+DATA has enough space. Required space is 1566 MB , available space is 55751 MB.
File Validations Successful.

Validation of server pool succeeded.
Registering database with Oracle Grid Infrastructure
DBCA_PROGRESS : 5%
Copying database files
DBCA_PROGRESS : 7%
DBCA_PROGRESS : 9%
DBCA_PROGRESS : 16%
DBCA_PROGRESS : 23%
DBCA_PROGRESS : 30%
DBCA_PROGRESS : 37%
DBCA_PROGRESS : 41%
Creating and starting Oracle instance
DBCA_PROGRESS : 43%
DBCA_PROGRESS : 48%
DBCA_PROGRESS : 49%
DBCA_PROGRESS : 50%
DBCA_PROGRESS : 55%
DBCA_PROGRESS : 60%
DBCA_PROGRESS : 61%
DBCA_PROGRESS : 64%
Completing Database Creation
DBCA_PROGRESS : 68%
DBCA_PROGRESS : 79%
DBCA_PROGRESS : 89%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/_mgmtdb.
Database Information:
Global Database Name:_mgmtdb
System Identifier(SID):-MGMTDB
[oracle@rac1-12c ~]$
[oracle@rac1-12c bin]$ srvctl status MGMTDB
Database is enabled
Instance -MGMTDB is running on node rac1-12c
[oracle@rac1-12c bin]$

Check the MGMTDB instance
==========================
[oracle@rac1-12c bin]$ ps -ef | grep pmon
oracle   10415     1  0 07:49 ?        00:00:00 asm_pmon_+ASM1
oracle   23026     1  0 08:06 ?        00:00:00 mdb_pmon_-MGMTDB
oracle   26089 18664  0 08:14 pts/1    00:00:00 grep pmon
[oracle@rac1-12c bin]$

[oracle@rac1-12c ~]$ su - root
Password:
[root@rac1-12c ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle

[root@rac1-12c ~]# cd /u01/app/12.1.0.2/grid/bin/
[root@rac1-12c bin]# crsctl modify res ora.crf -attr ENABLED=1 -init

[root@rac1-12c bin]# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'rac1-12c'
CRS-2676: Start of 'ora.crf' on 'rac1-12c' succeeded
[root@rac1-12c bin]#

[root@rac2-12c ~]# crsctl modify res ora.crf -attr ENABLED=1 -init
[root@rac2-12c ~]# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'rac2-12c'
CRS-2676: Start of 'ora.crf' on 'rac2-12c' succeeded
[root@rac2-12c ~]#
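
You can double-check the resource state afterwards with (a quick sketch):

[root@rac1-12c bin]# crsctl stat res ora.crf -init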


Check the GI version 
=====================
[root@rac1-12c bin]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]

[root@rac1-12c bin]# crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
[root@rac1-12c bin]#
[root@rac1-12c bin]# exit
logout

[oracle@rac1-12c ~]$ . oraenv
ORACLE_SID = [contdb] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@rac1-12c ~]$ ps -ef | grep pmon
oracle   10415     1  0 07:49 ?        00:00:00 asm_pmon_+ASM1
oracle   23026     1  0 08:06 ?        00:00:00 mdb_pmon_-MGMTDB
oracle   27904 18664  0 08:19 pts/1    00:00:00 grep pmon

[oracle@rac1-12c ~]$ srvctl status database -d contdb
Instance contdb1 is not running on node rac1-12c
Instance contdb2 is not running on node rac2-12c
[oracle@rac1-12c ~]$

[oracle@rac1-12c ~]$ srvctl start database -d contdb

[oracle@rac1-12c ~]$ srvctl status database -d contdb
Instance contdb1 is running on node rac1-12c
Instance contdb2 is running on node rac2-12c
[oracle@rac1-12c ~]$

[oracle@rac2-12c ~]$ . oraenv
ORACLE_SID = [primdb2] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac2-12c ~]$
[oracle@rac2-12c ~]$ srvctl status database -d contdb
Instance contdb1 is running on node rac1-12c
Instance contdb2 is running on node rac2-12c
[oracle@rac2-12c ~]$

[oracle@rac2-12c ~]$ ps -ef | grep pmon
oracle     547 25673  0 08:56 pts/1    00:00:00 grep pmon
oracle    5059     1  0 07:50 ?        00:00:00 asm_pmon_+ASM2
oracle   21595     1  0 08:20 ?        00:00:00 ora_pmon_contdb2
[oracle@rac2-12c ~]$

Friday, July 6, 2018

Online Operations in Oracle 11g/ 12c (12.1) /12c (12.2) / 18c Databases


Online Operations in Oracle 11g/12c (12.1)/12c (12.2)/18c Databases

Oracle 11g (11.2) & Prior
--------------------------

1. Create index online
2. rebuild index online
3. rebuild index partition online
4. Add Column
5. Add Constraint enable novalidate

Oracle 12c (12.1)
------------------

1. Online move partition
2. Drop index online
3. Set unused column online
4. alter column visible/invisible
5. alter index unusable online
6. alter index visible/invisible 
7. alter index parallel/noparallel

Oracle 12c (12.2)
------------------

1. Alter table move online for non-partitioned tables
2. Alter table from non-partitioned to partitioned online
3. Alter table split partition online
4. Create table for exchange (usable for online partition exchange)
5. Move/merge/split partition maintenance operations can now do data filtering

Oracle 18c
-----------

1. Alter table modify partitioned table to a different partitioning method
2. Alter table merge partition/subpartition online
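
A few illustrative statements for some of the operations above (a sketch using hypothetical
table/index names emp, emp_idx and sales; the partition names are made up as well):

-- 11g and earlier
create index emp_idx on emp (ename) online;
alter index emp_idx rebuild online;

-- 12c (12.1)
alter table sales move partition p1 online;
drop index emp_idx online;

-- 12c (12.2): online move of a non-partitioned table, and online conversion to partitioned
alter table sales move online;
alter table sales modify
  partition by range (sale_date)
  (partition p1 values less than (date '2018-01-01'),
   partition p2 values less than (maxvalue)) online;

-- 18c: online merge of partitions
alter table sales merge partitions p1, p2 into partition p12 online;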

Saturday, June 23, 2018

Oracle has changed the License Documentation so that a Standby CDB can be opened in Read Only mode without an Active Data Guard license


Oracle has changed the License Documentation so that a Standby CDB can be opened in Read Only mode without an
Active Data Guard (ADG) license

Link to the official Active Data Guard license document:

https://docs.oracle.com/en/database/oracle/oracle-database/18/dblic/Licensing-Information.html#GUID-AB56CEE3-955E-4E56-8B44-6075E889C283

Oracle Exadata Storage Server Patch (12.2.1.1.7) encountered bugs - 'CELLSRV' Restarting On Exadata x4-2 and Exadata x6-2


Oracle Exadata Storage Server Patch (12.2.1.1.7) encountered bugs - 'CELLSRV' Restarting

We encountered bugs (the 'CELLSRV' process restarted a couple of times across the Exadata Storage Servers) after applying
the Oracle Exadata Storage Server patch (12.2.1.1.7) on Oracle Exadata x4-2 (Quarter Rack) and Oracle Exadata x6-2 (Half Rack).

Finding INCIDENT Info in Exadata Storage Server 

Log in to the Oracle Exadata Storage Server and use "ADRCI" (screenshots in the original post)



Viewing the alert log file in the Exadata Storage Server (screenshots in the original post)



Raised an SR and confirmed the following BUGs in Oracle Exadata Storage Server (12.2.1.1.7) (screenshots in the original post)



Required ACFS Driver Updates

Install updated ACFS drivers before updating Exadata database servers to 18.1.5.0.0 or 12.2.1.1.7. Note the following:


1. Updated ACFS drivers are needed for the most efficient CVE-2017-5715 mitigation to be in place once 18.1.5.0.0 or 
   12.2.1.1.7 is installed.
2. Updated ACFS drivers are needed whether or not ACFS is in use.
3. In an OVM configuration dom0 may be updated to 18.1.5.0.0 or 12.2.1.1.7 before the ACFS drivers are updated in domU.
4. Updated drivers are delivered via an ACFS patch that is installed in the grid infrastructure home. 
   
GI Home guidelines - The following patches are available at publication time:

1. Grid infrastructure 18.2 and 12.2.0.1 - Patch 27463879
2. Grid infrastructure 12.1.0.2.180417 - Patch 27942938 (merge patch containing fixes for bug 27463879 and bug 23312691)
3. Grid infrastructure 12.1.0.2.180116 - Patch 27942821 (merge patch containing fixes for bug 27463879 and bug 23312691)

How to check the installed ACFS drivers based on the Grid Infrastructure version

1. Log in as the 'grid' user and check the patches applied to the Grid Infrastructure home (screenshots in the original post)

2. Check the "ACFS" driver information on the Oracle Exadata compute nodes (screenshots in the original post)


Note: For more details, please refer to: Oracle Exadata Database Machine Patch Availability Document for CVE-2017-5715,
CVE-2017-5753, and CVE-2017-5754 (Doc ID 2356385.1)

Hope this helps.....

Saturday, May 26, 2018

Oracle Exadata x7-2 Eighth Rack Improvements


Oracle Exadata x7-2 Eighth Rack Improvements

1. Uses 10 TB HDDs - 53 TB with High Redundancy (67 TB with Normal Redundancy)
2. Cores Per CPU/Server - 24/48
3. DRAM Per Server - 384 GB
4. InfiniBand (Internal) - 40 Gb/s
5. Ethernet (External) - 25 GigE
6. All-Flash per server - 51.2 TB
7. Flash Cache per server - 25.6 TB
8. Disk capacity per server - 120 TB
9. SQL offload cores per server - 20
10. Two separate internal disks for OS and Cell Software (M.2 disks)

Friday, May 4, 2018

AWR Snapshots and Reports - PDB level in Oracle 12c R2 (12.2.0.1.0)


AWR Snapshots and Reports - PDB level in Oracle 12c R2 (12.2.0.1.0)


1. AWR Snapshots and reports can be created only at the container database (CDB) level in 
   Oracle 12c R1 (12.1.0.1.0 / 12.1.0.2.0)
2. AWR Snapshots and reports can be created at the container database (CDB) level as well as pluggable database (PDB) level 
   in Oracle 12c R2 (12.2.0.1.0)
3. By default, AWR Snapshots and reports can be generated only at the container database (CDB) level
4. If you want to generate AWR Snapshots and reports at PDB level, set the following parameter 
   "awr_pdb_autoflush_enabled=TRUE" (by default FALSE)

[oracle@exaserver01 ~]$ sqlplus / as sysdba


SQL*Plus: Release 12.2.0.1.0 Production on Thu May 3 14:29:18 2018
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> show con_name

CON_NAME
----------
CDB$ROOT

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORCLPDB                        READ WRITE NO

SQL> alter session set container=ORCLPDB;
Session altered.

SQL> show con_name

CON_NAME
--------
ORCLPDB

SQL> show user
USER is "SYS"

SQL> show parameter awr_pdb_autoflush_enabled

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
awr_pdb_autoflush_enabled            boolean     FALSE

SQL> alter system set awr_pdb_autoflush_enabled=TRUE;
System altered.

SQL> show parameter awr_pdb_autoflush_enabled

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
awr_pdb_autoflush_enabled            boolean     TRUE


SQL> set lines 100
SQL> select * from cdb_hist_wr_control;

      DBID SNAP_INTERVAL
---------- ---------------------------------------------------------------------------
RETENTION                                                                   TOPNSQL        CON_ID
--------------------------------------------------------------------------- ---------- ----------
1826807715 +40150 00:01:00.0
+00008 00:00:00.0                                                           DEFAULT             3

SQL> execute dbms_workload_repository.modify_snapshot_settings(interval => 60);
PL/SQL procedure successfully completed.

SQL> select * from cdb_hist_wr_control;

      DBID SNAP_INTERVAL
---------- ---------------------------------------------------------------------------
RETENTION                                                                   TOPNSQL        CON_ID
--------------------------------------------------------------------------- ---------- ----------
1826807715 +00000 01:00:00.0
+00008 00:00:00.0                                                           DEFAULT             3

SQL> show parameter AWR_SNAPSHOT_TIME_OFFSET

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
awr_snapshot_time_offset             integer     0

SQL> alter system set AWR_SNAPSHOT_TIME_OFFSET=1000000 scope=both;
alter system set AWR_SNAPSHOT_TIME_OFFSET=1000000 scope=both
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database


SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> SHOW CON_NAME

CON_NAME
------------------------------
CDB$ROOT

SQL> show user
USER is "SYS"

SQL> show parameter AWR_SNAPSHOT_TIME_OFFSET

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
awr_snapshot_time_offset             integer     0

SQL> alter system set AWR_SNAPSHOT_TIME_OFFSET=1000000 scope=both;
System altered.

Note: 1000000 is a special value for AWR_SNAPSHOT_TIME_OFFSET; rather than a fixed offset in
seconds, it enables an automatic mode in which the offset is computed from the database name,
so that many databases on the same server do not all take snapshots at the top of the hour.

SQL> show parameter AWR_SNAPSHOT_TIME_OFFSET

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
awr_snapshot_time_offset             integer     1000000

SQL> select * from awr_pdb_snapshot;


AWR Report at PDB level: (screenshots in the original post)

Note: Pluggable Database (PDB) snapshot IDs can differ from Container Database (CDB)
snapshot IDs when you generate PDB snapshots manually.
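
For example, a manual PDB-level snapshot can be taken from inside the PDB (a sketch; snapshot IDs
will differ in your environment):

SQL> alter session set container = ORCLPDB;
SQL> exec dbms_workload_repository.create_snapshot;
SQL> select snap_id, end_interval_time from awr_pdb_snapshot order by snap_id;

The AWR report itself can then be generated with @?/rdbms/admin/awrrpt.sql, choosing the PDB
(AWR_PDB) as the AWR location.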

Friday, March 30, 2018

Virtual IP Addresses In Oracle Exadata x6/x7


Virtual IP Addresses In Oracle Exadata

Note: If you are not able to find the VIPs in /etc/hosts, the commands below show how you can find them in Oracle Exadata :)


$ srvctl status vip -n dbserver01
VIP dbserver01-vip.abc.com is enabled
VIP dbserver01-vip.abc.com is running on node: dbserver01
[ oracle@dbserver01.abc.com : Thu Mar 29, 10:06 AM : /home/oracle ]

$ srvctl status vip -n dbserver02
VIP dbserver02-vip.abc.com is enabled
VIP dbserver02-vip.abc.com is running on node: dbserver02
[ oracle@dbserver01.abc.com : Thu Mar 29, 10:06 AM : /home/oracle ]

$ srvctl status vip -n dbserver03
VIP dbserver03-vip.abc.com is enabled
VIP dbserver03-vip.abc.com is running on node: dbserver03
[ oracle@dbserver01.abc.com : Thu Mar 29, 10:06 AM : /home/oracle ]

$ srvctl status vip -n dbserver04
VIP dbserver04-vip.abc.com is enabled
VIP dbserver04-vip.abc.com is running on node: dbserver04
[ oracle@dbserver01.abc.com : Thu Mar 29, 10:06 AM : /home/oracle ]
$

[ oracle@dbserver01.abc.com : Thu Mar 29, 10:06 AM : /home/oracle ]
$ nslookup dbserver01-vip
Server:         17.2.3.20
Address:        17.2.3.20#53
Name:   dbserver01-vip.abc.com
Address: 10.5.3.77

[ oracle@dbserver01.abc.com : Thu Mar 29, 10:06 AM : /home/oracle ]
$ nslookup dbserver02-vip
Server:         17.2.3.20
Address:        17.2.3.20#53
Name:   dbserver02-vip.abc.com
Address: 10.5.3.78

[ oracle@dbserver01.abc.com : Thu Mar 29, 10:06 AM : /home/oracle ]
$ nslookup dbserver03-vip
Server:         17.2.3.20
Address:        17.2.3.20#53
Name:   dbserver03-vip.abc.com
Address: 10.5.3.79

[ oracle@dbserver01.abc.com : Thu Mar 29, 10:06 AM : /home/oracle ]
$ nslookup dbserver04-vip
Server:         17.2.3.20
Address:        17.2.3.20#53
Name:   dbserver04-vip.abc.com
Address: 10.5.3.80
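
You can also print the VIP configuration (including the IP address) directly with srvctl, or loop
over all nodes in one go (a small sketch):

$ srvctl config vip -n dbserver01
$ for n in dbserver01 dbserver02 dbserver03 dbserver04; do srvctl status vip -n $n; done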

RAC Service Does Not Fail Over during Instance Shutdown Using srvctl stop instance with the Abort and Force Options in 11.2.0.4/12.1.0.2/12.2.0.1


RAC Service Does Not Fail Over during Instance Shutdown Using srvctl stop instance with the Abort and
Force Options in 11.2.0.4/12.1.0.2/12.2.0.1

Check the GRID version

[grid@dbserver01 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.2.0.1.0]
[grid@dbserver01 ~]$

[grid@dbserver01 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [dbserver01] is [12.2.0.1.0]
[grid@dbserver01 ~]$

$ su - oracle
[oracle@dbserver01 ~]$ ps -ef | grep pmon
oracle    26358      1  0 Mar29 ?        00:00:04 ora_pmon_dbm011
grid      64858      1  0 Feb28 ?        00:01:54 asm_pmon_+ASM1
oracle   289299 289250  0 12:32 pts/1    00:00:00 grep pmon
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ . oraenv
ORACLE_SID = [oracle] ? dbm011
The Oracle base has been set to /u01/app/oracle
[oracle@dbserver01 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Fri Mar 30 12:32:34 2018
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Creating a service with both instances as PREFERRED.

[oracle@dbserver01 ~]$ srvctl add service -d dbm01 -s serv5 -r dbm011,dbm012 -tafpolicy BASIC -failovertype SELECT 
-failovermethod BASIC -failoverretry 2 -failoverdelay 5

[oracle@dbserver01 ~]$ srvctl status service -d dbm01
Service serv5 is not running.
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl start service -d dbm01 -s serv5
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl status service -d dbm01 -s serv5
Service serv5 is running on instance(s) dbm011,dbm012
[oracle@dbserver01 ~]$
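
For completeness, a client would connect to this service through an entry like the following
(a hypothetical tnsnames.ora entry; the SCAN name dbm01-scan.abc.com is made up). Since TAF is
defined on the service itself (server-side), no FAILOVER_MODE clause is needed on the client:

SERV5 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbm01-scan.abc.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = serv5)
    )
  )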

[oracle@dbserver01 ~]$ srvctl config service -d dbm01 -s serv5
Service name: serv5
Server pool:
Cardinality: 2
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 2
TAF failover delay: 5
Failover restore: NONE
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Pluggable database name:
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Drain timeout:
Stop option:
Session State Consistency: DYNAMIC
GSM Flags: 0
Service is enabled
Preferred instances: dbm011,dbm012
Available instances:
CSS critical: no
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl stop instance -d dbm01 -i dbm011 -f
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl status service -d dbm01
Service serv5 is running on instance(s) dbm012
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl config service -d dbm01 -s serv5
Service name: serv5
Server pool:
Cardinality: 2
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 2
TAF failover delay: 5
Failover restore: NONE
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Pluggable database name:
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Drain timeout:
Stop option:
Session State Consistency: DYNAMIC
GSM Flags: 0
Service is enabled
Preferred instances: dbm011,dbm012
Available instances:
CSS critical: no

[oracle@dbserver01 ~]$ srvctl start instance -d dbm01 -i dbm011

[oracle@dbserver01 ~]$ srvctl status service -d dbm01
Service serv5 is running on instance(s) dbm011,dbm012
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl status service -d dbm01 -s serv5
Service serv5 is running on instance(s) dbm011,dbm012
[oracle@dbserver01 ~]$

Creating a service with one PREFERRED instance and one AVAILABLE instance.

[oracle@dbserver01 ~]$ srvctl add service -d dbm01 -s serv6 -r dbm011 -a dbm012 -tafpolicy BASIC -failovertype SELECT 
-failovermethod BASIC -failoverretry 2 -failoverdelay 5
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl status service -d dbm01 -s serv6
Service serv6 is not running.
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl start service -d dbm01 -s serv6
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl status service -d dbm01 -s serv6
Service serv6 is running on instance(s) dbm011
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl status service -d dbm01 -s serv6
Service serv6 is running on instance(s) dbm011

[oracle@dbserver01 ~]$ srvctl config service -d dbm01 -s serv6
Service name: serv6
Server pool:
Cardinality: 1
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 2
TAF failover delay: 5
Failover restore: NONE
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Pluggable database name:
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Drain timeout:
Stop option:
Session State Consistency: DYNAMIC
GSM Flags: 0
Service is enabled
Preferred instances: dbm011
Available instances: dbm012
CSS critical: no
[oracle@dbserver01 ~]$

[oracle@dbserver01 ~]$ srvctl stop instance -d dbm01 -i dbm011 -f

[oracle@dbserver01 ~]$ srvctl status service -d dbm01 -s serv6
Service serv6 is not running.

[oracle@dbserver01 ~]$ srvctl start instance -d dbm01 -i dbm011

[oracle@dbserver01 ~]$ srvctl status service -d dbm01 -s serv6
Service serv6 is running on instance(s) dbm011

Notice that serv6 did not fail over to the AVAILABLE instance dbm012 when the PREFERRED instance
dbm011 was stopped with the force option; the service only came back once dbm011 was restarted.

Please refer to MOS:

1. Services Are Not Failing Over To Node2 Automatically When First Node1 Is Down And Vice Versa (Doc ID 2177143.1)
2. Bug 22472054 is reported as fixed in the GI 12.2 version
3. Apply the patch for bug 22472054

Sunday, February 25, 2018

Moving a Pluggable Database (PDB) between different Container Databases (CDBs) on Different Servers with ASM Storage


Moving a Pluggable Database (PDB) between different Container Databases (CDBs) with ASM Storage



==================== Source Cluster: RAC1 and RAC2 ====================

Source Cluster: RAC1 and RAC2
Source Container Database: CONTDB
Source Pluggable Database: CONPDB3


Target Cluster: RAC3 and RAC4
Target Container Database: CONTDB
Target Pluggable Database: CONPDB3



[oracle@rac1-12c ~]$ . oraenv
ORACLE_SID = [primdb1] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac1-12c ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 25 06:10:56 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
SQL> connect sys/oracle@contdb as sysdba
Connected.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        READ WRITE NO
SQL> alter session set container=CONPDB3;
Session altered.

SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/system.335.968994861
+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/sysaux.334.968994861
+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/users.332.968994889

SQL> connect sys/oracle@contdb as sysdba
Connected.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        READ WRITE NO


SQL> alter pluggable database conpdb3 close immediate instances=all;
Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        MOUNTED
SQL> select name ,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
CONPDB1                        READ WRITE
CONPDB2                        READ WRITE
CONPDB3                        MOUNTED

SQL> alter pluggable database conpdb3 unplug into '/u01/app/oracle/CONPDB3_NEW.xml';
Pluggable database altered.

SQL> exit

[oracle@rac1-12c ~]$ . oraenv
ORACLE_SID = [contdb] ? contdb1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac1-12c ~]$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Sun Feb 25 06:16:04 2018
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database: CONTDB (DBID=1349816764)

RMAN> BACKUP FOR TRANSPORT AS COMPRESSED BACKUPSET FILESPERSET=1024 
PLUGGABLE DATABASE 'CONPDB3' FORMAT '/u01/app/oracle/CONPDB3.dfb';

Starting backup at 25-FEB-18
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=82 instance=contdb1 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00019 name=+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/sysaux.334.968994861
input datafile file number=00018 name=+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/system.335.968994861
input datafile file number=00020 name=+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/users.332.968994889
channel ORA_DISK_1: starting piece 1 at 25-FEB-18
channel ORA_DISK_1: finished piece 1 at 25-FEB-18
piece handle=/u01/app/oracle/CONPDB3.dfb tag=TAG20180225T061635 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
Finished backup at 25-FEB-18

RMAN> exit


[oracle@rac1-12c ~]$ cd /u01/app/oracle/
[oracle@rac1-12c oracle]$ ls -lrth
total 148M
drwxrwxr-x.  3 oracle oinstall 4.0K Jun 23  2017 product
drwxr-xr-x   4 root   root     4.0K Jun 23  2017 tfa
drwxr-xr-x   3 oracle oinstall 4.0K Jun 23  2017 rac1-12c
drwxr-xr-x   3 oracle oinstall 4.0K Jun 24  2017 log
drwxr-xr-x   2 oracle oinstall 4.0K Jun 24  2017 checkpoints
drwxrwxr-x   5 oracle oinstall 4.0K Jun 24  2017 crsdata
drwxr-x---   7 oracle oinstall 4.0K Jun 29  2017 cfgtoollogs
drwxrwxr-x  19 oracle oinstall 4.0K Oct 25 22:48 diag
drwxr-x---   8 oracle oinstall 4.0K Nov  8 04:53 audit
drwxr-x---   7 oracle oinstall 4.0K Feb 25 05:13 admin
-rw-r--r--   1 oracle vboxsf   5.4K Feb 25 05:31 conpdb3.xml
-rw-r--r--   1 oracle vboxsf   5.4K Feb 25 06:13 CONPDB3_NEW.xml
-rw-r-----   1 oracle vboxsf   148M Feb 25 06:16 CONPDB3.dfb

[oracle@rac1-12c oracle]$ scp CONPDB3_NEW.xml oracle@192.168.2.201:/u01/app/oracle/
oracle@192.168.2.201's password:
CONPDB3_NEW.xml                                   100% 5525     5.4KB/s   00:00
[oracle@rac1-12c oracle]$ scp CONPDB3.dfb oracle@192.168.2.201:/u01/app/oracle/
oracle@192.168.2.201's password:
CONPDB3.dfb                                       100%  148MB  73.9MB/s   00:02
[oracle@rac1-12c oracle]$ cd


[oracle@rac1-12c ~]$ . oraenv
ORACLE_SID = [contdb1] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac1-12c ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 25 09:43:06 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> connect sys/oracle@contdb as sysdba
Connected.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        MOUNTED

SQL> col PATCH_ID format 999999999
SQL> col PATCH_UID format 99999999
SQL> col VERSION format a10
SQL> col STATUS format a15
SQL> col DESCRIPTION format a80
SQL> set lines 200 pages 1000
SQL> select PATCH_ID, PATCH_UID, VERSION, STATUS, DESCRIPTION 
from DBA_REGISTRY_SQLPATCH order by BUNDLE_SERIES;
no rows selected

SQL> !
[oracle@rac1-12c ~]$ cd /u01/app/oracle/product/12.1.0.2/db_1/OPatch/
[oracle@rac1-12c OPatch]$ ./opatch lspatches
There are no Interim patches installed in this Oracle Home "/u01/app/oracle/product/12.1.0.2/db_1".

OPatch succeeded.
[oracle@rac1-12c OPatch]$

==================== Target Cluster: RAC3 and RAC4 ====================

[oracle@rac3-12c ~]$ . oraenv
ORACLE_SID = [orcldb1] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 25 06:22:16 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> connect sys/oracle@contdb as sysdba
Connected.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
SQL> exit


[oracle@rac3-12c ~]$ . oraenv
ORACLE_SID = [+ASM1] ? contdb1
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/12.1.0.2/db_1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c ~]$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Sun Feb 25 06:51:58 2018
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database: CONTDB (DBID=1349963590)

RMAN> run
{
set command id  to 'CONPDB3';
RESTORE FOREIGN DATAFILE  18 TO NEW, 19 TO NEW, 
20 TO NEW FROM BACKUPSET '/u01/app/oracle/CONPDB3.dfb';
};
2> 3> 4> 5>
executing command: SET COMMAND ID

Starting restore at 25-FEB-18
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=97 instance=contdb1 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file 00018
channel ORA_DISK_1: restoring foreign file 00019
channel ORA_DISK_1: restoring foreign file 00020
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/CONPDB3.dfb
channel ORA_DISK_1: restoring foreign file 18 to +DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/system.350.969000723
channel ORA_DISK_1: restoring foreign file 19 to +DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/sysaux.349.969000723
channel ORA_DISK_1: restoring foreign file 20 to +DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/users.351.969000723
channel ORA_DISK_1: foreign piece handle=/u01/app/oracle/CONPDB3.dfb
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:26
Finished restore at 25-FEB-18
RMAN> exit

[oracle@rac3-12c ~]$ . oraenv
ORACLE_SID = [contdb1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle

ASMCMD> cd DATA
ASMCMD> cd CONTDB
ASMCMD> ls
5D900B1A1E953788E053C902A8C0836A/
5D902973DAA94A24E053C902A8C0F831/
5D902BDD14014BFCE053C902A8C0636A/
65FEDAA33CD86078E0536502A8C07E3C/
ARCHIVELOG/
AUTOBACKUP/
CONTROLFILE/
DATAFILE/
FD9AC20F64D244D7E043B6A9E80A2F2F/
ONLINELOG/
PARAMETERFILE/
PASSWORD/
TEMPFILE/
ASMCMD> cd 65FEDAA33CD86078E0536502A8C07E3C/

ASMCMD> ls
DATAFILE/
ASMCMD> cd datafile
ASMCMD> ls
SYSAUX.349.969000723
SYSTEM.350.969000723
USERS.351.969000723
ASMCMD> ls -l
Type      Redund  Striped  Time             Sys  Name
DATAFILE  UNPROT  COARSE   FEB 25 06:00:00  Y    SYSAUX.349.969000723
DATAFILE  UNPROT  COARSE   FEB 25 06:00:00  Y    SYSTEM.350.969000723
DATAFILE  UNPROT  COARSE   FEB 25 06:00:00  Y    USERS.351.969000723
ASMCMD> exit

[oracle@rac3-12c ~]$ . oraenv
ORACLE_SID = [+ASM1] ? contdb
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 25 06:55:00 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> connect sys/oracle@contdb as sysdba
Connected.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

[oracle@rac3-12c ~]$ . oraenv
ORACLE_SID = [contdb] ? contdb1
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/12.1.0.2/db_1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c ~]$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Sun Feb 25 06:58:45 2018
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database: CONTDB (DBID=1349963590)

RMAN> change datafilecopy '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/system.350.969000723',
2> '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/sysaux.349.969000723',
3> '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/users.351.969000723' uncatalog;

using target database control file instead of recovery catalog
uncataloged datafile copy
datafile copy file name=+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/
system.350.969000723 RECID=7 STAMP=969000740
uncataloged datafile copy
datafile copy file name=+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/
sysaux.349.969000723 RECID=8 STAMP=969000748
uncataloged datafile copy
datafile copy file name=+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/
users.351.969000723 RECID=6 STAMP=969000724
Uncataloged 3 objects

RMAN> exit
Recovery Manager complete.

[oracle@rac3-12c ~]$ cd /u01/app/oracle/
[oracle@rac3-12c oracle]$ cat CONPDB3_NEW.xml | grep path
  +DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/system.335.968994861
  +DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/sysaux.334.968994861
  +DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/TEMPFILE/temp.333.968997067
  +DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/users.332.968994889
[oracle@rac3-12c oracle]$

ASMCMD> pwd
+data/contdb/65FEDAA33CD86078E0536502A8C07E3C/datafile

ASMCMD> exit
[oracle@rac3-12c oracle]$ cd

[oracle@rac3-12c ~]$ . oraenv
ORACLE_SID = [+ASM1] ? contdb1
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/12.1.0.2/db_1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 25 07:04:54 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> connect sys/oracle@contdb as sysdba
Connected.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO

SQL> create pluggable database CONPDB3 as clone  using '/u01/app/oracle/CONPDB3_NEW.xml'
  2  source_file_name_convert = (
  3  '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/system.335.968994861',
     '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/system.350.969000723',
  4  '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/sysaux.334.968994861',
     '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/sysaux.349.969000723',
  5  '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/users.332.968994889',
     '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/users.351.969000723',
  6  '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/TEMPFILE/temp.333.968997067',
     '+DATA/CONTDB/65FEDAA33CD86078E0536502A8C07E3C/DATAFILE/conpdb3_temp01.dbf')
  7  file_name_convert=NONE
  8  NOCOPY;

Pluggable database created.

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
CONPDB1                        READ WRITE
CONPDB2                        READ WRITE
CONPDB3                        MOUNTED

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        MOUNTED
SQL>
SQL> alter pluggable database CONPDB3 open instances=all;

Warning: PDB altered with errors.

[oracle@rac3-12c ~]$ . oraenv
ORACLE_SID = [contdb] ?
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 25 10:54:01 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> connect sys/oracle@contdb as sysdba
Connected.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        READ WRITE YES

SQL> select cause, type, message from PDB_PLUG_IN_VIOLATIONS where name = 'CONPDB3';

CAUSE                                                            TYPE
---------------------------------------------------------------- ---------
MESSAGE
---
SQL Patch                                                        ERROR
PSU bundle patch 171017 (DATABASE PATCH SET UPDATE 12.1.0.2.171017): Installed in the CDB but not in the PDB.

[oracle@rac3-12c ~]$ . oraenv
ORACLE_SID = [contdb] ? contdb1
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/12.1.0.2/db_1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c ~]$ cd /u01/app/oracle/product/12.1.0.2/db_1/OPatch/

[oracle@rac3-12c OPatch]$ ./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 Production on Sun Feb 25 10:56:36 2018
Copyright (c) 2012, 2016, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/
sqlpatch_11332_2018_02_25_10_56_36/sqlpatch_invocation.log

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series PSU:
  Not installed in the binary registry and ID 171017 in PDB CDB$ROOT, ID 171017 in PDB PDB$SEED, 
  ID 171017 in PDB CONPDB1, ID 171017 in PDB CONPDB2

Adding patches to installation queue and performing prereq checks...
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED CONPDB1 CONPDB2
    The following patches will be rolled back:
      26713565 (DATABASE PATCH SET UPDATE 12.1.0.2.171017)
    Nothing to apply
  For the following PDBs: CONPDB3
    Nothing to roll back
    Nothing to apply

Installing patches...
Patch installation complete.  Total patches installed: 4

Validating logfiles...
Patch 26713565 rollback (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/26713565/21602269/
26713565_rollback_CONTDB_CDBROOT_2018Feb25_10_57_32.log (no errors)
Patch 26713565 rollback (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/26713565/21602269/
26713565_rollback_CONTDB_PDBSEED_2018Feb25_10_58_23.log (no errors)
Patch 26713565 rollback (pdb CONPDB1): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/26713565/21602269/
26713565_rollback_CONTDB_CONPDB1_2018Feb25_10_58_23.log (no errors)
Patch 26713565 rollback (pdb CONPDB2): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/26713565/21602269/
26713565_rollback_CONTDB_CONPDB2_2018Feb25_10_58_23.log (no errors)
SQL Patching tool complete on Sun Feb 25 10:59:37 2018
[oracle@rac3-12c OPatch]$

[oracle@rac3-12c OPatch]$ . oraenv
ORACLE_SID = [contdb] ? contdb1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac3-12c OPatch]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 25 11:01:09 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        READ WRITE YES

SQL> select inst_id, name, restricted from gv$containers;

   INST_ID NAME                           RES
---------- ------------------------------ ---
         1 CDB$ROOT                       NO
         1 PDB$SEED                       NO
         1 CONPDB1                        NO
         1 CONPDB2                        NO
         1 CONPDB3                        YES
         2 CDB$ROOT                       NO
         2 PDB$SEED                       NO
         2 CONPDB1                        NO
         2 CONPDB2                        NO
         2 CONPDB3                        YES

10 rows selected.
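
CONPDB3 is open (RESTRICTED) on both instances, so the bounce below uses INSTANCES=ALL to close and reopen it cluster-wide in one statement. The clause also accepts an explicit instance list when only a subset should be targeted; a hedged sketch:

SQL> alter pluggable database CONPDB3 close immediate instances=('contdb1');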

SQL> alter pluggable database CONPDB3 close immediate instances=ALL;
Pluggable database altered.

SQL> select inst_id, name, restricted from gv$containers;

   INST_ID NAME                           RES
---------- ------------------------------ ---
         1 CDB$ROOT                       NO
         1 PDB$SEED                       NO
         1 CONPDB1                        NO
         1 CONPDB2                        NO
         1 CONPDB3
         2 CDB$ROOT                       NO
         2 PDB$SEED                       NO
         2 CONPDB1                        NO
         2 CONPDB2                        NO
         2 CONPDB3

10 rows selected.

SQL> alter pluggable database CONPDB3 open read write instances=ALL;
Pluggable database altered.

SQL> select inst_id, name, restricted from gv$containers;

   INST_ID NAME                           RES
---------- ------------------------------ ---
         1 CDB$ROOT                       NO
         1 PDB$SEED                       NO
         1 CONPDB1                        NO
         1 CONPDB2                        NO
         1 CONPDB3                        NO
         2 CDB$ROOT                       NO
         2 PDB$SEED                       NO
         2 CONPDB1                        NO
         2 CONPDB2                        NO
         2 CONPDB3                        NO

10 rows selected.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CONPDB1                        READ WRITE NO
         4 CONPDB2                        READ WRITE NO
         5 CONPDB3                        READ WRITE NO
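
CONPDB3 is now open unrestricted on both instances. On 12.1.0.2 the open state can also be preserved across CDB restarts with SAVE STATE; a minimal sketch, assuming the INSTANCES clause is supported at this patch level:

SQL> alter pluggable database CONPDB3 save state instances=ALL;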

SQL> connect sys/oracle@192.168.2.201:1521/conpdb3 as sysdba
Connected.

SQL> show con_name

CON_NAME
------------------------------
CONPDB3
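
The EZConnect string above resolves because every PDB automatically registers a default service named after the PDB with the listener. This can be confirmed from the CDB; a sketch, assuming V$SERVICES exposes CON_ID in this release:

SQL> select name, con_id from v$services;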

SQL> connect user1/oracle@192.168.2.201:1521/conpdb3
Connected.

SQL> select * from tab1;

        NO
----------
         1

SQL>