
Wednesday, 22 September 2021

Building Containerized Microservices in AWS with a Multi-Region High Availability Architecture

In this article, we will look at designing and building AWS-based multi-region active-passive and active-active architectures, using microservices deployed in Docker containers on EKS/ECS and on the AWS Fargate offering.

Requirements and Assumptions

To facilitate the above, we will assume some sample requirements, which call for a multi-region active-active deployment.

  • Tier 1 Web Application: 99.99% availability. Annual Downtime: 52 mins.
  • RPO (Recovery Point Objective): Near Real-time.
  • RTO (Recovery Time Objective): n/a (Always active).
  • Application must have Multi-Region deployments to serve global user traffic.
  • User traffic must be served from their own, or nearest AWS region.
  • Components of the app:
  • Rich Internet Application (RIA) built using Angular 7+. The Angular app will access microservices deployed in Docker containers.
  • ElasticSearch will be used as the search engine, and will be exposed to the app via a microservice.
  • Microservices will be developed using Java (Spring Boot), and deployed in Docker containers.
  • There is no advice on infrastructure sizing for any of the components (as it will depend on the non-functional requirements of a specific application), and it is hence ignored in this article.

What is an Active-Active Architecture?

  • An active/active system is a network of independent processing nodes, each having access to a (..) replicated database such that all nodes can participate in a common application [4].
  • For all practical purposes, we will have two or more 'sets' of the application running in two or more geographically dispersed AWS regions. Each region will have all the required components and data, so that if one region fails, another can automatically take over the user traffic from the failed region without downtime for the globally dispersed app users.

Active-Active vs Active-Passive Architectures

  • An active-active architecture requires that we have all components running in both regions at the same time.
  • An active-passive architecture means we have all components running only in the 'active' region/data centre, but have mechanisms in place to automatically (or semi-automatically/manually) fail over to the passive region and install/start the components/application there as quickly as possible.
  • From an AWS perspective, this means that you have already configured a VPC (security groups, etc.) in the passive region, have some sort of data backups in that region, and can quickly deploy and start your components there when failover occurs. There are a whole host of other issues to consider (such as the availability of data snapshots in the passive region) so that you can recover data, re-create the database and other necessary app data there, and make it active; those are beyond the scope of this article.

Rationale for Multi-Region Active-Active Architecture

Below are some of the key reasons why you might consider building a multi-region active-active architecture:

  • Improved latency for end users – The closer your backend is to your end users, the better their experience will be.
  • Disaster recovery – Mitigate against a whole region failure.
  • Business Requirements – Tier 1 apps with 99.99%+ availability non-functional requirements.

Key AWS Services

Below are some of the key AWS services that can be leveraged to achieve some of the above requirements:

  • Route 53 – DNS service, with traffic flow management across AWS regions
  • CloudFront – Global Content Delivery Network (CDN)
  • AWS Aurora Global Database – Designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. Cross-region: Recovery Point Objective (RPO) of 1 second and Recovery Time Objective (RTO) of less than 1 minute in case of a region failure (see the provisioning sketch after this list).
  • AWS ElasticSearch (ES) – Managed service to deploy, secure, and operate Elasticsearch at scale with zero downtime
  • AWS Elastic Container Service (ECS) – Container orchestration service that supports Docker containers and allows you to easily run and scale containerised applications on AWS. With ECS, you can use either EC2 instances or AWS Fargate to deploy your containers.
  • AWS Elastic Container Registry (ECR) – Managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images
  • AWS Fargate – Compute engine for Amazon ECS that allows you to run containers without having to manage servers or clusters
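
As a rough sketch of provisioning the Aurora Global Database mentioned above (the identifiers, engine choice and password below are hypothetical placeholders, not values from this article):

# Create the global database wrapper (names are placeholders)
aws rds create-global-cluster \
    --global-cluster-identifier multi-global-db \
    --engine aurora-mysql

# Create the primary regional cluster and attach it to the global cluster
aws rds create-db-cluster \
    --region eu-west-2 \
    --db-cluster-identifier multi-eu-cluster \
    --engine aurora-mysql \
    --master-username admin \
    --master-user-password '<placeholder>' \
    --global-cluster-identifier multi-global-db

# A secondary region is attached the same way (minus the master credentials),
# by running create-db-cluster in that region against the same global cluster.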

Additionally, a global solution will benefit from the following services (not shown in the deployment diagram):

  • AWS Shield – Managed Distributed Denial of Service (DDoS) protection service
  • AWS CloudTrail – Enables governance, compliance, operational auditing, and risk auditing of your AWS account
  • Elasticsearch, Logstash, Kibana (ELK) Stack – ELK stack to collect logs from the different services in one central place for issue analysis
  • AWS ElastiCache – Managed, Redis or Memcached-compatible in-memory data store. You can use it to offload traffic from your RDS Aurora DB or ElasticSearch cluster.

AWS Route 53 Routing Policies

AWS Route 53 provides several routing policies, which can be leveraged to route traffic to a certain region based on different factors. Some of the key policies relevant for a multi-region active-active architecture are listed below [1]:

  • Failover routing policy – Use when you want to configure active-passive failover. For example, if the US region fails, route US traffic to the EU region (the failover region for the US).
  • Geolocation routing policy – Use when you want to route traffic based on the location of your users.
  • Geoproximity routing policy – Use when you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another.
  • Latency routing policy – Use when you have resources in multiple AWS Regions and you want to route traffic to the region that provides the best latency.
  • Multivalue answer routing policy – Use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at random.
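
As a minimal sketch, a latency-based alias record can be created with the AWS CLI as below. The hosted zone IDs, domain name and ALB DNS name are placeholders; you would create one such record per region, each with its own SetIdentifier and Region:

# Latency-based alias record for the EU region (all IDs/names are placeholders)
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": "api.example.com",
          "Type": "A",
          "SetIdentifier": "eu-west-2-api",
          "Region": "eu-west-2",
          "AliasTarget": {
            "HostedZoneId": "<ALB-hosted-zone-id>",
            "DNSName": "multi-alb-eu.eu-west-2.elb.amazonaws.com",
            "EvaluateTargetHealth": true
          }
        }
      }]
    }'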

Multi-Region Active-Active Overall Architecture

  • You can route a user's API requests based on their geolocation (route EU users to the EU region).
  • Define failover regions for each region (if the EU region fails, route traffic to the US region so that users can still access the app).

Detailed Deployment Architecture for a Region

Each AWS region will have a deployment architecture similar to the below.

  • Bastion Host – There will be a bastion host in each region. Developers/admins can SSH to this host and, from there, access RDS, the microservices and ElasticSearch. There is a Network Load Balancer (NLB) in front of the bastion hosts, as you can have multiple low-spec bastion hosts across two Availability Zones (AZs). If a bastion host instance is replaced by AWS, it would not affect your developers; they can still SSH using the same NLB DNS address.
  • CloudFront – Global users will access the Angular app via the CloudFront CDN for low-latency downloads.
  • AWS S3 – The Angular-based app will be deployed in S3.
  • Route 53 – All user traffic will go via Route 53, which will make routing decisions about which region a user's requests should be served from.
  • AWS ALB (Application Load Balancers) – Each region will have its own ALB, which will route traffic to the different microservices running in the Docker containers. For example, all product-related web service calls will go to the 'product' microservice container. To achieve this, you define a Target Group for each microservice, and map different request paths to a target group. For example, the '/product/' path could be mapped to a Target Group that contains all your containers running the Product microservice (see the CLI sketch after this list).
  • ECS Fargate – It will run all your containers hosting the different microservices.
  • AWS ES – It will contain, for example, product-related indexed data, and will be used by the 'search' microservice.
  • RDS Aurora DB – It will contain all your product and user data. Each region will have its own primary and standby replica. If the primary goes down in a region, AWS will promote the standby to primary. If the whole region containing the primary goes down, you will have to promote another region's RDS Aurora cluster as the primary for the Aurora Global DB.
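
A minimal sketch of the path-based routing described in the ALB bullet above, assuming an existing VPC, HTTPS listener and Fargate service; every ID and ARN below is a placeholder:

# Target group for the 'product' microservice (target-type ip suits Fargate tasks)
aws elbv2 create-target-group \
    --name product-tg \
    --protocol HTTP --port 8080 \
    --vpc-id vpc-0abc123 \
    --target-type ip

# Listener rule mapping the /product/ path to that target group
aws elbv2 create-rule \
    --listener-arn <listener-arn> \
    --priority 10 \
    --conditions Field=path-pattern,Values='/product/*' \
    --actions Type=forward,TargetGroupArn=<product-tg-arn>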

Security Architecture

In each Availability Zone (AZ), there is one public subnet and two private subnets to segregate the components. The load balancers are in the public subnet, while the rest (ECS containers, ElasticSearch, DB) are in the private subnets.

Security Groups (SG)

Traffic flow between the different components is controlled via Security Groups. A security group allows traffic in on a certain port only from a specific source, such as another security group.

  • Application Load Balancer (ALB) SG – Allow traffic on port 443 from public internet
  • Network Load Balancer (NLB) SG – allow traffic on port 22 from your office IP range.
  • Bastion SG – allow traffic on port 22 only from the NLB SG.
  • ElasticSearch SG – allow traffic on port 443 (the AWS ES VPC endpoint serves HTTPS; note the SSH tunnel below also forwards to 443) from the Bastion SG and the Application Load Balancer (ALB) SG.
  • ECS Fargate SG – allow traffic on port 8080 (the Spring Boot listen port) from the ALB SG.
  • Database SG – allow traffic on port 3306 from the Bastion SG and the ECS Fargate SG.
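
As an illustration, the ECS Fargate SG rule above could be created with the AWS CLI as follows (both group IDs are placeholders):

# Allow the ALB SG (source) to reach the Fargate tasks on port 8080
aws ec2 authorize-security-group-ingress \
    --group-id sg-0fargate123 \
    --protocol tcp --port 8080 \
    --source-group sg-0alb456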

Access to ElasticSearch from a local machine

To access AWS ElasticSearch Service (ES) from a local machine, developers can create an SSH tunnel via the bastion host. That way, they can access Kibana and ElasticSearch REST API locally in their browser to diagnose any problems with the data contained in the search cluster.

For example, add the below to your ~/.ssh/config file to access Kibana and the ElasticSearch REST API locally on your machine.

Note: Please remember to replace the keypair file name, the bastion host NLB public DNS name, and the private DNS name of your ElasticSearch cluster.

# Elasticsearch Tunnel
# https://www.jeremydaly.com/access-aws-vpc-based-elasticsearch-cluster-locally/
Host estunnel
HostName multi-bastion-lb-abc.elb.eu-west-2.amazonaws.com
User ec2-user
IdentitiesOnly yes
IdentityFile ~/.ssh/multi-keypair.pem
LocalForward 9200 vpc-multi-es-domain-abc.eu-west-2.es.amazonaws.com:443

After this you can access Kibana locally in the browser:

https://localhost:9200/_plugin/kibana/app/kibana#/management/kibana/index?_g=()

Then test access to the ES API via curl, or use Postman to test different API requests.

$ curl -k https://localhost:9200/

The same kind of access can be configured to reach RDS Aurora locally (for dev environments), as sketched below.
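
For example, one extra LocalForward line under the same estunnel entry would map a local port to the Aurora endpoint (the cluster endpoint below is a placeholder; 3306 matches the Database SG above):

# Add under the estunnel entry in ~/.ssh/config
LocalForward 3306 multi-aurora.cluster-abc.eu-west-2.rds.amazonaws.com:3306

MySQL-compatible clients can then connect to localhost:3306 while the tunnel is up.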

AWS Route 53 Routing Policy

AWS provides a Traffic Flow visual editor that can be used to configure the routing policies mentioned above in this article.

  • For example, suppose you are using three regions: US, EU and APAC.
  • You may direct all US users' HTTP requests to your US region VPC using a geolocation-based routing policy.
  • To have automatic failover between regions, you can, for example, define EU as the failover region for US. This way, if the whole US region goes down, Route 53 will route traffic destined for the US region to your failover EU region's VPC (see the sketch after this list).
  • You can repeat the same for the EU and APAC regions.
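
A hedged sketch of that geolocation-plus-fallback setup with the AWS CLI; the zone ID, domain, health check ID and ALB DNS names are placeholders. Route 53 skips a geolocation record whose health check is failing, so queries then fall back to the default ('*') record, here pointing at the EU load balancer:

# North America record guarded by a health check, plus a default record
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0EXAMPLE \
    --change-batch '{
      "Changes": [
        {"Action": "CREATE", "ResourceRecordSet": {
           "Name": "api.example.com", "Type": "CNAME", "TTL": 60,
           "SetIdentifier": "us-geo",
           "GeoLocation": {"ContinentCode": "NA"},
           "HealthCheckId": "<us-health-check-id>",
           "ResourceRecords": [{"Value": "multi-alb-us.us-east-1.elb.amazonaws.com"}]}},
        {"Action": "CREATE", "ResourceRecordSet": {
           "Name": "api.example.com", "Type": "CNAME", "TTL": 60,
           "SetIdentifier": "default-geo",
           "GeoLocation": {"CountryCode": "*"},
           "ResourceRecords": [{"Value": "multi-alb-eu.eu-west-2.elb.amazonaws.com"}]}}
      ]}'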

Example Routing Policy

Multi-Region Active-Active Architecture Costs

  • Running a redundant set of components across multiple regions is by no means a cheap option. It is costly!
  • That's why the majority of applications are either deployed in a single region across multiple AZs, or use an active-passive architecture where the active region has all the required components while the passive region has just a bare-bones VPC and some data backups.
  • So, unless you absolutely need to meet active-active multi-region requirements such as those outlined at the beginning of this article, you may be fine deploying in a single region.

Summary

  • Multi-region active-active architectures are expensive to build and maintain, but they are essential if your app requires low latency for global users, is a Tier 1 app with 99.99% availability requirements, and needs built-in disaster recovery capability across regions.
  • AWS RDS Aurora Global Database simplifies cross-region DB replication and facilitates an active-active architecture.
  • Using AWS Fargate along with the Elastic Container Registry (ECR) simplifies running containers in AWS, and the deployment and updating of microservice containers. Developers can easily build and test a Docker container locally and push it to ECR. From there, it can easily be deployed to ECS without any downtime for existing microservices, as ECS gradually replaces the old containers with the new ones (see the sketch after this list).
  • Some tech choices were made in this article, but please do your own due diligence to choose the correct technologies for your specific use case: for example, ECS vs Kubernetes, EC2 vs Fargate with ECS, or RDS Global Database vs DynamoDB Global Tables.
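
A hedged sketch of that build-and-push workflow (the account ID, region, repository and service names are placeholders):

# Authenticate Docker to ECR, then build, tag and push the image
aws ecr get-login-password --region eu-west-2 | \
    docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-2.amazonaws.com
docker build -t product-service .
docker tag product-service:latest 123456789012.dkr.ecr.eu-west-2.amazonaws.com/product-service:latest
docker push 123456789012.dkr.ecr.eu-west-2.amazonaws.com/product-service:latest

# Trigger a rolling replacement of the running tasks with the new image
aws ecs update-service --cluster multi-cluster --service product-service --force-new-deployment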

Tuesday, 6 February 2018

Application Containers



### Raw version of the application container demo.
### Will be converted into a fully detailed blog post at some unknown date.


-- Connect to the CDB root as SYSDBA
sqlplus  "/ as sysdba"

show parameter create
show pdbs

-- Create the application root container
CREATE pluggable database APPROOT AS APPLICATION CONTAINER ADMIN USER USER1 IDENTIFIED BY USER1;

 -- sqlplus  sys/welcome1@localhost:1522/APPROOT  as sysdba

-- Switch into the application root and open it
alter session set container=APPROOT;

alter pluggable database APPROOT open;

-- Begin installing application 'demo', version 1.0, in the application root
alter pluggable database application demo begin install '1.0';

-- Common tablespace and application owner, created as part of the application
create tablespace app datafile size 1m autoextend on next 1m;
create user app_user identified by app_user default tablespace app quota unlimited on app;

GRANT CREATE SESSION, CREATE TABLE TO app_user;


-- METADATA link: application PDBs share only the table definition; rows are kept per PDB
create table app_user.tab1 SHARING=METADATA
(
col1 number,
col2 date,
col3 varchar2(20));

-- DATA link: both the definition and the rows are shared from the application root
create table app_user.tab2 SHARING=DATA
(
col1 number,
col2 date,
col3 varchar2(20));

-- EXTENDED DATA link: shared rows from the root, plus PDB-local rows
create table app_user.tab3 SHARING=EXTENDED DATA
(
col1 number,
col2 date,
col3 varchar2(20));

-- Seed one row per table from the application root
insert into app_user.tab1 select 1,sysdate,'meta-'||name from v$pdbs ;
insert into app_user.tab2 select 1,sysdate,'data-'||name from v$pdbs ;
insert into app_user.tab3 select 1,sysdate,'extend'||name from v$pdbs ;


commit;


column app_name format a30

-- Check the application status while the install is in progress
select app_name,app_version,app_status from DBA_APPLICATIONS ;

-- Finish the application installation
alter pluggable database application demo end install;





### Create an application PDB (APP1) inside the application root

CREATE pluggable database APP1 ADMIN USER USER1 IDENTIFIED BY USER1;

alter pluggable database  APP1  open;


alter session set container=APP1;

-- Until the new PDB syncs with the application, the shared tables may not be visible yet
select col1,col2,col3 from app_user.tab1;
select col1,col2,col3 from app_user.tab2;
select col1,col2,col3 from app_user.tab3;

-- Sync APP1 with the installed application
ALTER PLUGGABLE DATABASE APPLICATION ALL SYNC;

-- Insert rows from within APP1
insert into app_user.tab1 select 2,sysdate,'meta-'||name from v$pdbs ;
insert into app_user.tab2 select 2,sysdate,'data-'||name from v$pdbs ;
insert into app_user.tab3 select 2,sysdate,'extend'||name from v$pdbs ;


commit;

-- An ordinary local table, specific to APP1
create table app_user.app1tab4
(
col1 number,
col2 date,
col3 varchar2(20));





### Another application PDB (APP2)

-- Back in the application root, create a second application PDB
alter session set container=APPROOT ;

CREATE pluggable database APP2 ADMIN USER USER1 IDENTIFIED BY USER1;

alter pluggable database APP2 open;


alter session set container=APP2;

-- Sync APP2 with the application before querying the shared tables
ALTER PLUGGABLE DATABASE APPLICATION ALL SYNC;

select col1,col2,col3 from app_user.tab1;
select col1,col2,col3 from app_user.tab2;
select col1,col2,col3 from app_user.tab3;


-- Insert rows from within APP2
insert into app_user.tab1 select 2,sysdate,'meta-'||name from v$pdbs ;
insert into app_user.tab2 select 2,sysdate,'data-'||name from v$pdbs ;
insert into app_user.tab3 select 2,sysdate,'extend'||name from v$pdbs ;

commit;




select col1,col2,col3 from app_user.tab1;
select col1,col2,col3 from app_user.tab2;
select col1,col2,col3 from app_user.tab3;


-- An ordinary local table, specific to APP2
create table app_user.app2tab5
(
col1 number,
col2 date,
col3 varchar2(20));







### From the application root: check the application status across the application PDBs
select app_name,app_version,app_status from DBA_APP_PDB_STATUS ;



==== Clean up the demo
## Connect to the application root

alter session set container=APPROOT ;

alter pluggable database APP1  close;
drop pluggable database APP1 including datafiles;

alter pluggable database APP2  close;
drop pluggable database APP2 including datafiles;


alter session set container=CDB$ROOT ;
alter pluggable database APPROOT  close;
drop pluggable database APPROOT including datafiles;




Tuesday, 21 March 2017

Oracle Database 12c Certified Master Upgrade Exam - OCM 12C


Recently I attended the Oracle 12c OCM upgrade exam, and I can tell you that it's not an easy exam, considering the number of topics covered in one sitting.

If you are also looking to take it in the near future, you need to be quick if you are planning to sit it in Europe, as all of the Oracle Education/Examination centres there are scheduled to close. The last one will be Oracle Netherlands in Utrecht, at the end of April 2017.

After April 2017, all exams in Europe are expected to be conducted only in Reading, UK. Not sure whether the UK will still be in the EU by then. :-)

Even though 12.2 has been out for some time in the cloud, and on premises as well since February 2017, the exam is still based on 12.1.0.2.0. We might see it adapted to 12.2; we expect Oracle to announce this before making the change.

During my preparation I noticed that the Oracle Education website had changed, and in some cases I could not even find the topic details. However, they seem to appear again now, so I thought I would share them with everyone before they disappear again.

The passing score for this exam is 60% overall.

I remember that not all modules need 60% to pass; for example, Module 2 (Data and Performance Management) needs only 32%, but the overall score must be 60%. This information is not available on the website anymore; I will update this post if I manage to find it somewhere.



General Database and Network Administration, and Backup Strategy
  • Create and manage pluggable databases
  • Create and manage users, roles, and privileges
  • Configure the network environment to allow connections to multiple databases
  • Protect the database from loss of data due to any kind of failure
  • Create and manage database configuration files
Data and Performance Management
  • Modify materialized views
  • Create a plugged-in tablespace by using the transportable tablespace feature
  • Create partitioned tables
  • Configure the database to retrieve all previous versions of the table rows
  • Configure the Resource Manager
  • Tune SQL statements
  • Perform real application testing
  • Create SQL Plan baselines
Data Guard
  • Create a physical standby database
  • Make the standby database available for testing
  • Restore the standby database to its normal function
  • Configure fast start failover
Grid Infrastructure and Real Application Clusters
  • Install Oracle Grid Infrastructure
  • Configure Oracle Flex ASM
  • Create ASM disk groups
  • Create and manage an ASM instance
  • Create ACFS
  • Start, stop, configure, and administer Oracle Grid Infrastructure
  • Install the Oracle Database 12c software
  • Create RAC databases
  • Configure services

Happy learning, and best of luck if you are planning to attend in the near future.

Please be aware that OCM profiles are no longer hosted with a static URL; it now simply shows your OTN profile.

The new URL is https://community.oracle.com/community/technology_network_community/certification/ocm where all OCMs are displayed in a very unstructured way. :-( I personally didn't like it, but it doesn't really matter.



--
Krishan Jaglan
OCM 12C 

Wednesday, 15 February 2017

Step by step Oracle Database 12.2 for Exadata - Download today

For those who are waiting for Oracle Database 12cR2 (Oracle 12.2), the good news is that it has been available for download since 10 February 2017 from the Oracle eDelivery website.

Here are the step-by-step download instructions.


Step 1: Go to https://edelivery.oracle.com and log in using your Oracle Metalink username/password.
Step 2: Search for Oracle Database Enterprise Edition.
Step 3: Select Linux x86-64 from the "Select platform" list.

Step 4: Press continue.
Step 5: Click the little down arrow to see the details and make sure your selection is correct.

Step 6: Press continue and accept the Oracle terms and conditions.



Oracle 12.2 is then available to download and install on your test system, so you can start planning the upgrade once you have fully tested it against your application.


Please be aware this download is available for Exadata; it will be available for the standard x86-64 platform from 15 March 2017.

Here is the list of versions available for the different components. You will notice all components are available for 12.2 except the database client on the Windows platform, which is still 12.1.0.2.


Oracle Database Client (12.2.0.1.0 Exadata/SuperCluster)
V839967-01.zip Oracle Database 12c Release 2 Client (12.2.0.1.0) for Linux x86-64
V839968-01.zip Oracle Database 12c Release 2 Client (12.2.0.1.0) for Linux x86

Oracle Database Global Service Manager (12.2.0.1.0 Exadata/SuperCluster)
V840019-01.zip Oracle Database 12c Release 2 Global Service Manager (12.2.0.1.0) for Linux x86-64

Oracle Database Grid Infrastructure (12.2.0.1.0 Exadata/SuperCluster)
V840012-01.zip Oracle Database 12c Release 2 Grid Infrastructure (12.2.0.1.0) for Linux x86-64

Oracle Database Client (12.1.0.2.0)
V47121-01.zip Oracle Database 12c Release 1 Client (12.1.0.2.0) for Microsoft Windows x64 (64-bit)
V47124-01.zip Oracle Database 12c Release 1 Client (12.1.0.2.0) for Microsoft Windows (32-bit)

Oracle HTTP Server (12.2.1.1.0)

V266898-01.zip Oracle Fusion Middleware 12c (12.2.1.1.0) HTTP Server for Linux x86-64


Happy upgrading!






Saturday, 21 January 2017

How to Enable Unified Auditing in Oracle 12c database

Unified Auditing:
Oracle 12c introduced a consolidated way of auditing the Oracle database. It brings simplicity, with little or minimal overhead to database performance.

It comes with the following features:


  • Simplicity
  • Consolidation
  • Security
    • It relies on a read-only audit trail table
    • It audits all configuration-related operations
    • Separation of duties
  • Performance
    • Implemented using a queue in the Oracle SGA, leaving very little overhead on database performance

Unified Auditing Architecture
  1. A user performs an auditable action.
  2. The audit record is placed in an SGA-based queue in memory.
  3. Either the GEN0 process flushes the queue to disk at regular intervals, or you can perform a manual flush on demand (EXECUTE SYS.DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL).
  4. Once the data is flushed to disk, it is available via SYS.UNIFIED_AUDIT_TRAIL.
There are two write modes, Queued and Immediate, discussed below.

How to Enable:

You need an outage to enable unified auditing, as it is done by relinking the Oracle binary. Shut down all Oracle processes before relinking.

oracle@dbserver01:~$. oraenv
ORACLE_SID = [CDB2] ? CDB2
The Oracle base remains unchanged with value /u01/app/oracle
oracle@dbserver01:~$    

oracle@dbserver01:~$  lsnrctl stop 

oracle@dbserver01:~$  sqlplus "/ as sysdba"

SQL*Plus: Release 12.1.0.2.0 Production on Sun Jan 22 06:15:47 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> shutdown immediate;
Database closed.
Database dismounted.

ORACLE instance shut down.




oracle@dbserver01:~$ cd $ORACLE_HOME/rdbms/lib
oracle@dbserver01:/u01/app/oracle/product/12.1.0.2/rdbms/lib$ make -f ins_rdbms.mk uniaud_on ioracle ORACLE_HOME=$ORACLE_HOME
/usr/bin/ar d /u01/app/oracle/product/12.1.0.2/rdbms/lib/libknlopt.a kzanang.o
/usr/bin/ar cr /u01/app/oracle/product/12.1.0.2/rdbms/lib/libknlopt.a /u01/app/oracle/product/12.1.0.2/rdbms/lib/kzaiang.o
chmod 755 /u01/app/oracle/product/12.1.0.2/bin

 - Linking Oracle
rm -f /u01/app/oracle/product/12.1.0.2/rdbms/lib/oracle
/u01/app/oracle/product/12.1.0.2/bin/orald  -o /u01/app/oracle/product/12.1.0.2/rdbms/lib/oracle -m64 -z noexecstack -Wl,--disable-new-dtags -L/u01/app/oracle/product/12.1.0.2/rdbms/lib/ -L/u01/app/oracle/product/12.1.0.2/lib/ -L/u01/app/oracle/product/12.1.0.2/lib/stubs/   -Wl,-E /u01/app/oracle/product/12.1.0.2/rdbms/lib/opimai.o /u01/app/oracle/product/12.1.0.2/rdbms/lib/ssoraed.o /u01/app/oracle/product/12.1.0.2/rdbms/lib/ttcsoi.o -Wl,--whole-archive -lperfsrv12 -Wl,--no-whole-archive /u01/app/oracle/product/12.1.0.2/lib/nautab.o /u01/app/oracle/product/12.1.0.2/lib/naeet.o /u01/app/oracle/product/12.1.0.2/lib/naect.o /u01/app/oracle/product/12.1.0.2/lib/naedhs.o /u01/app/oracle/product/12.1.0.2/rdbms/lib/config.o  -lserver12 -lodm12 -lcell12 -lnnet12 -lskgxp12 -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lclient12  -lvsn12 -lcommon12 -lgeneric12 -lknlopt `if /usr/bin/ar tv /u01/app/oracle/product/12.1.0.2/rdbms/lib/libknlopt.a | grep xsyeolap.o > /dev/null 2>&1 ; then echo "-loraolap12" ; fi` -lskjcx12 -lslax12 -lpls12  -lrt -lplp12 -lserver12 -lclient12  -lvsn12 -lcommon12 -lgeneric12 `if [ -f /u01/app/oracle/product/12.1.0.2/lib/libavserver12.a ] ; then echo "-lavserver12" ; else echo "-lavstub12"; fi` `if [ -f /u01/app/oracle/product/12.1.0.2/lib/libavclient12.a ] ; then echo "-lavclient12" ; fi` -lknlopt -lslax12 -lpls12  -lrt -lplp12 -ljavavm12 -lserver12  -lwwg  `cat /u01/app/oracle/product/12.1.0.2/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnro12 `cat /u01/app/oracle/product/12.1.0.2/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnnzst12 -lzt12 -lztkg12 -lmm -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lztkg12 `cat /u01/app/oracle/product/12.1.0.2/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnro12 `cat /u01/app/oracle/product/12.1.0.2/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnnzst12 -lzt12 -lztkg12   -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 `if /usr/bin/ar tv /u01/app/oracle/product/12.1.0.2/rdbms/lib/libknlopt.a | grep "kxmnsd.o" > /dev/null 2>&1 ; then echo " " ; else echo "-lordsdo12 -lserver12"; fi` -L/u01/app/oracle/product/12.1.0.2/ctx/lib/ -lctxc12 -lctx12 -lzx12 -lgx12 -lctx12 -lzx12 -lgx12 -lordimt12 -lclsra12 -ldbcfg12 -lhasgen12 -lskgxn2 -lnnzst12 -lzt12 -lxml12 -locr12 -locrb12 -locrutl12 -lhasgen12 -lskgxn2 -lnnzst12 -lzt12 -lxml12  -lgeneric12 -loraz -llzopro -lorabz2 -lipp_z -lipp_bz2 -lippdcemerged -lippsemerged -lippdcmerged  -lippsmerged -lippcore  -lippcpemerged -lippcpmerged  -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lsnls12 -lunls12  -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lasmclnt12 -lcommon12 -lcore12  -laio -lons    `cat /u01/app/oracle/product/12.1.0.2/lib/sysliblist` -Wl,-rpath,/u01/app/oracle/product/12.1.0.2/lib -lm    `cat /u01/app/oracle/product/12.1.0.2/lib/sysliblist` -ldl -lm   -L/u01/app/oracle/product/12.1.0.2/lib
test ! -f /u01/app/oracle/product/12.1.0.2/bin/oracle ||\
           mv -f /u01/app/oracle/product/12.1.0.2/bin/oracle /u01/app/oracle/product/12.1.0.2/bin/oracleO
mv /u01/app/oracle/product/12.1.0.2/rdbms/lib/oracle /u01/app/oracle/product/12.1.0.2/bin/oracle
chmod 6751 /u01/app/oracle/product/12.1.0.2/bin/oracle


By default, two Oracle-defined policies (ORA_SECURECONFIG and ORA_LOGON_FAILURES) are enabled.

Check the currently enabled policies in the database:

oracle@dbserver01:~$ sqlplus "/ as sysdba"

SQL*Plus: Release 12.1.0.2.0 Production on Sun Jan 22 06:50:28 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options


SQL> select * from audit_unified_enabled_policies;

USER_NAME                      POLICY_NAME                    ENABLED_ SUC FAI
------------------------------ ------------------------------ -------- --- ---
ALL USERS                      ORA_SECURECONFIG               BY       YES YES
ALL USERS                      ORA_LOGON_FAILURES             BY       NO  YES


Once unified auditing is enabled, all AUDIT_* initialization parameters are ignored and have no effect.



Auditing can operate in two modes:

  • Queued Write mode (default) – In this mode you might lose some audit data in case of an instance crash (data that was not yet flushed to disk at the time of the crash).
  • Immediate Write mode – This ensures no audit data is lost; audit records are written immediately.

Unified auditing runs in Queued Write mode by default to ensure minimal performance overhead.

How to switch modes:

• Immediate Write mode:

SQL> EXECUTE  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_PROPERTY(DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED, DBMS_AUDIT_MGMT.AUDIT_TRAIL_WRITE_MODE, DBMS_AUDIT_MGMT.AUDIT_TRAIL_IMMEDIATE_WRITE);


• Queued Write mode:

SQL> EXECUTE  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_PROPERTY(DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED, DBMS_AUDIT_MGMT.AUDIT_TRAIL_WRITE_MODE, DBMS_AUDIT_MGMT.AUDIT_TRAIL_QUEUED_WRITE);
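
Beyond the two default policies, you can define and enable your own unified audit policies. A minimal sketch follows; the policy name and the audited table (HR.EMPLOYEES) are purely illustrative:

SQL> CREATE AUDIT POLICY demo_dml_policy ACTIONS UPDATE ON hr.employees, DELETE ON hr.employees;

SQL> AUDIT POLICY demo_dml_policy;

-- Matching records show up in the unified trail once the queue is flushed
SQL> SELECT event_timestamp, dbusername, action_name
  2  FROM unified_audit_trail
  3  WHERE unified_audit_policies = 'DEMO_DML_POLICY';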




Friday, 20 January 2017

Change Sysman Password in OEM repository


Recently I came across an issue where the sysman password had expired per the password policy and had to be changed.

We changed the password for the sysman user in the database using the following command:

alter user sysman identified by new_password;

Then we checked the status of the OMS, and it failed with the following error, which is genuine:

[oracle@exadata-an-ora-oem middleware]$ emctl status oms
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
WebTier is Up
Oracle Management Server is not functioning because of the following reason:
Connection to the repository failed. Verify that the repository connection information provided is correct.
Check EM Server log file for details: /u01/app/oracle/gc_inst/user_projects/domains/GCDomain/servers/EMGC_OMS1/logs/EMGC_OMS1.out
JVMD Engine is Down
BI Publisher Server is Down



The error message above is itself explanatory as to what the issue is with the OMS. Even if you haven't changed the sysman password yourself and you come across this issue, you can easily figure out that the OMS is not able to connect to the repository, so something is wrong with either the database, the listener, or the password.

I checked that the database was running, and so was the listener, so we pretty much knew it was the password for sure.

Fix:
Update the new password in the OMS repository configuration. You can view the current repository details with:

emctl config oms -list_repos_details

You have two options.

Option 1: If you don't know the old sysman password.

[oracle@oemserver middleware]$ emctl config oms -change_repos_pwd -use_sys_pwd -sys_pwd sys_user_password -new_pwd new_password_4_sysman

Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.

Changing passwords in backend ...
Passwords changed in backend successfully.
Updating repository password in Credential Store...
Successfully updated Repository password in Credential Store.
Restart all the OMSs using 'emctl stop oms -all' and 'emctl start oms'.
Successfully changed repository password.


OR

Option 2: If you know the old sysman password.

[oracle@oemserver middleware]$ emctl config oms -change_repos_pwd -old_pwd sysmanoldpassword -new_pwd mynewpassword
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.

Changing passwords in backend ...
Passwords changed in backend successfully.
Updating repository password in Credential Store...
Successfully updated Repository password in Credential Store.
Restart all the OMSs using 'emctl stop oms -all' and 'emctl start oms'.
Successfully changed repository password.


Reference :
       emctl config oms -change_repos_pwd [-old_pwd ] [-new_pwd ] [-use_sys_pwd [-sys_pwd ]]
          Note: Steps in changing Enterprise Manager Root (SYSMAN) password are:
                1) Stop all the OMSs using 'emctl stop oms'
                2) Run 'emctl config oms -change_repos_pwd' on one of the OMSs
                3) Restart all the OMSs using 'emctl stop oms -all' and 'emctl start oms'


Tuesday, 26 April 2016

Register today for IOUG's May 2 & 3, 2016 Oracle Exadata Virtual Conference (FREE) !


Register today for May 2 & 3, 2016 Oracle Exadata Virtual Conference (FREE)!
Brought to you by IOUG Exadata SIG.

https://attendee.gotowebinar.com/register/844766152064328705

Join IOUG's Exadata SIG for the most comprehensive online Exadata event you can attend this year - and it's free! Register for two days of expert insight from community members, partners and Oracle team members. The schedule is set - reserve your spot for the Virtual Conference today!

Monday Sessions

Oracle Exadata X6: Technical Deep Dive - Architecture and Internals [May 2: 10:00 a.m. - 11:00 a.m. CT]
Featured Speaker: Manish Shah, Senior Principal Product Manager, Oracle
Oracle Public Cloud Machine: Bringing the Oracle Cloud On Premise [May 2: 11:00 a.m. - 12:00 p.m. CT]
Featured Speaker: Srini Chavali, Product Management Director, Oracle
Exadata Database Machine Security [May 2: 12:00 p.m. - 1:00 p.m. CT]
Featured Speaker: Dan Norris, Consulting Member of Technical Staff, Oracle
Tuesday Sessions

The General Electric (GE) Power Journey: Oracle E-Business Suite 12.2 on Oracle Exadata and Exalogic [May 3: 10:00 a.m. - 11:00 a.m. CT]
Featured Speaker: Gary Gordhamer, General Electric

SAS Institute Applications with Oracle Exadata and Big Data Appliance: Turning Data into Knowledge [May 3: 11:00 a.m. - 12:00 p.m.]
Featured Speaker: Speaker TBD, SAS Institute
Database Machine Administration (DBMA) and Database Administration (DBA): Similarities and Differences, What You Need to Know [May 3: 12:00 p.m. - 1:00 p.m.]
Featured Speaker: Vivek Puri, Sherwin Williams