Thursday, May 26, 2016

Oracle : Check INACTIVE sessions with High Disk IO

 
#########################################
INACTIVE sessions with High Disk IO
#########################################



select p.spid, s.username, s.sid, s.status, t.disk_reads,
       s.last_call_et/3600 last_call_et_hrs,
       s.action, s.program, s.machine cli_mach, s.process cli_process,
       lpad(t.sql_text, 30) "Last SQL"
from   gv$session s, gv$sqlarea t, v$process p
where  s.sql_address = t.address
and    s.sql_hash_value = t.hash_value
and    p.addr = s.paddr
and    t.disk_reads > 5000
and    s.status = 'INACTIVE'
and    s.process = '1234'
order by s.program;

What is the difference between a restart and a graceful restart of a web server?

Q: What is the difference between a restart and a graceful restart of a web server?
Ans: During a normal restart, the server is stopped and then started, causing some requests to be lost. A graceful restart allows Apache children to continue serving their current requests until they can be replaced with children running the new configuration.
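The contrast can be sketched as a toy simulation (the Worker class and restart functions below are illustrative only, not Apache's actual process model):

```python
# Toy model of restart vs. graceful restart (illustrative only;
# Apache's real process management is far more involved).

class Worker:
    def __init__(self, in_flight_request=None):
        self.in_flight_request = in_flight_request  # request being served, if any

def hard_restart(workers):
    """Stop everything at once: in-flight requests are lost."""
    lost = [w.in_flight_request for w in workers if w.in_flight_request]
    return [Worker() for _ in workers], lost

def graceful_restart(workers):
    """Let each child finish its current request, then replace it."""
    for w in workers:
        if w.in_flight_request:
            pass  # in reality: wait for the request to complete first
    # children are replaced only after their requests are done
    return [Worker() for _ in workers], []  # nothing is dropped

workers = [Worker("GET /a"), Worker(), Worker("GET /b")]
_, lost_hard = hard_restart(workers)
_, lost_graceful = graceful_restart(workers)
print(len(lost_hard), len(lost_graceful))  # 2 0
```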

Linux BIND DNS server interview questions & answers




Q:1 What does BIND Stands for ?
Ans: BIND stands for Berkeley Internet Name Domain.
Q:2 What is DNS Server and its fundamentals ?
Ans: The Domain Name System (DNS) is a hierarchical, distributed database. It stores information for mapping Internet host names to IP addresses and vice versa, mail routing information, and other data used by Internet applications. Clients look up information in the DNS by calling a resolver library, which sends queries to one or more name servers and interprets the responses. The BIND 9 software distribution contains a name server, named, and a resolver library, liblwres.
Q:3 What is the default port of BIND ?
Ans: The BIND server is accessed over the network on port 53. Both TCP and UDP are used. Queries and responses are normally sent via UDP; if a response is too large to fit in a single UDP packet, it is returned via TCP instead.
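The transport rule above can be sketched as a tiny helper (the function name and the simplified size check are illustrative, not a real DNS library; EDNS0 changes the limit in practice):

```python
# Sketch of the classic DNS transport rule: try UDP first; if the
# response exceeds the 512-byte UDP payload limit, the server marks it
# truncated and the client retries the query over TCP.

UDP_PAYLOAD_LIMIT = 512  # classic DNS limit without EDNS0

def choose_transport(response_size: int) -> str:
    """Return the transport the answer will ultimately arrive on."""
    if response_size <= UDP_PAYLOAD_LIMIT:
        return "udp"
    return "tcp"  # truncated over UDP -> client retries via TCP

print(choose_transport(200))   # udp
print(choose_transport(4000))  # tcp
```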
Q:4 How will you define Domain Name ?
Ans: The data stored in the DNS is identified by domain names that are organized as a tree according to organizational or administrative boundaries. Each node of the tree, called a domain, is given a label. The domain name of the node is the concatenation of all the labels on the path from the node to the root node. This is represented in written form as a string of labels listed from right to left and separated by dots. A label need only be unique within its parent domain.
For example, a domain name for a host at the company Linuxtechi, Inc. could be mail.linuxtechi.com, where com is the top-level domain to which mail.linuxtechi.com belongs, linuxtechi is a subdomain of com, and mail is the name of the host.
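The label-concatenation idea can be shown with a short helper (a hypothetical function, not part of BIND) that walks a name up toward the root one label at a time:

```python
# A domain name is the concatenation of labels from a node up to the
# root, written right to left and separated by dots. This helper walks
# a name upward, dropping the leftmost label at each step.

def ancestors(domain: str):
    """Yield the domain and each of its parent domains up to the TLD."""
    labels = domain.split(".")
    for i in range(len(labels)):
        yield ".".join(labels[i:])

print(list(ancestors("mail.linuxtechi.com")))
# ['mail.linuxtechi.com', 'linuxtechi.com', 'com']
```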
Q:5 What are zone files in DNS server ?
Ans: The files which contain the data served by the DNS system are called "zone files". They are made up of a series of resource records. A zone file always contains an SOA record, along with additional records.
Q:6 What are the different types of DNS Server ?
Ans: Primary Master : The authoritative server where the master copy of the zone data is maintained is called the primary master server, or simply the primary. Typically it loads the zone contents from some local file edited by humans or perhaps generated mechanically from some other local file which is edited by humans. This file is called the zone file or master file.
Slave Server : The other authoritative servers, the slave servers (also known as secondary servers) load the zone contents from another server using a replication process known as a zone transfer. Typically the data are transferred directly from the primary master, but it is also possible to transfer it from another slave. In other words, a slave server may itself act as a master to a subordinate slave server.
Caching Name Server : Caching Name server is not authoritative for any zone, all queries are forwarded to other DNS servers if they are not stored in the DNS-cache zone. Answers for all queries are cached in DNS-cache zone for a time.
Forwarding : In this type of DNS server, all queries are forwarded to a specific list of name servers.
Q:7 How the load balancing is achieved using DNS ?
Ans: A primitive form of load balancing can be achieved in the DNS by using multiple records (such as multiple A records) for one name. For example, if you have three WWW servers with network addresses of 10.0.0.1, 10.0.0.2 and 10.0.0.3, a set of records such as the following means that clients will connect to each machine one third of the time:
www    IN    A    10.0.0.1
www    IN    A    10.0.0.2
www    IN    A    10.0.0.3
When a resolver queries for these records, BIND rotates them and responds to each query with the records in a different order. In the example above, clients will receive the records in the order 1, 2, 3; then 2, 3, 1; then 3, 1, 2. Most clients use the first record returned and discard the rest.
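The rotation behaviour can be mimicked with a deque (a sketch of the round-robin idea, not BIND's implementation):

```python
# Round-robin rotation of multiple A records, as BIND does by default:
# each response presents the same record set starting at a different
# offset, so clients that take the first record spread across servers.

from collections import deque

records = deque(["10.0.0.1", "10.0.0.2", "10.0.0.3"])

def answer_query():
    """Return the record set, then rotate it for the next query."""
    response = list(records)
    records.rotate(-1)  # next caller sees the list shifted by one
    return response

print(answer_query())  # ['10.0.0.1', '10.0.0.2', '10.0.0.3']
print(answer_query())  # ['10.0.0.2', '10.0.0.3', '10.0.0.1']
print(answer_query())  # ['10.0.0.3', '10.0.0.1', '10.0.0.2']
```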
Q:8 How to check syntax of named.conf is correct or not ?
Ans: named-checkconf is the command, which checks the syntax of named.conf file.
# named-checkconf /etc/named.conf
If bind is running in a chroot environment, use the command below:
# named-checkconf -t /var/named/chroot /etc/named.conf
Q:9 What are the different types of Resource Records in bind ?
Ans: Below are the list of resource records in bind :
SOA – start of authority, for a given zone
NS – name server
A – name-to-address mapping
PTR – address-to-name mapping
CNAME – canonical name (for aliases)
MX – mail exchanger (host to receive mail for this name)
TXT – textual info
RP – contact person for this zone
WKS – well known services
HINFO – host information
Comments in a zone file start with a semicolon (;) and continue to the end of the line.
Q:10 Explain Bind chroot environment ?
Ans: Running bind in a chroot environment means the named process is confined to its own directory tree (/var/named/chroot). This can help improve system security by placing BIND in a "sandbox", which limits the damage done if the server is compromised.
Q:11 What is domain delegation in Bind ?
Ans: Domain delegation means fully delegating responsibility for a sub-domain to another name server.
Example :
squid.linuxtechi.com IN NS ns2.linuxtechi.com
ns2.linuxtechi.com IN A 192.168.1.51

NFS server interview questions

Q:1 Why to use NFS ?
Ans: A Network File System (NFS) allows a remote machine to mount file systems over a network and interact with them as though they were mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network.
Q:2 What is the default port of NFS server ?
Ans: By default, NFS uses TCP port 2049.
Q:3 What are different versions of NFS Server ?
Ans: Currently, there are three versions of NFS. NFS version 2 (NFSv2) is older and widely supported. NFS version 3 (NFSv3) supports safe asynchronous writes and is more robust at error handling than NFSv2; it also supports 64-bit file sizes and offsets, allowing clients to access more than 2 GB of file data.
NFS version 4 (NFSv4) works through firewalls and on the Internet, no longer requires an rpcbind service, supports ACLs, and utilizes stateful operations. Red Hat Enterprise Linux 6.x and CentOS 6.x support NFSv2, NFSv3, and NFSv4 clients. When mounting a file system via NFS, Red Hat Enterprise Linux uses NFSv4 by default, if the server supports it.
Q:4 What are configuration files of NFS server ?
Ans: '/etc/exports' is the main configuration file; it controls which file systems are exported to remote hosts and specifies options.
'/etc/sysconfig/nfs' is the file through which we can fix the ports used for RQUOTAD_PORT, MOUNTD_PORT, LOCKD_TCPPORT, LOCKD_UDPPORT and STATD_PORT.
Q:5 What are different options used in /etc/exports file ?
Ans: Below are list of options used in /etc/exports file :
ro: The directory is shared read only; the client machine will not be able to write to it. This is the default.
rw: The client machine will have read and write access to the directory.
root_squash: By default, any file request made by user root on the client machine is treated as if it is made by user nobody on the server. (Exactly which UID the request is mapped to depends on the UID of user “nobody” on the server, not the client.)
no_root_squash : if this option is used , then root on the client machine will have the same level of access to the files on the system as root on the server. This can have serious security implications, although it may be necessary if you want to perform any administrative work on the client machine that involves the exported directories. You should not specify this option without a good reason.
no_subtree_check : If only part of a volume is exported, a routine called subtree checking verifies that a file that is requested from the client is in the appropriate part of the volume. If the entire volume is exported, disabling this check will speed up transfers.
sync : Replies to the NFS request only after all data has been written to disk. This is much safer than async, and is the default in all nfs-utils versions after 1.0.0.
async : Replies to requests before the data is written to disk. This improves performance, but results in lost data if the server goes down.
no_wdelay : NFS has an optimization algorithm that delays disk writes if NFS deduces that a related write request is likely to arrive soon. This saves disk writes and can improve performance.
wdelay : Negation of no_wdelay; this is the default.
nohide : Normally, if a server exports two filesystems, one of which is mounted on the other, the client has to mount both filesystems explicitly to get access to them. If it mounts only the parent, it sees an empty directory at the place where the other filesystem is mounted; that filesystem is "hidden". Setting the nohide option on a filesystem causes it not to be hidden, and an appropriately authorised client can move from the parent into that filesystem without noticing the change.
hide : Negation of nohide; this is the default.
Q:6 How to list available nfs share on local machine & remote machine ?
Ans: 'showmount -e localhost' : shows the available shares on your local machine.
'showmount -e <remote-server-ip or hostname>' : lists the available shares on the remote server.
Q:7 What is pNFS ?
Ans: Parallel NFS (pNFS) as part of the NFS v4.1 standard is available as of Red Hat Enterprise Linux 6.4. The pNFS architecture improves the scalability of NFS, with possible improvements to performance. That is, when a server implements pNFS as well, a client is able to access data through multiple servers concurrently. It supports three storage protocols or layouts: files, objects, and blocks.
Q:8 What is the difference between Hard mount & Soft mount in nfs ?
Ans: Difference between soft mount and hard mount is listed below :
Soft Mount : Suppose we have mounted an NFS share using a soft mount. When a program or application requests a file from the NFS filesystem, the NFS client daemons try to retrieve the data from the NFS server; but if the server does not respond (due to a crash or failure), the NFS client reports an error to the process on the client machine requesting the file access. The advantage of this mechanism is fast responsiveness, as it does not wait indefinitely for the NFS server. The main disadvantage is possible data corruption or loss, so this is not a recommended option to use.
Hard Mount : Suppose we have mounted the NFS share using a hard mount; the client will repeatedly retry to contact the server. Once the server is back online, the program continues undisturbed from the state it was in during the server crash. We can use the mount option 'intr', which allows NFS requests to be interrupted if the server goes down or cannot be reached. Hence the recommended settings are the hard and intr options.
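The two behaviours can be contrasted in a toy retry loop (illustrative only, not real NFS client code; the retry count is a stand-in for the retrans/timeo mount options):

```python
# Toy contrast of NFS soft vs. hard mount semantics: a soft mount
# gives up after a number of retries and returns an error to the
# application; a hard mount retries until the server answers.

def nfs_read(server_up_after: int, mount_type: str, soft_retries: int = 3):
    """server_up_after = number of attempts before the server responds."""
    attempt = 0
    while True:
        attempt += 1
        if attempt >= server_up_after:
            return "data"        # server finally answered
        if mount_type == "soft" and attempt >= soft_retries:
            return "EIO"         # error reported to the application

print(nfs_read(server_up_after=10, mount_type="soft"))  # EIO
print(nfs_read(server_up_after=10, mount_type="hard"))  # data
```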
Q:9 How to check iostat of nfs mount points ?
Ans: Using command ‘nfsiostat‘ we can list iostat of nfs mount points. Use the below command :
# nfsiostat <interval> <count> <mount_point>
<interval> : specifies the amount of time in seconds between each report. The first report contains statistics for the time since each file system was mounted; each subsequent report covers the interval since the previous report.
<count> : if specified, determines the number of reports generated, <interval> seconds apart. If <interval> is given without <count>, reports are generated continuously.
<mount_point> : if one or more mount points are specified, statistics are displayed only for them; otherwise, all NFS mount points on the client are listed.
Q:10 How to check nfs server version ?
Ans: ‘nfsstat -o all’ command shows all information about active versions of NFS.
Q:11 What is portmap?
Ans: The portmapper keeps a list of which services are running on which ports. This list is used by a connecting machine to find out which port it needs to talk to in order to access a given service.
Q:12 How to reexport all the directories of '/etc/exports' file ?
Ans: Using the command 'exportfs -r', we can reexport or refresh the entries of '/etc/exports' without restarting the NFS service.

Wednesday, May 25, 2016

What Is Auto Scaling?

                                                             

Auto Scaling helps you ensure that you have the correct number of EC2 instances available to handle the load for your application. You create collections of EC2 instances, called Auto Scaling groups. You can specify the minimum number of instances in each Auto Scaling group, and Auto Scaling ensures that your group never goes below this size. You can specify the maximum number of instances in each Auto Scaling group, and Auto Scaling ensures that your group never goes above this size. If you specify the desired capacity, either when you create the group or at any time thereafter, Auto Scaling ensures that your group has this many instances. If you specify scaling policies, then Auto Scaling can launch or terminate instances as demand on your application increases or decreases.
For example, the following Auto Scaling group has a minimum size of 1 instance, a desired capacity of 2 instances, and a maximum size of 4 instances. The scaling policies that you define adjust the number of instances, within your minimum and maximum number of instances, based on the criteria that you specify.
An illustration of a basic Auto Scaling group.
For more information about the benefits of Auto Scaling, see Benefits of Auto Scaling.
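The min/desired/max behaviour described above boils down to clamping: a minimal sketch (set_desired_capacity is a hypothetical helper for illustration, not the AWS API):

```python
# Sketch of how an Auto Scaling group bounds its size: scaling
# policies may request any capacity, but the group never drops below
# its minimum or rises above its maximum.

def set_desired_capacity(requested: int, minimum: int = 1, maximum: int = 4) -> int:
    """Clamp the requested capacity to the group's [minimum, maximum] bounds."""
    return max(minimum, min(requested, maximum))

print(set_desired_capacity(2))   # 2  (within bounds)
print(set_desired_capacity(0))   # 1  (raised to the minimum)
print(set_desired_capacity(10))  # 4  (capped at the maximum)
```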

Linux Password Expiration and Aging Using chage



Check whether the user's expiry date has been reached by using the chage command:

[root@srinivaslinux1 ~]# chage -l webcat
Last password change : May 25, 2016
Password expires         : May 20, 2017
Password inactive          : never
Account expires : May 23, 2016
Minimum number of days between password change : 0
Maximum number of days between password change : 360
Number of days of warning before password expires : 7

If you see that the account has expired, use the usermod or chage command to extend the user's expiry date.

[root@srinivaslinux ~]# usermod -e 2020-05-10 webcat

Check whether the user's expiry date has changed:

[root@srinivaslinux ~]# chage -l webcat
Last password change : May 25, 2016
Password expires         : May 20, 2017
Password inactive          : never
Account expires : May 10, 2020
Minimum number of days between password change : 0
Maximum number of days between password change : 360
Number of days of warning before password expires : 7
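The chage output above is just a set of dates; whether an account has expired is a simple date comparison. A hypothetical check (not part of chage itself), using the dates from the example:

```python
# Deciding whether an account is expired from chage-style dates.
from datetime import date

def account_expired(expiry: date, today: date) -> bool:
    """True once the account's expiry date has passed."""
    return today > expiry

# Before the fix: account expired May 23, 2016, checked May 25, 2016.
print(account_expired(date(2016, 5, 23), date(2016, 5, 25)))  # True
# After `usermod -e 2020-05-10 webcat`:
print(account_expired(date(2020, 5, 10), date(2016, 5, 25)))  # False
```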



Oracle : Script for Archive backup in Linux




########################

ARCH BACKUP IN LINUX

############################




#!/bin/ksh
# -- Description: Oracle Archived Log Archival Job
# --
# -- Instructions:
# -- 1) Create the following directories if not already set up.
# --    These directories receive the various output files from this job:
# --      $ORACLE_BASE/backups
# --      $ORACLE_BASE/backups/jobs
# --      $ORACLE_BASE/backups/joblogs
# --      $ORACLE_BASE/backups/adsmlogs
# -- 2) Create a directory under the directory where the archived
# --    redo logs are kept. Call this sub-directory "save".
# -- 3) Change variables below as needed.
# ---------------------------------------------------------------------------

# ------------ variables common to instance --------------------------------
ORACLE_HOME=/d001/oracle/9.2.0
ORACLE_SID=SDSS                            # oracle instance
ARCHIVE_DIR=/d002/oracle/$ORACLE_SID/backups   # job logs, scripts etc.
COMMON_DIR=/d002/oracle/common/backups     # common scripts
UID="system/systemSDSS"                    # userid/password for sqlplus queries
RETPRD=3                                   # how many days to keep joblogs etc.
ADSM_PROCESS_LIMIT=9                       # max adsm processes
#ADSM_SERVER="-se=mvsosap"                 # adsm server - leave commented for default server
#MGMTCLASS="-ARCHMc=UNIX_MGT"              # adsm management class - leave commented for default
#FROMNODE="-fromnode=`hostname`"           # adsm fromnode parameter - leave commented for default
#FROMOWNER="-fromowner=`/usr/ucb/whoami`"  # adsm fromowner parameter - leave commented for default
SQLDBA="sqlplus"                           # use svrmgr for v7.3

# ----------- sendtrap variables -------------------------------------------
SNDTRP_FLG=N                               # do you want to use sendtrap (Y,N)
# If you are using sendtrap, change the following VARx's to instruct
# operations what to do.
VAR1="....Archive Backup"
VAR2=$ORACLE_SID
VAR3="Database is down - Qname= - contact oncall DBAORA"
VAR4="Compress error - Qname= - contact oncall DBAORA"
VAR5="Copy error - Qname= - contact oncall DBAORA"
VAR6="ADSM error - Qname= - contact oncall DBAORA"
VAR7="DB in hot bkup mode - Qname= - contact oncall DBAORA"
export COMMON_DIR ORACLE_HOME ORACLE_SID UID VAR1 VAR2 VAR3 VAR4 VAR5 VAR6 VAR7
export INITDF CONFIGDF ARCHIVE_DIR MGMTCLASS ADSM_SERVER RETPRD
export FROMOWNER FROMNODE ADSM_PROCESS_LIMIT SHUTDOWN_TIME_LIMIT
# ---------------------------------------------------------------------------

# ----------- variables common to server -----------------------------------
ARCHIVELOGDIR=/d801/oracle/$ORACLE_SID/archive   # archived log dest
ARCHIVELOGSAVEDIR=$ARCHIVELOGDIR/save      # dest to save archived logs that
                                           # have been sent to TSM
ARCHIVELOGFMT="arch_*_$ORACLE_SID.log"     # archived log format
DSMPATH=/usr/bin                           # dsmc command path
                                           # try /usr/sbin for Solaris,
                                           # /usr/bin for AIX
STPATH=/usr/local/bin                      # sendtrap command path
TMP=/tmp                                   # temp path
export ARCHIVELOGDIR ARCHIVELOGFMT
export DSMPATH STPATH TMP
# ---------------------------------------------------------------------------

# --------------- variables common to archival processes -------------------
ARCH_JOB_LOGS=$ARCHIVE_DIR/joblogs         # job log dest
ARCH_ADSM_LOGS=$ARCHIVE_DIR/adsmlogs       # adsm log dest
ARCH_COMMON_SCRIPTS=$COMMON_DIR/common_scripts   # common script dest
ARCH_RETR_SCRIPTS=$ARCHIVE_DIR/recv        # adsm retr script dest
FILELST=$ARCH_JOB_LOGS                     # file list dest
WAIT_SLEEP=15                              # seconds to sleep waiting for the
                                           # log switch to finish
MAX_WAIT_COUNT=20                          # maximum number of times to wait
                                           # WAIT_SLEEP seconds before
                                           # terminating the archive backup
                                           # with a return code of 8.
                                           # Recommend WAIT_SLEEP x MAX_WAIT_COUNT
                                           # less than 1 minute.
ARCH_LOGS_TO_KEEP=15                       # number of archived redo logs to
                                           # keep in the ARCHIVELOGSAVEDIR
                                           # directory after archiving to ADSM
KEEP_UNITS=count                           # the meaning of ARCH_LOGS_TO_KEEP.
                                           # Possible values are:
                                           #   days  = number of days to keep them
                                           #   count = number of files to keep
VERSION=prod                               # version of the script:
                                           #   prod = production
                                           #   devl = development
                                           #   qlty = quality
EXIT_0="$ARCHIVE_DIR/scripts/exit_0.sh"    # exit 0 routine - successful
EXIT_4="$ARCHIVE_DIR/scripts/exit_4.sh"    # exit 4 routine - warning
EXIT_8="$ARCHIVE_DIR/scripts/exit_8.sh"    # exit 8 routine - error
USER_EXIT_1="$ARCHIVE_DIR/scripts/usrexit1.sh"   # user exit script dest
USER_EXIT_2="$ARCHIVE_DIR/scripts/usrexit2.sh"
USER_EXIT_3="$ARCHIVE_DIR/scripts/usrexit3.sh"
USER_EXIT_4="$ARCHIVE_DIR/scripts/usrexit4.sh"
JDATE=`date +%y%j`
CTIME=`date +%HH%MM%SS`
export EXIT_0 EXIT_4 EXIT_8
export USER_EXIT_1 USER_EXIT_2 USER_EXIT_3 USER_EXIT_4
export ARCH_JOB_LOGS ARCH_ADSM_LOGS ARCH_COMMON_SCRIPTS
export FILELST WAIT_SLEEP ARCH_LOGS_TO_KEEP KEEP_UNITS VERSION JDATE CTIME
export MAX_WAIT_COUNT
# ---------------------------------------------------------------------------

# -------------- Main Section ----------------------------------------------
. $ARCH_COMMON_SCRIPTS/arch_$VERSION.sh    # read in common functions
# ---------------------------------------------------------------------------

# ----- Uncomment the functions below to be executed ------------------------
param_set                 # set parameters for function - required
#chk_dup_process          # check to see if this process is still running - required
db_status                 # check database status
#user_exit_1              # user exit 1
get_archnames             # get archived log files to send to ADSM
switch_logfiles           # alter system switch logfiles
sleep 10
#user_exit_2              # user exit 2
arch_log_list             # record current active log list in joblog
df_to_adsm                # archive archived log files directly to ADSM
#user_exit_3              # user exit 3
cleanup_archive_dir       # remove old archive logs from save directory
#user_exit_4              # user exit 4
clean_up                  # clean up directories
save_to_adsm              # archive joblogs and filelist to ADSM
create_stats              # generate ADSM statistics for this backup
exit_0                    # archive successful routine

Oracle : Script to Analyze schema for Unix






######################

ANALYZE SCHEMAS IN UNIX

#######################


#!/bin/ksh
ORACLE_SID=ORVIT8QA
export ORACLE_SID
ORACLE_HOME=/d001/oracle/9.2.0.8-64
export ORACLE_HOME
export EXPORT_FILE
LD_LIBRARY_PATH=/d001/oracle/9.2.0.8-64/bin:/d001/oracle/9.2.0.8-64/network/lib:/usr/openwin/lib:/usr/dt/lib
export LD_LIBRARY_PATH
PATH=/d001/oracle/9.2.0.8-64/bin:/bin:/usr/bin:/usr/sbin:/usr/ccs/bin:/usr/local/bin:/usr/ucb:/usr/openwin/bin:
export PATH
SQL_DIR=/d002/oracle/$ORACLE_SID/sql
export SQL_DIR
LOG_FILE=$SQL_DIR/gen_analyze_tables.log
export LOG_FILE
SQL_FILE=$SQL_DIR/gen_analyze_tables.sql
export SQL_FILE

rm $LOG_FILE



gen_analyze_tables.sql Script
______________________________

set termout off
set echo off
set feedback off
set heading off
set linesize 150
set pagesize 0
set space 0

spool analyze_tables.sql
select 'analyze table '||owner||'.'||table_name||' compute statistics;'
from dba_tables
where owner not like 'SYS%' and owner <> 'MASTER_LOOKUP' and owner <> 'PRICING'
and table_name <> 'MANAGETAX'
order by owner, table_name;

spool analyze_tables.log
@analyze_tables.sql
spool off

exit
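The SQL*Plus script above uses spooling to generate one ANALYZE statement per eligible table, then runs the generated file. The same generate-then-run pattern can be sketched in Python, with a hard-coded row list standing in for the dba_tables query (names and filters mirror the script; the helper itself is hypothetical):

```python
# Generate-then-run pattern: build one ANALYZE statement per table,
# skipping the owners and tables the original script excludes.

EXCLUDED_OWNERS = {"MASTER_LOOKUP", "PRICING"}
EXCLUDED_TABLES = {"MANAGETAX"}

def gen_analyze(tables):
    """tables: iterable of (owner, table_name) rows, as dba_tables would return."""
    stmts = []
    for owner, table in sorted(tables):
        if owner.startswith("SYS") or owner in EXCLUDED_OWNERS:
            continue  # skip data-dictionary and excluded schemas
        if table in EXCLUDED_TABLES:
            continue  # skip excluded tables
        stmts.append(f"analyze table {owner}.{table} compute statistics;")
    return stmts

rows = [("SCOTT", "EMP"), ("SYS", "OBJ$"), ("PRICING", "RATES"), ("SCOTT", "MANAGETAX")]
for s in gen_analyze(rows):
    print(s)  # analyze table SCOTT.EMP compute statistics;
```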

Oracle : Query to find Active SQL’s in database


set feedback off
set serveroutput on size 9999
column username format a20
column sql_text format a55 word_wrapped

begin
    for x in
      ( select username || '(' || sid || ',' || serial# || ') ospid = ' || process ||
               ' program = ' || program username,
               to_char(logon_time, ' Day HH24:MI') logon_time,
               to_char(sysdate, ' Day HH24:MI') current_time,
               sql_address,
               sql_hash_value
        from v$session
        where status = 'ACTIVE'
        and rawtohex(sql_address) <> '00'
        and username is not null ) loop
        for y in ( select sql_text
                   from v$sqlarea
                   where address = x.sql_address ) loop
            if ( y.sql_text not like '%listener.get_cmd%' and
                 y.sql_text not like '%RAWTOHEX(SQL_ADDRESS)%' ) then
                dbms_output.put_line( '--------------------' );
                dbms_output.put_line( x.username );
                dbms_output.put_line( x.logon_time || ' ' || x.current_time || ' SQL#=' || x.sql_hash_value );
                dbms_output.put_line( substr( y.sql_text, 1, 250 ) );
            end if;
        end loop;
    end loop;
end;
/

Royal Challengers in final after de Villiers' rescue act





The Chinnaswamy surface - still good to bat on, but slower than usual - had torn up the script that the match had been expected to follow. There was no uncontrollable torrent of run scoring from either set of top-order batsmen. Gujarat Lions were 9 for 3 after being sent in. Then in the chase, Royal Challengers lost Virat Kohli for a duck and slipped to 29 for 5. Then, when Ravindra Jadeja had Stuart Binny lbw sweeping - though replays showed ball hitting pad marginally outside off stump - they were 68 for 6. Tumbling wickets. A high but not out-of-reach asking rate. One specialist batsman at the crease, with only the lower order for company. An atypically dry and grippy pitch provided the conditions for such a situation - usually more common in 50-overs cricket than in T20 - to arise in the first Qualifier of IPL 2016. AB de Villiers was the specialist batsman, and when Iqbal Abdulla joined him in the 10th over of Royal Challengers Bangalore's chase of 159, they needed 91 to win off 62 balls with four wickets in hand.
It began drizzling soon after Abdulla's entrance, with Royal Challengers needing 63 from 36 balls. De Villiers was at the non-striker's end. Abdulla, on 8 off 14, slashed at a gentle, back-of-a-length ball from Dwayne Smith, and missed. Kohli - who had struggled to contain his temper right through the game - gestured angrily from the dugout, telling Abdulla to take a single and give de Villiers the strike.
Abdulla steered the next ball to deep point. De Villiers, on 47, faced Smith now. He stepped down the pitch, Smith shortened his length, and a tennis-style flat-bat hit flew to the straight boundary. The next ball was fuller, and de Villiers miscued his lofted hit, skewing it high, with the outside half of his bat. It was a rare mis-hit in an innings of surface-defying fluency. It may have been caught at long-off in a bigger ground, but it cleared the leaping Aaron Finch in Bangalore.
It seemed like a sign. This would be de Villiers' day. On strike to the first ball of the next over, he shuffled across to off stump even before Shadab Jakati released his left-arm dart. Having covered the line, he quickly sunk to one knee and swung the ball away over the square leg boundary. When Abdulla swatted a mis-hit six of his own later in the over, Royal Challengers had the final in their sights, needing only 35 off 25. They got home with 10 balls left to play, with de Villiers having just enough time to unfurl a couple more spectacular shots, the pick of them a reverse-sweep off a Praveen Kumar delivery pitching outside leg stump. The win took Royal Challengers into the final, while Gujarat Lions will have another crack at it when they take on the winner of Wednesday's Eliminator between Sunrisers Hyderabad and Kolkata Knight Riders.
Abdulla played a key role with the ball too, dismissing Brendon McCullum and Aaron Finch in the second over of Lions' innings after Kohli had sent them in. Kohli may have used his left-arm spinner that early simply because two right-handers were opening for Lions, and two left-handers were waiting in the middle order, but he may also have observed that the pitch was unusually dry.
Whatever the case, he had extra cover on the rope and mid-off in the circle for McCullum, and the charging New Zealander failed to reach the pitch of the ball, and sharp turn forced him to slice wider than intended, into the hands of the fielder at long-off. Finch closed his bat-face too early three balls later, and the ball popped up to slip. When Shane Watson bounced Raina out in the fourth over, Lions were 9 for 3, sinking even before the contest had really begun.
Dwayne Smith had swapped batting slots with Finch in Lions' last game, against Mumbai Indians, and had looked in fluent touch while making a calm, unbeaten 37 to steer them home in a chase of 173. He struck the ball just as well here, in a more difficult situation, picking up a pair of boundaries off Abdulla early in his innings, sitting back and pouncing when he dropped marginally short, and following up with a hooked six off Chris Jordan.
But the effect of a poor Powerplay - Gujarat only made 23 in that period - rippled through the rest of their innings. Lions' run rate remained under six an over even after Smith and Dinesh Karthik plundered 16 off the 10th, bowled by Yuzvendra Chahal. It was still under seven when they tonked Abdulla for 17 in the 13th over. Karthik fell in the 14th, middling an attempted fine-leg scoop onto his leg stump, Ravindra Jadeja followed in the 16th, and Smith - having hit two more leg-side sixes in that time - holed out in the 18th.
The runs still kept coming, Watson conceding 21 and picking up two wickets in an incident-packed 19th, and Lions scored 100 in their last 10 overs.
A total of 158 still looked inadequate given Royal Challengers' batting strength, but Dhawal Kulkarni had run a battering ram through their top order within four overs of the chase. First, Kohli played on, trying to cut without moving his feet. Then Gayle, pushed back with a series of short balls, swung across the line of a slower ball and missed. Then came an all-format jaffa that swerved away from just short of a good length and induced KL Rahul to edge to slip.
Jadeja then got in the act, getting the ball to stop on Watson, who swatted across the line too soon. And when Sachin Baby slapped Kulkarni straight to short cover, Royal Challengers were gasping for air, the Powerplay not yet done. But they still had de Villiers.

Tuesday, May 24, 2016

Oracle : Alter system v Alter database


                            


ALTER SYSTEM is an instance-level command: it generally applies to running processes, parameters and the like, whereas ALTER DATABASE is a database-level command that generally applies to the physical structure of the database. In a RAC environment, most ALTER SYSTEM commands are local to the instance (ALTER SYSTEM DUMP is an exception), while an ALTER DATABASE command applies to the whole database.
Mostly we use ALTER SYSTEM when the database is OPEN, while ALTER DATABASE can be used in the MOUNT state.
In terms of auditing, ALTER DATABASE cannot be audited, whereas ALTER SYSTEM can.

Early in my career I always confused these two related commands, and even today it is difficult to remember every related command on the spot. The general idea is that ALTER SYSTEM allows things to happen to the database while it is in use - flush the shared pool, set an init.ora parameter, switch the archive log, kill a session. These are all either not database-wide, or non-intrusive when they are: killing a session is specific to that session, and flushing the shared pool does not harm everyone connected. Looking at ALTER DATABASE, the various clauses for startup, recovery, datafiles, logfiles, controlfiles and standby databases all fall in line with this theory; the only one that sits uncomfortably with it is the ALTER DATABASE parallel command. In short, if you cannot remember the exact syntax, think about scope: if the command affects every user and session on the database, go for ALTER DATABASE; if it is specific to a session or non-intrusive across all users, go for ALTER SYSTEM.

Finally, use the ALTER DATABASE statement to modify, maintain, or recover an existing database, and use the ALTER SYSTEM statement to dynamically alter your Oracle Database instance. The settings stay in effect as long as the database is mounted.