Tuesday, July 21, 2015

The funniest title for a MOS note ever? Using OEM and Cloud Control with 12c in multi-threaded mode

While you read this next paragraph, use the voice of an old west cowboy.

It's high noon at the OCC Corral.  The boys are ridin' in, lookin' for a fight, and fate ain't gonna let them down.  Who'll walk away when the dust settles, and who'll be eatin' the dust...Oracle Cloud Control, or the 12c database?

In a recent SR, MOS referenced a note talking about an incompatibility between the 12c Oracle agent and 12c's threaded architecture in Linux/Unix.  The MOS note I'm talking about is:
"Databases show down in Cloud Control when using Network Connection Pool, feature with 12.1.0.1 databases and 11.2 JDBC threaded_execution=true (Doc ID 1960485.1)"

In past posts I've talked about the performance and scalability virtues of 12c's multi-threaded architecture...depending on your workload, you get increased caching, lower CPU utilization and lower memory utilization.  New connections are threads, not processes, in Unix/Linux.  Other blogs on this topic have reported around a 30% performance improvement.  It's a beautiful thing.  In a recent 12c RAC upgrade I worked on, the feedback from user testing called this feature "The Turbo Button."
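
A quick way to see this for yourself (a minimal check of mine, not from the MOS note): in 12c, v$process reports whether each server/background "process" is really a thread.

-- With threaded_execution=true, most rows report THREAD;
-- in the traditional architecture everything reports PROCESS.
select execution_type, count(*)
from   v$process
group  by execution_type;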

The incompatibility exists because...for some reason, Oracle elected to ship the ancient 11.1.0.7 JDBC drivers with the 12c OEM agent, and they have no plans to change that until OEM 13 is released.  The multi-threaded architecture only works with JDBC thin clients 11.2.0.1 and up.  So...when you try to use the latest, greatest OEM agent, besides databases falsely appearing down in OEM Cloud Control, many very small trace files are created in the db's diag trace directory every second or so.  Very quickly your log destination will fill up.  The traces contain an error like:
...
Network protocol error on first data after new connect
Probable error (ORA-28546) in network administration.

To work around this problem, we implemented the same solution I showed in my previous post for Netbackup.  We set threaded_execution=true in the init.ora parameters and run 2 listeners: one with DEDICATED_THROUGH_BROKER_LISTENER=ON and the other with it off.  This makes the database a hybrid of the threaded and traditional process architectures.  Things like Netbackup and OEM get sent to the listener that creates a traditional process; everything else uses the threaded architecture.  It's the best of both worlds...the performance of the multi-threaded architecture and backwards compatibility with the ancient 11.1 JDBC drivers.

Almost everybody uses OEM for monitoring/alerting, and soon every supported database will be on 12c.  I hope this helps you as you roll out your 12c implementation so you can complete it before January 2016 (when 12c becomes the only supported option).  I mentioned this workaround in the SR, with the hope that MOS would share it with other people hitting this issue.  I've never seen a western where the characters went to the corral for a gunfight and walked away shaking hands and working together, but in this "show down", with this workaround, I think that's exactly what happened between 12c's multi-threaded architecture and Cloud Control's OEM agent.  Yippee-ki-yay.  Here's the listener.ora we ended up with, with lsnr-no-thread as the non-threaded listener:

LISTENER_SCAN2=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2))))                # line added by Agent
LISTENER_SCAN3=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3))))                # line added by Agent
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))            # line added by Agent
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1))))                # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON                # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1=OFF             # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET                # line added by Agent


lsnr-no-thread =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
       (ADDRESS = (PROTOCOL = TCP)(HOST = hostbackupnetwork)(PORT = 1521))
      )
    )
  )

SID_LIST_lsnr-no-thread=
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = remedyp2)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
    )
  )


DEDICATED_THROUGH_BROKER_lsnr-no-thread=OFF
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3=ON                # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN3=OFF             # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2=ON                # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN2=OFF             # line added by Agent
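
To confirm which architecture a given connection actually landed on (again, my sketch, not part of the MOS note), connect through each listener and check your own server process:

-- EXECUTION_TYPE should be THREAD through the default listener
-- and PROCESS through lsnr-no-thread.
select p.spid, p.stid, p.execution_type
from   v$process p, v$session s
where  p.addr = s.paddr
and    s.sid  = sys_context('USERENV','SID');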

Monday, July 6, 2015

Splitting out individual db CPU from a consolidated server

I remember an English class in high school where we were assigned a term paper.  When somebody asked our teacher (Mrs. Theile) how long it should be, she replied, "A term paper should be like a dress.  Long enough to cover the topic but short enough to keep you interested."  It occurred to me today that term papers are also like virtual CPUs for databases.  Recently a DBA friend told me about a scenario where he's migrating several databases from one physical server to multiple virtual servers.  The motivations for doing this are that:

  • Today, multiple business units have to agree on when to take an outage whenever there are OS patches that need to be applied.  Coordination is a nightmare.  The new environment will have each database running in its own OS environment, so this will no longer be an issue.
  • The existing hardware is very old and at the end of its life cycle.

He asked for advice on sizing the VMs.  He wanted to know how much storage capacity, RAM and CPU to tell the virtual team each VM needs for each database.

For storage capacity, he looked at dba_segments, v$logfile and v$tempfile, and he took several days of redo generation to size the FRA.  Using the advice views (e.g. v$db_cache_advice) he was able to gather how much RAM each db needed to optimize logical caching for each memory pool in the VMs.  That was all straightforward (a rough sketch of those checks is below).  The more difficult task is figuring out how much CPU each db within the server typically needs.
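
Here's roughly what those capacity checks look like (my sketch, not his exact queries; adjust the block size, owners and retention to your environment):

-- Space allocated to segments, redo and temp, in GB.
select round(sum(bytes)/1024/1024/1024,1) seg_gb  from dba_segments;
select round(sum(bytes)/1024/1024/1024,1) redo_gb from v$log;
select round(sum(bytes)/1024/1024/1024,1) temp_gb from v$tempfile;

-- Buffer cache advice: estimated physical-read factor at candidate cache
-- sizes (similar advice views exist for the shared pool, PGA, etc.).
select size_for_estimate, size_factor, estd_physical_read_factor
from   v$db_cache_advice
where  name = 'DEFAULT'
and    block_size = 8192
order  by size_for_estimate;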

When he looks at system-level monitoring tools, they show the total amount of CPU used on the system, not broken out by database...so that's not very helpful.  In his situation he has 10 databases per physical server.  Like the term paper, there's a delicate balance between having too much CPU and not having enough.  There's really no way to quantify what "too much" is, because it's expected that under extremely rare circumstances you're going to peg your CPU; how often that's acceptable depends on your SLAs.  Also, it's important to keep in mind that CPU scheduling in VMware will slow down the db if you overallocate CPU.  (I mentioned this in point #2 in a previous post.)

If you think of a busy core as 1,000 ms of CPU time consumed per 1,000 ms of wall-clock time, then the WRH$ tables have the solution for us.  There are two potential (but probably very uncommon) pitfalls:

1. CPU time is recorded in increments of 10 ms.  If a core is busy for less than 10 ms, it gets counted as 0.

2. Strange things happen to system stats when 100% of the CPU is used.  At that point, you have to consider the stats unreliable.  For our purposes this should rarely matter, since we're looking at an average over a snap period (by default, 1 hour), so I filtered out that data.  If it's common for you to run at 100% CPU utilization, this query won't help you determine how much CPU is sufficient.

So...granted that sometimes we're going to peg the CPU, how many vCPUs would fulfill my SLA?  Say you have a very cyclical usage model for your database...e.g. you're a retail shop and you're only busy the 2 days after Thanksgiving and the day before and after Christmas.  On those days you could increase the CPU as needed (because you're in a virtual machine or a cloud), but you want the normal allocation to be sufficient the rest of the time.  Let's say those peak times total 4 days.  Take the total seconds in the 4 days divided by the total seconds in a year:

60 sec/min * 60 min/hr * 24 hr/day = 86,400 sec/day
86,400 sec/day * 4 peak days = 345,600 sec of peak time
86,400 sec/day * 365 days/yr = 31,536,000 sec/yr
1 - (345,600 / 31,536,000) = .989

...so you would need your SLA to be at least .989 (98.9%).  Using the query below, you can see that historically the SLA could have been met with 17 cores.  I like even numbers to help VMware CPU scheduling (ideally it would be a power of 2...but that's a different post), so I'd bump it up to 18.  With 18 cores, we'd have had enough CPU 99.06684% of the time.  The rest of the time falls on those peak days, when we'd temporarily increase the vCPU count as needed.  Here's the query's output, followed by the query itself:


CORES_USED  SECONDS_AT_CORE_USED  SECONDS_AT_OR_BELOW_CORE_USED  TOTAL_SECONDS_IN_SNAPS  SLA_ACHIEVED
         1              80091494                       43698197               196192768      22.27309
         2              42466428                      123789691               196192768      63.09595
         3              16093815                      166256120               196192768      84.74121
         4               5531073                      182349935               196192768      92.94427
         5               2172862                      187881008               196192768      95.76347
         6                769946                      190053871               196192768      96.87099
         7                501373                      190823817               196192768      97.26343
         8                278488                      191325191               196192768      97.51898
         9                232374                      191603679               196192768      97.66093
        10                336898                      191836054               196192768      97.77937
        11                350287                      192172953               196192768      97.95109
        12                280784                      192523241               196192768      98.12963
        13                270683                      192804025               196192768      98.27275
        14                388876                      193074708               196192768      98.41072
        15                334588                      193463585               196192768      98.60893
        16                273689                      193798173               196192768      98.77947
        17                290109                      194071862               196192768      98.91897
        18                290218                      194361972               196192768      99.06684
        19                335042                      194652190               196192768      99.21476
        20                215965                      194987233               196192768      99.38554
        21                260931                      195203198               196192768      99.49561
        22                246267                      195464130               196192768      99.62861
        23                287955                      195710397               196192768      99.75413
        24                194415                      195998352               196192768      99.90091


with cpu_info as
(
  -- For each AWR snap interval, bucket the interval by how many cores were
  -- busy (cores_used) and total the seconds spent in each bucket (weight).
  select /*+ PARALLEL(8) */ sum(seconds) weight, cpups cores_used
  from (
    select s1.snap_id snap_1, s2.snap_id snap_2,
           to_date(to_char(begin_interval_time,'MM/DD/YYYY HH24'),'MM/DD/YYYY HH24') sample_snap,
           s2.cpu-s1.cpu cpu,
           -- "CPU used by this session" is in centiseconds: dividing by the
           -- interval's seconds and then by 100 yields busy cores.
           round(trunc((s2.cpu-s1.cpu)/seconds)/100) cpups,
           seconds
    from (
           select hiof1.snap_id, hiof1.value cpu
           from sys.wrh$_sysstat hiof1
           where hiof1.stat_id = (select stat_id from v$statname
                                  where name = 'CPU used by this session')
         ) s1,
         (
           select hiof1.snap_id, hiof1.value cpu
           from sys.wrh$_sysstat hiof1
           where hiof1.stat_id = (select stat_id from v$statname
                                  where name = 'CPU used by this session')
         ) s2,
         (
           select snap_id, instance_number, begin_interval_time, end_interval_time,
                  extract(second from (end_interval_time-begin_interval_time))+
                  (extract(minute from (end_interval_time-begin_interval_time))*60)+
                  (extract(hour from (end_interval_time-begin_interval_time))*60*60) seconds
           from dba_hist_snapshot
         ) ms
    where s1.snap_id=ms.snap_id
      and s1.snap_id=(s2.snap_id-1)
      and (s2.cpu-s1.cpu)>1
      -- discard intervals claiming more CPU than the machine could deliver
      -- (the unreliable-stats-at-100% problem described above)
      and (round(trunc((s2.cpu-s1.cpu)/seconds)/100))<=(select sum(p1.value)*p2.value
                                                        from gv$parameter p1, v$parameter p2
                                                        where p1.name='cpu_count'
                                                          and p2.name='parallel_threads_per_cpu'
                                                        group by p2.value)
  )
  group by cpups
)
select ci1.cores_used,
       trunc(ci1.weight) seconds_at_core_used,
       trunc(sum(ci2.weight)) seconds_at_or_below_core_used,
       trunc(ci3.weight) total_seconds_in_snaps,
       round(100*(sum(ci2.weight)/ci3.weight),5) sla_achieved
from cpu_info ci1, cpu_info ci2,
     (select sum(weight) weight from cpu_info) ci3
where ci2.cores_used+1<=ci1.cores_used
group by ci1.weight, ci1.cores_used, ci3.weight
order by 1;



Wednesday, March 25, 2015

Oracle 12c threaded_execution and netbackup

If you haven't started migrating your databases to 12c, you'd better start!  Free extended support for the last patchset of 11g runs out at the end of January '16, and 11.2.0.3 support ends on 8/27/2015.  Besides, 12c is a huge step forward for Oracle, if you take advantage of the new features.  You COULD just install the db and run it like it was version 7.3 and not take advantage of its features...that would be easier, but you'd be doing the company you work for a disservice.  Besides...new features are what's fun about being a DBA!

If you've spent any time at all looking at the new features in 12c, you've probably come across threaded_execution.  It essentially makes all connections to the database (and most background "processes") threads (aka lightweight processes) instead of processes in Linux.  The advantage is that the OS can switch between threads within a process much more cheaply than it can switch between separate processes.  Also, memory that used to be unshareable between processes is now shared.  In my performance tests you could see a measurable improvement in performance, but the big advantage I found was in scalability.  There are lots of posts talking about its performance and memory benefits.  This is one of the rare simple changes you can make where the user will notice a quicker system.  IMHO it's something non-standard that should be standard...like how hugepages should always be used, for instance.
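
Turning it on is simple (a minimal sketch; threaded_execution is a static parameter, so it takes an instance bounce):

-- Set in the spfile, then restart the instance.
alter system set threaded_execution=true scope=spfile;
shutdown immediate
startup

One warning before you bounce: as I describe below, OS authentication doesn't work once the instance is threaded, so make sure you can connect with a password first.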

Usually, when you do a ps -ef|grep ANDY (my instance name is ANDY), you'd see MANY Oracle background processes.  When threaded_execution is set to true, most of those processes become threads...so now when you do a ps -ef|grep ANDY you only see these 6:

oracle   48460     1  0 14:08 ?        00:00:00 ora_pmon_ANDY
oracle   48462     1  0 14:08 ?        00:00:01 ora_psp0_ANDY
oracle   48464     1  4 14:08 ?        00:03:03 ora_vktm_ANDY
oracle   48468     1  0 14:08 ?        00:00:12 ora_u004_ANDY
oracle   48482     1  0 14:08 ?        00:00:08 ora_dbw0_ANDY
oracle   50665     1  1 14:10 ?        00:00:57 ora_u005_ANDY

Like the great philosopher Bret Michaels once said, "Every rose has its thorn."

The main problem with the threaded execution architecture in Linux is that OS authentication no longer works (i.e. no more sqlplus / as sysdba).  I've been able to use other new 12c features to compensate for that in our scripting, but it's kind of a pain.  Even so, it's worth it to get the most out of the system.  Besides, you could argue that losing OS authentication could mean better security.
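
One way to compensate is password-file authentication (a sketch with made-up names; adapt it to your own security standards):

-- Create an administrative user that authenticates via the password file
-- (admin_user/changeme are hypothetical; a password file must exist).
create user admin_user identified by "changeme";
grant sysdba to admin_user;
-- ...then instead of "sqlplus / as sysdba":
--   sqlplus admin_user/changeme@ANDY as sysdba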

While doing extensive testing with this feature, I hit a snag.  Although Symantec supports 12c in Netbackup 7.6.0.2+, the environment variables passed in with my allocate channel commands weren't making it to Netbackup.

run
{
  allocate channel t1 DEVICE TYPE SBT parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_1)';
  allocate channel t2 DEVICE TYPE SBT parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_2)';
  allocate channel t3 DEVICE TYPE SBT parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_3)';
  backup filesperset 8 database format '%d_%U';
}

In the script above, NB_ORA_POLICY was reported as Oracle_Master_1 for every channel in the NB logs and on the NB console.  The other 2 channels would eventually error out, and the backup would complete VERY slowly on the one remaining channel.  If I set threaded_execution=false, NB worked fine on all 3 channels as usual.  I searched the internet for a solution and came up with nothing...I created an SR with Oracle...the response was "Contact your MML vendor."  I created a ticket with Symantec, who escalated it to their engineering group, and I worked closely with a great on-site Symantec consultant, but we weren't able to get it to work.  The Symantec guys weren't familiar with the new architecture, so as I explained to them how connections come in first to the listener, and then a thread is created for the user...*BAM* the solution hit me:

THE FIX:
Make RMAN use processes, make everything else use threads.

For threaded execution to make threaded connections, you have to add a line to your listener.ora file:

DEDICATED_THROUGH_BROKER_[listener_name] = ON

I already had a normal listener for TCP connections...I created a new (2nd) static listener using IPC, set DEDICATED_THROUGH_BROKER to OFF (the default) for that listener, and then created the entry in my db's tnsnames.ora.

In listener.ora:

lsnr-rman =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
       (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCRMAN))
      )
    )
  )

SID_LIST_lsnr-rman=
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = ANDY)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db_2)
    )
  )

DEDICATED_THROUGH_BROKER_lsnr-rman = OFF

...then I started the new listener:

lsnrctl start lsnr-rman

Then I added the alias in tnsnames.ora:

rman =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCRMAN))
    (CONNECT_DATA = (SID = ANDY))
    (HS = )
  )

The last thing I had to do was change the backup script to give each channel a connect string...the same trick usually used to connect to remote nodes in a RAC to load-balance RAC backups:

run
{
  allocate channel t1 DEVICE TYPE SBT connect rman_user/changeme@rman parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_1)';
  allocate channel t2 DEVICE TYPE SBT connect rman_user/changeme@rman parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_2)';
  allocate channel t3 DEVICE TYPE SBT connect rman_user/changeme@rman parms 'BLKSIZE=4194304, ENV=(NB_ORA_POLICY=Oracle_Master_3)';
  backup filesperset 1 database format '%d_%U';

}

When I kicked off the backup, I could see that new PROCESSES were created and connected to my threaded_execution db:

oracle   48460     1  0 14:08 ?        00:00:00 ora_pmon_ANDY
oracle   48462     1  0 14:08 ?        00:00:01 ora_psp0_ANDY
oracle   48464     1  4 14:08 ?        00:03:03 ora_vktm_ANDY
oracle   48468     1  0 14:08 ?        00:00:12 ora_u004_ANDY
oracle   48482     1  0 14:08 ?        00:00:08 ora_dbw0_ANDY
oracle   50665     1  1 14:10 ?        00:00:57 ora_u005_ANDY
oracle   56718     1  3 15:18 ?        00:00:00 oracleANDY (LOCAL=NO)
oracle   56720     1  3 15:18 ?        00:00:00 oracleANDY (LOCAL=NO)
oracle   56722     1  1 15:18 ?        00:00:00 oracleANDY (LOCAL=NO)

oracle   56763 36022  0 15:18 pts/2    00:00:00 grep ANDY
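
You can see the same thing from inside the database (my query, not from the original testing): the RMAN channels coming in through lsnr-rman should show up as dedicated processes while everything else stays threaded.

-- RMAN channels should report EXECUTION_TYPE = PROCESS.
select s.program, p.spid, p.execution_type
from   v$session s, v$process p
where  p.addr = s.paddr
and    lower(s.program) like '%rman%';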

...and now the backup works as expected.  The two downsides I see to this are that:

1. You can't use SYSBACKUP privs to run your backup...to connect this way you seem to need SYSDBA privs.
2. You have to set the init.ora parameter backup_tape_io_slaves=false...which I usually set to true to make backups more efficient (see the note below).
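
For what it's worth, changing that parameter shouldn't require a bounce (a minimal sketch; DEFERRED applies the change to subsequent sessions):

-- backup_tape_io_slaves can be changed with ALTER SYSTEM ... DEFERRED.
alter system set backup_tape_io_slaves=false deferred;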

I suspect this isn't a Netbackup issue, but an issue with the way allocate channel commands pass environment variables in the new architecture.  After all, they're process environment variables, and all the allocate channel commands are using the same process (just different threads).  Whatever you're using to back up your database...if you're using the great new threaded_execution feature, I hope you find this post useful. :)


I recently had a similar issue with the 12c OEM agent, which uses 11.1.0.7 thin JDBC connections to the database.  I used the same workaround...here are the details.

Tuesday, March 10, 2015

Constructing an effective IT team

Margy Ross (President of the Kimball Group, founded by the father of DW) wrote a great article called "Risky Project Resources are Risky Business."  She was focused specifically on DW/BI projects, but I think her article applies to all effective teams, and I want to pass it on. 

Risky Project Resources are Risky Business

Over the years, we’ve worked with countless exemplary DW/BI project team members: smart, skilled, dedicated, and motivated, coupled with a healthy dose of mutual trust, respect, and camaraderie with their teammates. Teams with members who possess these characteristics tend to fire on all cylinders, with the resulting whole often greater than the sum of the parts. But we’ve also run into risky project resources; in addition to being individual non-contributors, they can undermine the effectiveness of the entire DW/BI team. Model team members often become short-timers if the team is stacked with unproductive non-performers. We hope your team doesn’t include resources that resemble the following profiles:
  • Obstructionist debaters are perpetual naysayers who find fault with everything and get more satisfaction from the process of debating than the process of delivering.
  • Heat seekers who are always anxious to try the latest, greatest technical gadgets and gizmos regardless of whether they align with the DW/BI project’s objectives.
  • Cookie cutters continue to do exactly what’s worked for them in the past, regardless of their latest assignment’s nuances.
  • Weed dwellers lose sight of the forest from the trees, focusing exclusively on the nitty-gritty details without regard to the bigger picture.
  • Perpetual students and researchers want to read, read, and then read some more, but are disinclined to ever take action because there’s always more to learn.
  • Independent spirits march to their own drummer without regard to rules, standards or accepted best practices.
  • Honesty dodgers and problem hiders are always nodding “yes” and saying “no problem,” even when serious issues are lurking just around the corner.
  • Dysfunctional incompetents and mental retirees are checked out and unable to perform.
  • Self-declared “know it all” experts don’t need to listen because they already have all the answers – just ask them!
  • Threatened worriers are so paralyzed with fear about what might happen that they respond by doing nothing at all.
Of course, even with superstar teammates, the right team leadership is also necessary. Hopefully your DW/BI project/program manager fits the following bill:
  • Establishes a partnership with the business, including joint ownership for the DW/BI project/program, in part because they’re respected by the business as being user-oriented rather than technology-focused.
  • Demonstrates excellent interpersonal and organizational skills since the DW/BI project/program is a political and cultural animal.
  • Recruits and retains resources with the talent to deliver, gets them operating cohesively from a common playbook, and understands that adding more mediocre players won’t increase the team’s chances of winning. Conversely, they also spot individuals who are slowing down the effort and proactively counsel them (or at least minimize the risk of project derailment.)
  • Listens keenly, plus communicates effectively and honestly, setting appropriate expectations and having the courage to say “no” when necessary.
  • Optimally possesses some DW/BI domain expertise, in addition to strong project management skills. At a minimum, they're staying one chapter ahead of the project team in The Data Warehouse Lifecycle Toolkit.
  • Understands that DW/BI success is directly tied to business acceptance. Period.