Replies: 14 | Pages: 1 | Last Post: 2009/03/24 6:55 by Mark_T

------------------------------------------------------------
100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
Posted: 2008/06/10 15:37 by NJ (Posts: 257 | Registered: 05/05/06)

After upgrading to OEL-5.2 and relinking all Oracle binaries, my existing Oracle 11g installation, installed several months earlier on OEL-5.1, kept working well. Enterprise Manager Database Console in particular performed respectably, as always. Unfortunately, that lasted only a few days.
Yesterday I decided to uninstall 11g completely and perform a fresh, clean installation (software and database) with the same configuration options and settings as before, including EM dbconsole, all configured using dbca. After the installation completed (EM was started automatically by dbca), oracle continued to eat 80-85% of CPU time. Within a few more minutes, CPU utilization climbed to 99%, caused by a single client process (always the same PID): "oracleorcl (LOCAL=NO)". For the first ten minutes I didn't worry much, since I always enable Automatic Management in dbca. But after two hours I started to worry. The process was still running, consuming a sustained 99% of CPU power. No other system activity, no database activity, no disk activity at all!
I was really puzzled, since I had installed and reinstalled 11g at least 20 times on OEL-5.0 and 5.1, experimenting with ASM, raw devices, loopback devices and various combinations of installation options, and had never seen such behaviour. It took me 3 minutes to log in to EM dbconsole; it was almost unusable, it ran so slowly. After three hours the CPU temperature was nearly 60 degrees Celsius. I decided to shut down EM, and after that everything went quiet; Oracle ran normally. When I started EM again, the problem was back. With tracing enabled, it filled a 350 MB trace file in just 20 minutes. Reinstalling the software and database once more didn't help. Whenever EM is up, the 99% CPU overhead persists.
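For anyone chasing the same symptom, a standard way to tie a spinning OS process like "oracleorcl (LOCAL=NO)" back to the database session that owns it is to join V$PROCESS and V$SESSION on the process address. This is a generic diagnostic sketch, not specific to this bug; the PID '12345' is a placeholder for the value reported by top or ps:

```sql
-- Map the busy OS process back to its database session.
-- Replace '12345' with the PID shown by top/ps for the spinning process.
SELECT s.sid,
       s.serial#,
       s.username,
       s.program,
       s.module,
       s.action
  FROM v$process p
  JOIN v$session s ON s.paddr = p.addr
 WHERE p.spid = '12345';
```

In a case like the one described here, the MODULE column would be expected to show something like OEM.CacheModeWaitPool, pointing straight at the dbconsole workload.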
Here is a summary report for a roughly 23-minute session, taken from EM dbconsole's Performance page. The trace file is too big to post here, but it shows the same thing.
Host CPU: 100%
Active Sessions: 100%
The details for the selected 5-minute interval (the last one) are as follows:
TOP SESSIONS: SYSMAN, Program: OMS
Activity: 100%
TOP MODULES: OEM.CacheModeWaitPool, Service: orcl
Activity: 100%
TOP CLIENT: Unnamed
Activity: 99.1%
TOP ACTIONS: Unnamed (OEM.CacheModeWaitPool) (orcl)
Activity: 100%
TOP OBJECTS: SYSMAN.MGMT_JOB_EXEC_SUMMARY (Table)
Activity: 100%
TOP PL/SQL: SYSMAN.MGMT_JOB_ENGINE.INSERT_EXECUTION
PL/SQL Source: SYSMAN.MGMT_JOB_ENGINE
Line Number: 7135
Activity: 100%
TOP SQL: SELECT EXECUTION_ID, STATUS, STATUS_DETAIL FROM MGMT_JOB_EXEC_SUMMARY
WHERE JOB_ID = :B3 AND TARGET_LIST_INDEX = :B2 AND EXPECTED_START_TIME = :B1;
Activity: 100%
STATISTICS SUMMARY (roughly 23-minute session, with no other system activity)

                     Total        Per Execution   Per Row
Executions           105,103      1               10,510.30
Elapsed Time (sec)   1,358.95     0.01            135.90
CPU Time (sec)       1,070.42     0.01            107.04
Buffer Gets          85,585,518   814.30          8,558,551.80
Disk Reads           2            <0.01           0.20
Direct Writes        0            0.00            0.00
Rows                 10           <0.01           1
Fetches              105,103      1.00            10,510.30
Wow! Note: no disk activity, no database activity!
Has anyone experienced this or similar behaviour after a clean 11g installation on OEL-5.2? If not, does anyone have a clue what the hell is going on?
Thanks in advance.
------------------------------------------------------------
Re: 100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
Posted: 2008/06/11 6:25 by TommyReynolds (Posts: 1,253 | Registered: 06/21/07), in response to: NJ

The next time this happens, please post the output of:
$ cat /proc/meminfo
$ cat /proc/slabinfo
Your system may be having trouble getting out of its own way.
------------------------------------------------------------
Re: 100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
Posted: 2008/06/11 13:49 by NJ (Posts: 257 | Registered: 05/05/06), in response to: TommyReynolds

Sorry for the delay (work, watching EURO 2008, and so on).
The next time this happens, please post the output of:
$ cat /proc/meminfo
$ cat /proc/slabinfo
Currently I'm not able to do this, because in the meantime I "fixed" the problem and can no longer reproduce it (that is, revert it back to high CPU utilization).
I was poking around the $ORACLE_HOME/sysman directory and the SYSMAN schema tables, views and packages. Querying the view MGMT$JOB_STEP_HISTORY, I found that the job PROVISIONING DAEMON of type PAFDaemonJob (a job poller for PAF jobs) was running continuously. After investigating the tables MGMT_JOB_EXEC_SUMMARY and MGMT_JOB, as well as the packages MGMT_JOB_ENGINE and MGMT_PAF_UTL, I stopped Enterprise Manager and executed:
SYSMAN> execute MGMT_PAF_UTL.STOP_DAEMON
PL/SQL procedure successfully completed.
and the procedure deleted the PROVISIONING DAEMON row from the MGMT_JOB table. After starting Enterprise Manager again, I realized the problem was gone. Really, it rocks: no more CPU usage overhead. Unfortunately, I can no longer reproduce the problem. When I start the daemon while EM dbconsole is running, by executing
SYSMAN> execute MGMT_PAF_UTL.START_DAEMON
PL/SQL procedure successfully completed.
the PROVISIONING DAEMON row is restored in the MGMT_JOB table, but nothing happens.
If you know a way to activate this daemon and reproduce the CPU overhead again, please let me know and I'll post the output you asked for. Or is dropping the database and recreating it (with dbca) the only way?
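As a quick sanity check before and after calling the procedures, the daemon's job row can be looked for directly in the SYSMAN repository. This is a sketch assuming the MGMT_JOB table exposes JOB_NAME and JOB_TYPE columns, as suggested by the investigation above; the exact repository layout may differ between releases, so verify the column names in your own installation first:

```sql
-- Hypothetical check: is the provisioning daemon's job row present?
-- (Assumes SYSMAN.MGMT_JOB has JOB_NAME and JOB_TYPE; verify per release.)
SELECT job_name, job_type
  FROM sysman.mgmt_job
 WHERE job_name = 'PROVISIONING DAEMON';
```

If STOP_DAEMON behaves as described in this thread, the query should return no rows after the daemon is stopped, and one row again after START_DAEMON.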
------------------------------------------------------------
Re: 100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
Posted: 2008/06/13 15:13 by NJ (Posts: 257 | Registered: 05/05/06), in response to: TommyReynolds

Hi Tommy,
I didn't want to experiment further with the already-working OEL-5.2, oracle and dbconsole on this machine, especially not after googling the problem and finding out that I am not alone in this world. There are two other threads on the OTN forums (Database - General) showing the same problem, even on 2 GB machines:
http://forums.oracle.com/forums/thread.jspa?threadID=621165&start=0&tstart=0
http://forums.oracle.com/forums/message.jspa?messageID=2516573
So I took another, smaller machine I have free at home (1 GB RAM, 2.2 GHz Pentium 4, three 80 GB disks), which I use for experimenting with new software releases (it is the machine on which I installed 11g for the first time, when it was released, on OEL-5.0, and I recall that EM was fine then). This is what I did:
1. I installed OEL-5.0 on the machine, adjusted Linux and kernel parameters, and performed a full 11g installation. The database and EM dbconsole worked nicely, with acceptable performance. With no activity in the database, %CPU = zero! The whole system was perfectly quiet.
2. Since everything was OK, I shut down EM and oracle and performed a full upgrade to OEL-5.2. When the upgrade finished, I restarted the system, relinked all oracle binaries, and started oracle and EM dbconsole. Both worked perfectly again, just as before the upgrade. I restarted the database and dbconsole several times, always with the same result. It really rocks: without database activity, %CPU = zero.
3. Using dbca, I dropped the database and created a new one with the same configuration options. Wow, I'm in trouble again! Half an hour after the database was created, %CPU climbed to 99%. That's it.
The crucial question: what is present in OEL-5.2, and missing from 5.0, that trips up the dbca/em scripts at EM agent configuration time?
Here are the outputs you asked for, captured 30 minutes after starting the database and EM dbconsole (sustained 99% CPU utilization). Note that this is just a 1 GB machine.
Kernel command line: ro root=LABEL=/ elevator=deadline rhgb quiet
[root@localhost ~]# cat /proc/meminfo
MemTotal: 1034576 kB
MemFree: 27356 kB
Buffers: 8388 kB
Cached: 609660 kB
SwapCached: 18628 kB
Active: 675376 kB
Inactive: 287072 kB
HighTotal: 130304 kB
HighFree: 260 kB
LowTotal: 904272 kB
LowFree: 27096 kB
SwapTotal: 3148700 kB
SwapFree: 2940636 kB
Dirty: 72 kB
Writeback: 0 kB
AnonPages: 328700 kB
Mapped: 271316 kB
Slab: 21136 kB
PageTables: 14196 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 3665988 kB
Committed_AS: 1187464 kB
VmallocTotal: 114680 kB
VmallocUsed: 5860 kB
VmallocChunk: 108476 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 4096 kB
[root@localhost ~]# cat /proc/slabinfo
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
rpc_buffers 8 8 2048 2 1 : tunables 24 12 8 : slabdata 4 4 0
rpc_tasks 8 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
rpc_inode_cache 6 7 512 7 1 : tunables 54 27 8 : slabdata 1 1 0
ip_conntrack_expect 0 0 96 40 1 : tunables 120 60 8 : slabdata 0 0 0
ip_conntrack 68 68 228 17 1 : tunables 120 60 8 : slabdata 4 4 0
ip_fib_alias 7 113 32 113 1 : tunables 120 60 8 : slabdata 1 1 0
ip_fib_hash 7 113 32 113 1 : tunables 120 60 8 : slabdata 1 1 0
fib6_nodes 22 113 32 113 1 : tunables 120 60 8 : slabdata 1 1 0
ip6_dst_cache 13 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
ndisc_cache 1 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
RAWv6 4 5 768 5 1 : tunables 54 27 8 : slabdata 1 1 0
UDPv6 9 12 640 6 1 : tunables 54 27 8 : slabdata 2 2 0
tw_sock_TCPv6 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
request_sock_TCPv6 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
TCPv6 1 3 1280 3 1 : tunables 24 12 8 : slabdata 1 1 0
jbd_1k 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
dm_mpath 0 0 28 127 1 : tunables 120 60 8 : slabdata 0 0 0
dm_uevent 0 0 2460 3 2 : tunables 24 12 8 : slabdata 0 0 0
dm_tio 0 0 16 203 1 : tunables 120 60 8 : slabdata 0 0 0
dm_io 0 0 20 169 1 : tunables 120 60 8 : slabdata 0 0 0
jbd_4k 1 1 4096 1 1 : tunables 24 12 8 : slabdata 1 1 0
scsi_cmd_cache 10 10 384 10 1 : tunables 54 27 8 : slabdata 1 1 0
sgpool-128 36 36 2048 2 1 : tunables 24 12 8 : slabdata 18 18 0
sgpool-64 33 36 1024 4 1 : tunables 54 27 8 : slabdata 9 9 0
sgpool-32 34 40 512 8 1 : tunables 54 27 8 : slabdata 5 5 0
sgpool-16 35 45 256 15 1 : tunables 120 60 8 : slabdata 3 3 0
sgpool-8 60 60 128 30 1 : tunables 120 60 8 : slabdata 2 2 0
scsi_io_context 0 0 104 37 1 : tunables 120 60 8 : slabdata 0 0 0
ext3_inode_cache 4376 8216 492 8 1 : tunables 54 27 8 : slabdata 1027 1027 0
ext3_xattr 165 234 48 78 1 : tunables 120 60 8 : slabdata 3 3 0
journal_handle 8 169 20 169 1 : tunables 120 60 8 : slabdata 1 1 0
journal_head 684 1008 52 72 1 : tunables 120 60 8 : slabdata 14 14 0
revoke_table 18 254 12 254 1 : tunables 120 60 8 : slabdata 1 1 0
revoke_record 0 0 16 203 1 : tunables 120 60 8 : slabdata 0 0 0
uhci_urb_priv 0 0 28 127 1 : tunables 120 60 8 : slabdata 0 0 0
UNIX 56 112 512 7 1 : tunables 54 27 8 : slabdata 16 16 0
flow_cache 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
cfq_ioc_pool 0 0 92 42 1 : tunables 120 60 8 : slabdata 0 0 0
cfq_pool 0 0 96 40 1 : tunables 120 60 8 : slabdata 0 0 0
crq_pool 0 0 44 84 1 : tunables 120 60 8 : slabdata 0 0 0
deadline_drq 140 252 44 84 1 : tunables 120 60 8 : slabdata 3 3 0
as_arq 0 0 56 67 1 : tunables 120 60 8 : slabdata 0 0 0
mqueue_inode_cache 1 6 640 6 1 : tunables 54 27 8 : slabdata 1 1 0
isofs_inode_cache 0 0 368 10 1 : tunables 54 27 8 : slabdata 0 0 0
hugetlbfs_inode_cache 1 11 340 11 1 : tunables 54 27 8 : slabdata 1 1 0
ext2_inode_cache 0 0 476 8 1 : tunables 54 27 8 : slabdata 0 0 0
ext2_xattr 0 0 48 78 1 : tunables 120 60 8 : slabdata 0 0 0
dnotify_cache 2 169 20 169 1 : tunables 120 60 8 : slabdata 1 1 0
dquot 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
eventpoll_pwq 1 101 36 101 1 : tunables 120 60 8 : slabdata 1 1 0
eventpoll_epi 1 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
inotify_event_cache 1 127 28 127 1 : tunables 120 60 8 : slabdata 1 1 0
inotify_watch_cache 23 92 40 92 1 : tunables 120 60 8 : slabdata 1 1 0
kioctx 135 135 256 15 1 : tunables 120 60 8 : slabdata 9 9 0
kiocb 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
fasync_cache 0 0 16 203 1 : tunables 120 60 8 : slabdata 0 0 0
shmem_inode_cache 553 585 436 9 1 : tunables 54 27 8 : slabdata 65 65 0
posix_timers_cache 0 0 88 44 1 : tunables 120 60 8 : slabdata 0 0 0
uid_cache 5 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
ip_mrt_cache 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
tcp_bind_bucket 32 203 16 203 1 : tunables 120 60 8 : slabdata 1 1 0
inet_peer_cache 1 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
secpath_cache 0 0 32 113 1 : tunables 120 60 8 : slabdata 0 0 0
xfrm_dst_cache 0 0 384 10 1 : tunables 54 27 8 : slabdata 0 0 0
ip_dst_cache 6 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
arp_cache 2 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
RAW 2 7 512 7 1 : tunables 54 27 8 : slabdata 1 1 0
UDP 3 7 512 7 1 : tunables 54 27 8 : slabdata 1 1 0
tw_sock_TCP 3 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
request_sock_TCP 4 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
TCP 43 49 1152 7 2 : tunables 24 12 8 : slabdata 7 7 0
blkdev_ioc 3 127 28 127 1 : tunables 120 60 8 : slabdata 1 1 0
blkdev_queue 23 24 956 4 1 : tunables 54 27 8 : slabdata 6 6 0
blkdev_requests 137 161 172 23 1 : tunables 120 60 8 : slabdata 7 7 0
biovec-256 7 8 3072 2 2 : tunables 24 12 8 : slabdata 4 4 0
biovec-128 7 10 1536 5 2 : tunables 24 12 8 : slabdata 2 2 0
biovec-64 7 10 768 5 1 : tunables 54 27 8 : slabdata 2 2 0
biovec-16 7 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
biovec-4 8 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
biovec-1 406 406 16 203 1 : tunables 120 60 8 : slabdata 2 2 300
bio 564 660 128 30 1 : tunables 120 60 8 : slabdata 21 22 204
utrace_engine_cache 0 0 32 113 1 : tunables 120 60 8 : slabdata 0 0 0
utrace_cache 0 0 32 113 1 : tunables 120 60 8 : slabdata 0 0 0
sock_inode_cache 149 230 384 10 1 : tunables 54 27 8 : slabdata 23 23 0
skbuff_fclone_cache 20 20 384 10 1 : tunables 54 27 8 : slabdata 2 2 0
skbuff_head_cache 86 210 256 15 1 : tunables 120 60 8 : slabdata 14 14 0
file_lock_cache 22 40 96 40 1 : tunables 120 60 8 : slabdata 1 1 0
Acpi-Operand 1147 1196 40 92 1 : tunables 120 60 8 : slabdata 13 13 0
Acpi-ParseExt 0 0 44 84 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-Parse 0 0 28 127 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-State 0 0 44 84 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-Namespace 615 676 20 169 1 : tunables 120 60 8 : slabdata 4 4 0
delayacct_cache 233 312 48 78 1 : tunables 120 60 8 : slabdata 4 4 0
taskstats_cache 12 53 72 53 1 : tunables 120 60 8 : slabdata 1 1 0
proc_inode_cache 622 693 356 11 1 : tunables 54 27 8 : slabdata 63 63 0
sigqueue 8 27 144 27 1 : tunables 120 60 8 : slabdata 1 1 0
radix_tree_node 6220 8134 276 14 1 : tunables 54 27 8 : slabdata 581 581 0
bdev_cache 37 42 512 7 1 : tunables 54 27 8 : slabdata 6 6 0
sysfs_dir_cache 4980 4992 48 78 1 : tunables 120 60 8 : slabdata 64 64 0
mnt_cache 36 60 128 30 1 : tunables 120 60 8 : slabdata 2 2 0
inode_cache 1113 1254 340 11 1 : tunables 54 27 8 : slabdata 114 114 81
dentry_cache 11442 18560 136 29 1 : tunables 120 60 8 : slabdata 640 640 180
filp 7607 10000 192 20 1 : tunables 120 60 8 : slabdata 500 500 120
names_cache 19 19 4096 1 1 : tunables 24 12 8 : slabdata 19 19 0
avc_node 14 72 52 72 1 : tunables 120 60 8 : slabdata 1 1 0
selinux_inode_security 814 1170 48 78 1 : tunables 120 60 8 : slabdata 15 15 0
key_jar 14 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
idr_layer_cache 170 203 136 29 1 : tunables 120 60 8 : slabdata 7 7 0
buffer_head 38892 39024 52 72 1 : tunables 120 60 8 : slabdata 542 542 0
mm_struct 108 135 448 9 1 : tunables 54 27 8 : slabdata 15 15 0
vm_area_struct 11169 14904 84 46 1 : tunables 120 60 8 : slabdata 324 324 144
fs_cache 82 177 64 59 1 : tunables 120 60 8 : slabdata 3 3 0
files_cache 108 140 384 10 1 : tunables 54 27 8 : slabdata 14 14 0
signal_cache 142 171 448 9 1 : tunables 54 27 8 : slabdata 19 19 0
sighand_cache 127 135 1344 3 1 : tunables 24 12 8 : slabdata 45 45 0
task_struct 184 246 1360 3 1 : tunables 24 12 8 : slabdata 82 82 0
anon_vma 3313 5842 12 254 1 : tunables 120 60 8 : slabdata 23 23 0
pgd 84 84 4096 1 1 : tunables 24 12 8 : slabdata 84 84 0
pid 237 303 36 101 1 : tunables 120 60 8 : slabdata 3 3 0
size-131072(DMA) 0 0 131072 1 32 : tunables 8 4 0 : slabdata 0 0 0
size-131072 0 0 131072 1 32 : tunables 8 4 0 : slabdata 0 0 0
size-65536(DMA) 0 0 65536 1 16 : tunables 8 4 0 : slabdata 0 0 0
size-65536 2 2 65536 1 16 : tunables 8 4 0 : slabdata 2 2 0
size-32768(DMA) 0 0 32768 1 8 : tunables 8 4 0 : slabdata 0 0 0
size-32768 9 9 32768 1 8 : tunables 8 4 0 : slabdata 9 9 0
size-16384(DMA) 0 0 16384 1 4 : tunables 8 4 0 : slabdata 0 0 0
size-16384 6 6 16384 1 4 : tunables 8 4 0 : slabdata 6 6 0
size-8192(DMA) 0 0 8192 1 2 : tunables 8 4 0 : slabdata 0 0 0
size-8192 5 5 8192 1 2 : tunables 8 4 0 : slabdata 5 5 0
size-4096(DMA) 0 0 4096 1 1 : tunables 24 12 8 : slabdata 0 0 0
size-4096 205 205 4096 1 1 : tunables 24 12 8 : slabdata 205 205 0
size-2048(DMA) 0 0 2048 2 1 : tunables 24 12 8 : slabdata 0 0 0
size-2048 260 270 2048 2 1 : tunables 24 12 8 : slabdata 135 135 0
size-1024(DMA) 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
size-1024 204 204 1024 4 1 : tunables 54 27 8 : slabdata 51 51 0
size-512(DMA) 0 0 512 8 1 : tunables 54 27 8 : slabdata 0 0 0
size-512 367 464 512 8 1 : tunables 54 27 8 : slabdata 58 58 0
size-256(DMA) 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
size-256 487 495 256 15 1 : tunables 120 60 8 : slabdata 33 33 0
size-128(DMA) 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
size-128 2242 2490 128 30 1 : tunables 120 60 8 : slabdata 83 83 0
size-64(DMA) 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
size-32(DMA) 0 0 32 113 1 : tunables 120 60 8 : slabdata 0 0 0
size-64 1409 2950 64 59 1 : tunables 120 60 8 : slabdata 50 50 0
size-32 3596 3842 32 113 1 : tunables 120 60 8 : slabdata 34 34 0
kmem_cache 145 150 256 15 1 : tunables 120 60 8 : slabdata 10 10 0
[root@localhost ~]# slabtop -d 5
Active / Total Objects (% used) : 97257 / 113249 (85.9%)
Active / Total Slabs (% used) : 4488 / 4488 (100.0%)
Active / Total Caches (% used) : 101 / 146 (69.2%)
Active / Total Size (% used) : 15076.34K / 17587.55K (85.7%)
Minimum / Average / Maximum Object : 0.01K / 0.16K / 128.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
25776 25764 99% 0.05K 358 72 1432K buffer_head
16146 15351 95% 0.08K 351 46 1404K vm_area_struct
15138 7779 51% 0.13K 522 29 2088K dentry_cache
9720 9106 93% 0.19K 486 20 1944K filp
7714 7032 91% 0.27K 551 14 2204K radix_tree_node
5070 5018 98% 0.05K 65 78 260K sysfs_dir_cache
4826 4766 98% 0.01K 19 254 76K anon_vma
4824 3406 70% 0.48K 603 8 2412K ext3_inode_cache
3842 3691 96% 0.03K 34 113 136K size-32
2190 2174 99% 0.12K 73 30 292K size-128
1711 1364 79% 0.06K 29 59 116K size-64
1210 1053 87% 0.33K 110 11 440K inode_cache
1196 1147 95% 0.04K 13 92 52K Acpi-Operand
1170 814 69% 0.05K 15 78 60K selinux_inode_security
936 414 44% 0.05K 13 72 52K journal_head
747 738 98% 0.43K 83 9 332K shmem_inode_cache
693 617 89% 0.35K 63 11 252K proc_inode_cache
676 615 90% 0.02K 4 169 16K Acpi-Namespace
609 136 22% 0.02K 3 203 12K biovec-1
495 493 99% 0.25K 33 15 132K size-256
480 384 80% 0.12K 16 30 64K bio
440 399 90% 0.50K 55 8 220K size-512
312 206 66% 0.05K 4 78 16K delayacct_cache
303 209 68% 0.04K 3 101 12K pid
290 290 100% 0.38K 29 10 116K sock_inode_cache
[root@localhost ~]# cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
# Controls IP packet forwarding
net.ipv4.ip_forward=0
# Controls source route verification
net.ipv4.conf.default.rp_filter=1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route=0
# Oracle
net.ipv4.ip_local_port_range=1024 65000
net.core.rmem_default=4194304
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=262144
net.ipv4.tcp_rmem=4096 65536 4194304
net.ipv4.tcp_wmem=4096 65536 4194304
# Keepalive Oracle
net.ipv4.tcp_keepalive_time=3000
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=15
net.ipv4.tcp_retries2=3
net.ipv4.tcp_syn_retries=2
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_window_scaling=0
# Oracle
fs.file-max = 6553600
fs.aio-max-nr=3145728
kernel.shmmni=4096
kernel.sem=250 32000 100 142
kernel.shmmax=2147483648
kernel.shmall=3279547
kernel.msgmnb=65536
kernel.msgmni=2878
kernel.msgmax=8192
kernel.exec-shield=0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq=1
kernel.panic=60
kernel.core_uses_pid=1
[root@localhost ~]# free | grep Swap
Swap: 3148700 319916 2828784
[root@localhost ~]# cat /etc/fstab | grep "/dev/shm"
tmpfs /dev/shm tmpfs size=1024M 0 0
[root@localhost ~]# df | grep "/dev/shm"
tmpfs 1048576 452128 596448 44% /dev/shm
NON-DEFAULT DB PARAMETERS:
db_block_size 8192
memory_target 633339904 /* automatic memory management */
open_cursors 300
processes 256
disk_asynch_io TRUE
filesystemio_options SETALL
------------------------------------------------------------
Re: 100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
Posted: 2008/06/14 11:55 by TommyReynolds (Posts: 1,253 | Registered: 06/21/07), in response to: NJ

How big an SGA are you using? Since you are not using kernel hugepages, the SGA gets allocated as ordinary memory. If you have the "touch every SGA page first" option set in the init file, this can make the rdbms process and the kernel VM go mad... er, use tons of CPU.
------------------------------------------------------------
Re: 100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
Posted: 2008/06/14 15:26 by NJ (Posts: 257 | Registered: 05/05/06), in response to: TommyReynolds

Thanks.
How big of an SGA are you using?
sga_max_size 633339904
memory_target 633339904
memory_max_target 633339904
with Automatic Memory Management enabled. Currently the EM Memory Advisors page shows:
Automatic Shared Memory Management Enabled
Total SGA Size (MB) 456
SGA Component Current Allocation (MB)
Shared Pool 252
Buffer Cache 184
Large Pool 4
Java Pool 12
Other 4
If you have the "touch every SGA page first" option set in the init file, this can
make the rdbms process and the kernel VM go mad.. er, use tons of CPU.
There is no such option as "touch every SGA page first" set in the init file or server parameter file (neither among the regular nor the hidden parameters). Furthermore, searching the oracle documentation on tahiti.oracle.com for "touch every SGA page first" (whether 9.2, 10.2 or 11g) returns ZERO items.
------------------------------------------------------------
Re: 100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
Posted: 2008/06/14 15:56 by NJ (Posts: 257 | Registered: 05/05/06), in response to: NJ

I forgot to mention: enabling or disabling Automatic Memory Management, or setting the SGA, shared_pool etc. manually, plays no role here. The point is in these facts:
1. 11g installed on OEL-5.0 works on OEL-5.0
2. 11g installed on OEL-5.0 works on OEL-5.2
3. 11g installed on OEL-5.2 doesn't work on OEL-5.2 (%CPU = 99%)
------------------------------------------------------------
Re: 100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
Posted: 2008/06/24 18:29 by user600431 (Posts: 36 | Registered: 10/16/07), in response to: NJ

Oracle running on Enterprise Linux 5.0 is taking 100% CPU when OEM is started. Please let me know how to handle this problem.
Thanks,
jeevan
------------------------------------------------------------
Re: 100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
Posted: 2008/06/26 0:02 by NJ (Posts: 257 | Registered: 05/05/06), in response to: user600431

Shut down OEM, log in as the SYSMAN user, and restart the provisioning daemon by executing the two packaged procedures:
SYSMAN> execute MGMT_PAF_UTL.STOP_DAEMON
PL/SQL procedure successfully completed.
SYSMAN> execute MGMT_PAF_UTL.START_DAEMON
PL/SQL procedure successfully completed.
Start OEM again and the problem is gone.
WARNING: you do this at your own risk!
------------------------------------------------------------
Re: 100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
Posted: 2008/09/04 10:20 by westside (Posts: 14 | Registered: 09/04/08), in response to: NJ

This worked great on my Windoze XP SP2 11g (11.1.0.6.0) install.
I just wish I understood why.
------------------------------------------------------------
Re: 100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
Posted: 2008/09/04 11:14 by NJ (Posts: 257 | Registered: 05/05/06), in response to: westside

This worked great on my Windoze XP SP2 11g (11.1.0.6.0) install.
I just wish I understood why.
I didn't try to figure out why; it just works. Anyway, this is a bug that must be fixed by Oracle.
NJ
------------------------------------------------------------
Re: 100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
Posted: 2008/09/22 19:21 by NJ (Posts: 257 | Registered: 05/05/06), in response to: NJ

The problem is finally fixed in Oracle 11g Release 11.1.0.7.0 (patch 6890831), dated 18-SEP-2008.
NJ
------------------------------------------------------------
Re: 100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
Posted: 2008/12/02 11:05 by user538801 (Posts: 12 | Registered: 10/25/06), in response to: NJ

Does anyone know where I can get this release? The Oracle downloads section only has 11.1.0.6.0.
------------------------------------------------------------
Re: 100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
Posted: 2008/12/02 12:50 by NJ (Posts: 257 | Registered: 05/05/06), in response to: user538801

Does anyone know where I can get this release? The Oracle downloads section only has 11.1.0.6.0.
You need a Metalink account to get this release. But you do not need 11.1.0.7.0 to get the problem fixed: just re-download linux_11gR1_database_1013.zip from the OTN site. It is a new file (the old one was silently replaced).
NJ
------------------------------------------------------------
Re: 100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
Posted: 2009/03/23 11:49 by Mark_T (Posts: 21 | Registered: 03/20/07), in response to: NJ

Hello,
A couple of months ago (when our database was at 11.1.0.6), we experienced the same issue. The problem went away after applying the procedure noted in this thread:
Shutdown OEM, login as SYSMAN user and restart the provisioning daemon by executing the two packaged procedures
SYSMAN> execute MGMT_PAF_UTL.STOP_DAEMON
PL/SQL procedure successfully completed.
SYSMAN> execute MGMT_PAF_UTL.START_DAEMON
PL/SQL procedure successfully completed.
Start OEM again and the problem is gone.
Since then we have upgraded our database and OEM to 11.1.0.7 and the issue never came back.
Now, all of a sudden, the problem has resurfaced. CPU usage is at 100% whenever EM DBConsole is running, and it returns to normal only after the service is stopped.
Reapplying the fix has no effect whatsoever.
Our database and OEM are now at version 11.1.0.7, on Windows 2003 x64 SP2.
Any ideas what to do?
Thanks,
Mark T.
Never mind, the problem seems to have gone away today...
Edited by: Mark_T on Mar 24, 2009 9:53 AM