Oracle 9.2.0.7 Performance very poor

nectarsoft

Active Member
I am attaching a complete illustration of the case, including the JDE.INI and the Oracle database .ORA file.

Machine Configuration:
Server: Sun Fire V440 running Sun Solaris
CPU: 4
RAM: 8 GB
Hard Disk: 200 GB

JDE Configuration
Release: XE
Service Pack: SP21
Update 7 (XU7)

This server acts as both the Enterprise Server (ES) and the database box.
We have tuned the Solaris server with the latest patches, and everything seems to work fine except when it comes to the database.

Oracle 9.2.0.7 is the database.
I am attaching a screenshot of the Oracle memory tuning.
The strange thing is that when we set the buffer size to 2 GB, the value is displayed as a negative number, although the .ora file shows the correct 2 GB value.
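A likely explanation, though only an assumption on my part: 2 GB expressed in bytes is just past the largest signed 32-bit integer, so any display field that holds the value as a signed 32-bit number wraps it to a negative figure, while the .ora file keeps the real value. The arithmetic, checked from the shell (bc is used to avoid 32-bit shell arithmetic limits):

echo '2 * 1024 * 1024 * 1024' | bc    # 2 GB in bytes = 2147483648
echo '2^31 - 1' | bc                  # largest signed 32-bit value = 2147483647
# 2147483648 does not fit in a signed 32-bit field and wraps to -2147483648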


Time factor:
1. Installing the Enterprise Server CD took 1 day per pathcode, hence 4 days to complete the Enterprise Server CD part.

2. Full package - the client full package completes in 3 hours, but the server full package took around 36 hours. Only the gbrspec paktotam process was still running; all other processes finished within 3 hours on the server as well.


3. Now that we are applying Update 7 (XU7), it is again taking 12 hours per pathcode.

All of this is really disturbing, as my server configuration is quite good.
Oracle utilizes only 2 to 4% of the CPU.

On Windows the same process took almost a day.
Is there any tuning for Oracle Server on Solaris, or any suggestions to improve Oracle performance?
 

Attachments

  • 102783-Oracle performace very poor.doc
    43.5 KB · Views: 177
Hello,

I wouldn't think the paktotam issue is a database problem. If the client is building fine in 3 hours, then your database seems to be working great. The paktotam process actually takes the .pak files and unpacks them, so basically all the DB-intensive work is done during the client-side build. There wasn't a backup going on during the build, by any chance?
 
You state that installing the Enterprise Server CD took 1 day per pathcode and that the server portion of the package build lasted 36 hours. Neither of those things has anything to do with your database. One could deduce from your problem description that you have some serious I/O problems with your server's disk subsystem. Although that may not be the actual root cause, you should examine SAR data for the period of time you were building the package on the Enterprise Server. %WIO is something you should pay close attention to, and for that matter %USR and %SYSTEM as well. If %IDLE is between 0 and 5%, chances are you have a bottleneck with disk or CPU. You can also monitor in real time using the vmstat and iostat utilities.
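For example, a minimal Solaris sketch of the checks described above (interval and count are arbitrary; run it while the build is active):

sar -u 30 10    # CPU utilization every 30 seconds, 10 samples: %usr %sys %wio %idle
# a sustained %wio well above %usr/%sys, or %idle pinned near 0 while %usr stays low,
# points at a disk bottleneck rather than a CPU one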
 
I am a team member of nectarsoft. In his absence, I am presenting the case.


I submitted my full package again yesterday night and it has still not completed. It has been running for around 18 hours.

From P9622, I can see that everything has completed except GBRSPEC and RDASPEC.

I am attaching the current output of vmstat, iostat, and prstat.


bash-2.03$ vmstat
procs memory page disk faults cpu
r b w swap free re mf pi po fr de sr s0 s1 s2 -- in sy cs us sy id
0 3 0 18909752 3365848 19 232 261 0 0 0 0 0 298 9 0 568 1594 1068 4 1 95

bash-2.03$ iostat

tty sd0 sd1 sd2 nfs1 cpu
tin tout kps tps serv kps tps serv kps tps serv kps tps serv us sy wt id
0 328 0 0 0 1418 298 16 57 9 3 0 0 0 4 1 49 46
bash-2.03$




CPU Usage - prstat -s cpu
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
618 jdeb7333 37M 11M sleep 60 0 0:03.49 0.2% paktotam/3
624 jdeb7333 37M 11M sleep 60 0 0:03.28 0.2% paktotam/3
 
Vivek,

These stats are cumulative since the server startup and do not reflect the current performance directly.

Please, run

vmstat 30 30

and

iostat 30 30

instead, while the build is still running. (The first sample in each run is still cumulative since boot, so look at the samples after it.) There will be more data that way...
 
Alex,

Find attached:
VMSTAT
procs memory page disk faults cpu
r b w swap free re mf pi po fr de sr s0 s1 s2 -- in sy cs us sy id
0 0 0 18934184 3790824 5 31 129 0 0 0 0 0 108 6 0 398 527 711 0 1 99
0 1 0 18838128 3502752 0 0 360 0 0 0 0 0 326 5 0 497 825 811 0 1 99
0 1 0 18838112 3502472 0 3 377 0 0 0 0 0 350 5 0 507 737 755 0 0 99
0 1 0 18837944 3502016 0 3 410 0 0 0 0 0 380 4 0 535 799 804 0 0 99
0 1 0 18837856 3501624 0 0 370 0 0 0 0 0 342 5 0 499 738 751 0 0 99
0 1 0 18837872 3501336 0 0 380 0 0 0 0 0 351 5 0 507 746 760 0 0 99
0 1 0 18837864 3501072 0 0 404 0 0 0 0 0 375 5 0 530 773 791 0 1 99
0 1 0 18837864 3500840 0 0 406 0 0 0 0 0 375 5 0 531 784 792 0 1 99
0 1 0 18837856 3500624 0 0 407 0 0 0 0 0 375 5 0 531 789 800 0 1 99
0 1 0 18837864 3500336 0 0 378 0 0 0 0 0 352 5 0 507 730 755 0 1 99
0 1 0 18837856 3500048 0 0 389 0 0 0 0 0 358 5 0 513 767 773 0 0 99
0 1 0 18837856 3499776 0 0 406 0 0 0 0 0 374 5 0 530 784 793 0 0 99
0 0 0 18837856 3499512 0 0 367 0 0 0 0 0 315 5 0 473 730 749 0 0 99
0 0 0 18837856 3499256 0 0 390 0 0 0 0 0 341 5 0 497 761 773 0 0 99
0 0 0 18837856 3498904 0 0 383 0 0 0 0 0 330 5 0 486 758 768 0 0 99
0 0 0 18837856 3498656 0 0 366 0 0 0 0 0 337 5 0 496 714 742 0 1 99
0 1 0 18837856 3498400 0 0 395 0 0 0 0 0 366 5 0 522 774 783 0 1 99
0 1 0 18837848 3498144 0 0 404 0 0 0 0 0 377 5 0 533 779 791 0 1 99
0 1 0 18837848 3497832 0 0 399 0 0 0 0 0 367 5 0 523 770 787 0 1 99
procs memory page disk faults cpu
r b w swap free re mf pi po fr de sr s0 s1 s2 -- in sy cs us sy id
0 1 0 18837856 3497360 0 0 377 0 0 0 0 0 350 5 0 505 740 754 0 1 99
0 1 0 18837848 3497072 0 0 371 0 0 0 0 0 341 4 0 499 730 751 0 0 99
0 0 0 18837848 3496760 0 0 398 0 0 0 0 0 371 5 0 527 758 782 0 0 99
0 1 0 18837848 3496256 0 0 398 0 0 0 0 0 367 5 0 523 783 789 0 1 99
0 0 0 18837848 3495808 0 2 406 0 0 0 0 0 376 5 0 535 807 810 0 1 99
0 0 0 18837848 3495312 0 0 369 0 0 0 0 0 340 5 0 496 728 745 0 0 99
0 1 0 18837856 3494744 0 0 362 0 0 0 0 0 336 5 0 491 718 732 0 0 100
0 1 0 18837848 3494184 0 0 397 0 0 0 0 0 346 5 0 501 781 785 0 1 99
0 1 0 18837848 3493832 0 0 354 0 0 0 0 0 308 5 0 464 704 720 0 0 99
0 0 0 18837848 3493552 0 0 378 0 0 0 0 0 326 5 0 482 743 758 0 0 99
0 1 0 18837856 3493144 0 0 363 0 0 0 0 0 314 5 0 470 717 735 0 1 99

IOSTAT,
iostat 30 30
tty sd0 sd1 sd2 nfs1 cpu
tin tout kps tps serv kps tps serv kps tps serv kps tps serv us sy wt id
0 312 0 0 0 666 112 4 43 6 3 0 0 0 0 1 10 89
0 11 0 0 0 1840 380 3 17 5 2 0 0 0 0 0 25 75
0 5 0 0 0 1647 339 3 17 5 2 0 0 0 0 0 25 75
0 5 0 0 0 1688 350 3 17 5 2 0 0 0 0 1 25 75
0 5 0 0 0 1838 378 3 17 5 2 0 0 0 0 1 25 75
0 5 0 0 0 1819 379 3 17 5 3 0 0 0 0 0 25 75
0 5 0 0 0 1796 369 3 17 5 2 0 0 0 0 1 25 75
0 5 0 0 0 1691 349 3 17 5 2 0 0 0 0 1 25 74
0 5 0 0 0 1772 365 3 17 5 3 0 0 0 0 0 25 75
0 5 0 0 0 1716 348 3 17 5 2 0 0 0 0 0 25 75
0 5 0 0 0 1682 324 3 17 5 2 0 0 0 0 0 25 75
0 5 0 0 0 1788 347 3 17 5 2 0 0 0 0 0 25 75
0 5 0 0 0 1633 321 3 17 5 2 0 0 0 0 0 25 75
0 5 0 0 0 1713 352 3 17 5 2 0 0 0 0 0 25 75
0 5 0 0 0 1781 371 3 17 5 3 0 0 0 0 1 25 75
0 5 0 0 0 1776 366 3 17 5 2 0 0 0 0 0 25 75
0 5 0 0 0 1830 375 3 17 5 2 0 0 0 0 1 25 75
0 11 0 0 0 1626 336 3 17 5 2 0 0 0 0 1 25 75
0 5 0 0 0 1704 352 3 16 5 2 0 0 0 0 0 25 75
tty sd0 sd1 sd2 nfs1 cpu
tin tout kps tps serv kps tps serv kps tps serv kps tps serv us sy wt id
0 5 0 0 0 1820 375 3 17 5 2 0 0 0 0 1 25 75
0 11 0 0 0 1793 365 3 17 5 2 0 0 0 0 1 25 75
0 27 0 0 0 1797 371 3 17 5 2 0 0 0 0 1 25 75
0 5 0 0 0 1625 330 4 17 5 2 0 0 0 0 0 25 75
0 5 0 0 0 1697 343 3 17 5 2 0 0 0 0 0 25 75
0 5 0 0 0 1741 333 3 17 5 2 0 0 0 0 1 25 75
0 5 0 0 0 1579 309 4 17 5 3 0 0 0 0 0 25 75
0 5 0 0 0 1703 328 3 17 5 3 0 0 0 0 0 25 75
0 6 0 0 0 1615 310 4 17 5 2 0 0 0 0 0 25 75
0 3 0 0 0 1644 321 3 17 5 2 0 0 0 0 1 25 75
0 3 0 0 0 1631 312 4 17 5 3 0 0 0 0 0 25 75

--Ravi
 
As Charles Anderson alluded to, your disk configuration may be at fault.
Can you describe it?

Some other general questions/statements:

1. Have the database schemas been analyzed? (See the sketch after this list.)
2. I'm not familiar with Sun's architecture, but under HP-UX we needed to tune some kernel parameters appropriately. Have you checked Oracle's websites for suggestions here?
3. Is the system paging at all? Given the information you gave us, it shouldn't be unless something is configured wrong.
4. The size, number, and placement of the redo logs would be of interest.
5. What does the alert log say?
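For point 1, a minimal sketch of analyzing a schema with DBMS_STATS; the schema name PRODDTA is only an example here, so substitute your own JDE data and control schemas:

sqlplus "/ as sysdba" <<'EOF'
-- gather optimizer statistics for one schema, including its indexes
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'PRODDTA', cascade => TRUE);
EXIT
EOF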
 
Would you try again with the extended statistics option? This should be "iostat -xtc 30 30" on the Sun architecture, once again during the course of a full build. I have AIX and HP-UX but no Solaris; it should be close enough, though. It looks like disk "sd1" is the one you should focus on, and it would be good to know the %b (percent of time spent servicing requests) and %w (percent of time waiting in the queue) for that particular disk.

One disk shows most of the activity out of the three disks - is this not set up in a solid RAID configuration, or is it set up as JBOD?

If by chance you happen to have SAR data, try this command:

sar -d -o /var/tmp/disk.sar 10 100000 | grep -i sd1

Post the output if you have it; otherwise the iostat -x or iostat -xtc output will have to do.
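For reference, a sketch of the extended-statistics run and the columns to watch on Solaris (assumed syntax; adjust the interval and count as needed):

iostat -xtc 30 30
# for the busy device (sd1 in your case), watch:
#   %b    - percent of time the disk is busy servicing requests
#   %w    - percent of time transactions are waiting in the queue
#   svc_t - average service time in milliseconds
# %b pinned near 100 with a rising svc_t means the device is saturated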
 
Yes, the disk is clearly the bottleneck.

With only 200 GB, I wouldn't be surprised if that's just a single disk or a two- to three-disk array.

It calls for a hardware upgrade - you need more disks (as opposed to more "disk space", which you may already have enough of)...
 
bash-2.03# sar -d -o /var/tmp/disk.sar 10 100000 | grep -i sd1
sd1 99 1.1 386 3719 0.0 3.0
sd1,a 0 0.0 0 0 0.0 0.0
sd1,b 0 0.0 0 0 0.0 0.0
sd1,c 0 0.0 0 0 0.0 0.0
sd1,e 99 1.1 386 3719 0.0 3.0
sd1,f 0 0.0 0 0 0.0 0.0
sd1 99 1.2 375 3555 0.0 3.1
sd1,a 0 0.0 0 0 0.0 0.0
sd1,b 0 0.0 0 0 0.0 0.0
sd1,c 0 0.0 0 0 0.0 0.0
sd1,e 99 1.2 375 3555 0.0 3.1
sd1,f 0 0.0 0 0 0.0 0.0
sd1 99 1.2 402 3871 0.0 2.9
sd1,a 0 0.0 0 0 0.0 0.0
sd1,b 0 0.0 0 0 0.0 0.0
sd1,c 0 0.0 0 0 0.0 0.0
sd1,e 99 1.2 401 3871 0.0 2.9
sd1,f 0 0.0 0 0 0.0 0.0
sd1 99 1.2 382 3593 0.0 3.1
sd1,a 0 0.0 0 1 0.0 4.6
sd1,b 0 0.0 0 0 0.0 0.0
sd1,c 0 0.0 0 0 0.0 0.0
sd1,e 99 1.2 381 3592 0.0 3.1
sd1,f 0 0.0 0 0 0.0 0.0
sd1 99 1.2 359 3383 0.0 3.3
sd1,a 0 0.0 0 0 0.0 0.0
sd1,b 0 0.0 0 0 0.0 0.0
sd1,c 0 0.0 0 0 0.0 0.0
sd1,e 99 1.2 359 3383 0.0 3.3
sd1,f 0 0.0 0 0 0.0 0.0
sd1 99 1.2 341 3225 0.0 3.4
sd1,a 0 0.0 0 0 0.0 0.0
sd1,b 0 0.0 0 0 0.0 0.0
sd1,c 0 0.0 0 0 0.0 0.0
sd1,e 99 1.2 341 3225 0.0 3.4
sd1,f 0 0.0 0 0 0.0 0.0
sd1 99 1.1 292 2843 0.0 3.9
sd1,a 0 0.0 0 0 0.0 0.0
sd1,b 0 0.0 0 0 0.0 0.0
sd1,c 0 0.0 0 0 0.0 0.0
sd1,e 99 1.1 291 2843
 
There you have it - as Alex stated, your disk is the bottleneck. Prior to his post, I wanted a bit more evidence so I could show the actual busy percentage, which is in fact 99%! You should spread the load across multiple disks in a hardware RAID configuration, the more the merrier, backing up what Alex said.
 
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c1t0d0s0 20174761 12708752 7264262 64% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
swap 19011488 16 19011472 1% /var/run
swap 19012400 928 19011472 1% /tmp
/dev/dsk/c1t1d0s0 139180548 47028069 90760674 35% /u03
/dev/dsk/c1t0d0s5 69555418 80025 68779839 1% /u02
/dev/dsk/c1t2d0s0 70592505 15755061 54131519 23% /u01

Thanks, Charles and Alex.

Yes, we could sense what Alex told us and made further efforts along those lines:
we added c1t2d0s0 (72 GB) and moved the /u01/jdedwardsoneworld folders to the newly mounted disk (more space).

Still the same issue. /u03 is used for Oracle; /u02 is empty.
 
I'm not sure that I am interpreting your statement correctly, but just moving from one disk to another isn't what I meant. More disks set up in a RAID 0, 5, or 10 configuration, rather than a single volume/mount point (such as disk "c1t1d0s0" mounted at /u03),
will provide better performance.

It is also a good idea to keep the Oracle disks on their own separate RAID volume and host adapter, if possible, when running the app and DB on the same host.
 
We have IPv6 enabled in Solaris, although JDE uses IPv4. Does this create an issue?
 
After trying all possibilities, we have reformatted the server.
Now we are running the ES on Solaris 9.

This has solved the problem, and the full package completed successfully in 6 hours.

Thanks, List.
 