
F986110 Performance Impact


Active Member


I have been perplexed for some time by a problem with the F986110 table and
its related job queue Oracle processes. I have written about this problem
in the past and even offered solutions. However, upon further investigation
I have found other difficulties. I'm interested in knowing whether this
problem has been fixed in XE, or whether you have come to a better
resolution than I have.

Each job queue repeatedly scans the F986110 table for new job submissions.
A new job is designated by its status. After the job is completed, the
information is retained in the same table but the status is changed. Over
time this table will grow very large unless you implement some sort of
cleanup job. However, even deleting data from the table will not actually
shrink it, because the high water mark will not be reduced. The effect is
that any operation needing to scan the entire table for data will always
scan up to the high water mark (most of the SQL on this table will execute
full table scans regardless of indexes, because the WHERE clauses are not
selective).
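To illustrate the point above: a DELETE frees rows but leaves the high water mark where it was, so full scans still read every allocated block. A rough sketch (the owner and status column names here are assumptions, not taken from any particular environment):

```sql
-- DELETE removes rows but does NOT lower the high water mark;
-- full table scans still read up to the old HWM afterwards.
DELETE FROM proddta.f986110 WHERE jcjobsts = 'D';   -- owner/column assumed

-- Lowering the HWM requires rebuilding the segment, for example:
TRUNCATE TABLE proddta.f986110;   -- resets the HWM, but discards ALL rows
ALTER TABLE proddta.f986110 MOVE; -- 8i+: rebuilds the segment compactly
                                  -- (indexes must be rebuilt afterwards)
```

TRUNCATE is only viable during a quiet window when no job history needs to be kept; MOVE preserves the rows but invalidates the table's indexes until they are rebuilt.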

My initial solution was to cache the table (ALTER TABLE n CACHE). This
stopped the application from going to disk all the time for the data, but
it has introduced a new dilemma. Since the table is now loaded into memory
almost constantly, the CPU has no trouble reading the blocks in memory very
quickly. However, with 5+ processes touching this table at all times,
getting a consistent view of the data requires a tremendously inflated
number of buffer gets on this table. The CPU says, "hey, no trouble, it's
all right here in memory, let me get it 100 times!" This results in CPU use
of about 10% per process as it busily reads all the data over and over
again. It also results in a very healthy-looking buffer cache hit ratio,
but this is not a good way to obtain one.
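If you want to see this effect for yourself, you can pin the table as described and then watch the logical-read cost of the polling statements. A minimal sketch (the owner name is a placeholder):

```sql
-- Pin the table's blocks at the hot end of the buffer cache.
ALTER TABLE proddta.f986110 CACHE;   -- owner/name assumed

-- Then watch the buffer gets racked up by statements against F986110.
-- High gets_per_exec with near-zero disk reads is the symptom described.
SELECT sql_text,
       executions,
       buffer_gets,
       buffer_gets / GREATEST(executions, 1) AS gets_per_exec
  FROM v$sqlarea
 WHERE sql_text LIKE '%F986110%'
 ORDER BY buffer_gets DESC;
```

The inflated gets_per_exec figure, rather than the cache hit ratio, is the number that reflects the real cost here.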

I would like to know if this is still a problem in XE, and whether you are
seeing it too. Here is what you need to know/look for:

1. How many job queues do you have configured?
2. How many records are in your F986110 table, and what is its current size?
3. What is the average CPU use for each of these processes (use top in NIX)?
4. What is the interval your queues are set up to query the F986110 table?
5. What is the high water mark on the F986110 table?
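Items 2 and 5 above can be answered with a few dictionary queries. A sketch, assuming DBA privileges (the owner name is a placeholder for your data schema):

```sql
-- Item 2: record count and allocated size.
SELECT COUNT(*) FROM proddta.f986110;               -- owner assumed

SELECT blocks, bytes / 1024 / 1024 AS size_mb
  FROM dba_segments
 WHERE segment_name = 'F986110';

-- Item 5: after gathering statistics, BLOCKS shows blocks below the
-- high water mark and EMPTY_BLOCKS shows allocated-but-never-used blocks.
ANALYZE TABLE proddta.f986110 COMPUTE STATISTICS;

SELECT blocks, empty_blocks
  FROM dba_tables
 WHERE table_name = 'F986110';
```

If BLOCKS stays large after a big cleanup delete, you are seeing the stuck high water mark described above.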

Please let me know if you also think this might be a problem. A potential
solution is to move the records to a job archive table as soon as possible
and build a JDE interface to show the records in the archive table. That
way the main table stays very small. In 8i you can configure a KEEP buffer
pool and put the table in it so that it doesn't affect standard buffer
pool hit ratios. In 9i you will be able to configure a completely separate
KEEP buffer pool just for this table.
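For reference, the KEEP pool setup mentioned above looks roughly like this (pool sizes are examples only; size them to your table, and the owner name is a placeholder):

```sql
-- 8i: carve a KEEP pool out of db_block_buffers in init.ora, e.g.
--   buffer_pool_keep = (buffers:2000, lru_latches:1)
-- then assign the table to it:
ALTER TABLE proddta.f986110 STORAGE (BUFFER_POOL KEEP);

-- 9i: the KEEP pool is sized independently and dynamically instead:
--   ALTER SYSTEM SET db_keep_cache_size = 16M;
```

Either way, the table's buffer gets stop polluting the DEFAULT pool statistics, though the gets themselves (and the CPU cost) remain.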





Legendary Poster

We also encountered this problem a while back and spent a fair amount of time trying to analyze and fix it. First off, it is not a JDE problem but an Oracle problem.

There are two things you can do to correct it:

1. Permanent solution: upgrade Oracle to 8i (8.1.6 or higher). The problem totally disappeared when we took this step.

2. Temporary solution / workaround: write a script that analyzes table svmb733.f986110 (analyze table svmb733.f986110 compute statistics;) and submit it via cron every 30 minutes. This temporary solution worked well for around 12 months while we were on Oracle 8.0.5.
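The workaround above amounts to a one-line SQL*Plus script driven by cron. A minimal sketch (the script path and credentials are placeholders):

```sql
-- analyze_f986110.sql: refresh optimizer statistics on the job table.
-- Scheduled from cron every 30 minutes, e.g.:
--   0,30 * * * * sqlplus -s user/pass @/scripts/analyze_f986110.sql
ANALYZE TABLE svmb733.f986110 COMPUTE STATISTICS;
EXIT;
```

Keeping the statistics fresh gives the cost-based optimizer an accurate picture of the table as it grows and shrinks between cleanups.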


Larry Jones
OneWorld B733.1, SP 11.3
HPUX 11, Oracle SE 8.1.6
SandBox: OneWorld XE SP15


Active Member

I have used solution #1 in the past to stop the full table scans and it
worked about 70% of the time; however, there are a number of different
statements being executed against the table with very low selectivity, and
the cost-based optimizer is rightly deciding to do full table scans in most
cases. We are looking at an upgrade, and I will be interested in seeing
what the results are. Thanks for the input. Next week let's talk about
locally managed tablespaces and large objects! The default storage settings
and block usage settings are atrocious. Is anyone out there implementing
custom storage settings? By the way, I mentioned moving our development box
to single-extent-sized tablespaces a few months ago and just want everyone
to know that things worked well. It was a real pain, however.