FWIW, the indexes on the E1 tables are horribly, horribly wrong: some tables have no clustered index, and some have no indexes at all, leaving them stored as heaps.
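If you want to see how bad it is in your own instance, a query along these lines will list the heaps (SQL Server here, since that's where heap storage bites):

[ CODE ]
-- Minimal sketch (SQL Server): list tables stored as heaps.
-- A heap is simply a table whose sys.indexes row has index_id = 0,
-- i.e. no clustered index.
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.indexes i ON i.object_id = t.object_id
WHERE i.index_id = 0
ORDER BY s.name, t.name;
[/ CODE ]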
I have asked Oracle to do an index analysis, starting with the Central Objects database, since the new XML spec storage tables are now there. The indexes on the CO tables are horrible, and because the spec storage for auto package discovery and dynamic generation demands higher performance from these tables, seek performance has become critical. The serialized objects tables are there as well, and their indexes are not optimal either.
For maximum web client performance we must ensure that serialized objects tables and specs tables used in dynamic generation are optimized. Having the proper indexes (and keeping them defragmented) is critical.
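As a rough sketch of the defrag side (SQL Server DMVs; F989999 is assumed below to be one of the serialized objects tables — substitute your own names and tune the thresholds):

[ CODE ]
-- Sketch: find fragmented indexes in the current database.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0
  AND ips.avg_fragmentation_in_percent > 10
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Common rule of thumb: REORGANIZE between 10% and 30%, REBUILD above 30%.
ALTER INDEX ALL ON F989999 REORGANIZE;
[/ CODE ]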
[ QUOTE ]
99% of these sorts of errors have a DB problem behind them. The only issue I have right now is with a Java Dynamic Connector coming from a system on the other end of a WAN (don't ask; it's being replaced by EBSS). Very occasionally there's a glitch on the WAN and a CallObject kernel almost instantly zombies.
An especially memorable one was when we implemented 8.97 on Oracle 10 at a client.
The kernels would come up, run a few minutes, then zombie; new ones would start, run a few minutes, then zombie, and so on. Then after some time it got progressively worse.
It turned out that the F986110 had a bad index on it, which hadn't been noticed before because on the test system a full table scan was OK. But the week before, a developer, in their infinite wisdom, wrote a UBE interconnect which called another UBE on a PER LINE basis. This was fine on the instance it was developed on, but on the real test system it resulted in 80,000 UBEs being submitted to the job queue all at once. This caused the full scan to take longer than the timeout period on the queue kernels, which then bombed and restarted but crucially left the query running in the background; after about 15 minutes the DB became so busy that everything else cascaded around it.
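The kind of index that avoids that scan looks something like the sketch below — the JC* columns are a guess at what the queue kernels filter on (status, queue, host), so check the actual query's predicates before creating anything:

[ CODE ]
-- Hedged sketch: cover the queue kernels' polling predicates on F986110
-- so the status lookup seeks instead of scanning. Column list is a guess;
-- match it to the real query first.
CREATE INDEX F986110_QKRNL
ON F986110 (JCJOBSTS, JCJOBQUE, JCEXEHOST);
[/ CODE ]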
These issues are a complete nightmare to find. JAS makes it easier, since if the issue was caused by a JAS session it will usually log it. Alternatively, UBEs and BSFN calls from external systems can also cause it, and those are a LOT harder. On the last two sites I've had decent DBAs, and they've been able to tell me about long-running queries, locks, etc. on an alert basis. Finding the query is pretty much 90% of the battle; once you have a specific problem you can add an index, a hint, etc.
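For reference, the sort of thing those DBAs alert on — sketched here in SQL Server DMV flavor (on Oracle you'd look at v$session instead), with an arbitrary 30-second threshold:

[ CODE ]
-- Sketch: surface requests that have been running longer than 30 seconds,
-- along with their SQL text and any session blocking them.
SELECT r.session_id,
       r.total_elapsed_time / 1000 AS elapsed_seconds,
       r.blocking_session_id,
       t.text AS sql_text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.total_elapsed_time > 30000;  -- milliseconds
[/ CODE ]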
Literally last week, the AB purge was taking 93 seconds per lookup on the F4211. A quick addition of an index took the lookup rate in proof mode to 50 per second...
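An index along these lines is the sort of thing that does it — SDAN8 (sold-to address number) is my assumption of the lookup column, so verify against what the purge actually selects on:

[ CODE ]
-- Hedged sketch: let the AB purge's existence check on F4211 seek by
-- address number instead of scanning. SDAN8 = sold-to address number;
-- the purge may also check ship-to (SDSHAN), so verify the real predicates.
CREATE INDEX F4211_ABPURGE
ON F4211 (SDAN8);
[/ CODE ]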
[/ QUOTE ]