We are heavily using the QZDASOINIT prestart jobs, which service the SQL calls our business intelligence tool makes against our production data.
There is a way to tune your system so these jobs don't "take over" and slow down other batch work. You should understand which functions are using them before taking these steps.
These prestart jobs can be configured to run at a lower priority than the other batch jobs on the system, which makes them better system citizens.
We changed the QZDASOINIT prestart job entry to an initial number of jobs of 20, a threshold of 11, 10 additional jobs, and a maximum number of uses of 3 (some people believe this should be 1). We then created a custom class with a run priority below that of our other batch jobs: our batch jobs run at priority 30, and this class uses 50. This seemed to help the most.
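As a rough sketch, the commands look something like the following. Assumptions: the library MYLIB and class name QZDACLS are placeholders, the QZDASOINIT prestart job entry lives in the QUSRWRK subsystem (on older releases it may be in QSERVER, so check with DSPSBSD first), and exact parameter syntax can vary by OS/400 release:

```cl
/* Create a custom class at run priority 50, below our batch priority of 30 */
CRTCLS CLS(MYLIB/QZDACLS) RUNPTY(50) +
       TEXT('Lower-priority class for QZDASOINIT prestart jobs')

/* Retune the prestart job entry: 20 initial jobs, threshold of 11,  */
/* 10 additional jobs, a maximum of 3 uses per job, and the new class */
CHGPJE SBSD(QSYS/QUSRWRK) PGM(QSYS/QZDASOINIT) +
       INLJOBS(20) THRESHOLD(11) ADLJOBS(10) MAXUSE(3) +
       CLS((MYLIB/QZDACLS *CALC))
```

The prestart jobs pick up the new class the next time the entry is restarted, so plan the change for a quiet window.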
Note, however, that this is in a World environment. The OneWorld white paper from a previous post suggests almost the opposite of this, and the setup it describes would very nearly take our system down. The difference may come from the high level of other batch and interactive work on our system. Still, we have found that the more uses these jobs are allowed, the less efficient they become, an issue IBM has yet to address.
I hope this helps.