E9.0 Ideas needed: Best way to achieve parallel processing (multithreaded processing)

JohnDanter2

VIP Member
Hi all

We have a process in our business that adds LPN TMS data to SHPNs as they reach a certain NXTR. A job then picks the SHPNs up, attaches all the F4216/F4843 records (plus a few other things we need) and advances the SO lines (SHPN).

I was asked to speed this process up, so I moved all the code to a subsystem (SBS).
My idea was to have one version of the SBS per DCTO to help with splitting the load out. This kind of works fine until one DCTO type overloads the queue and the SBS requests for that DCTO wait in the queue behind it. There's no way around that while the SBS itself is doing the work.

So I've tweaked the SBS a second time: there is now just ONE SBS version, which I sped up (reduced the fields in the BSVW, no output produced). It wakes up as before, but now it just calls a clone UBE of itself, passes in the SHPN, and ends, ready to pick up another request. Both the SBS and the UBE are configured to run on a new queue, TMSSUB, which has 70 threads (too many?).

In this particular test with 40 dropped orders, I can see 40 F986113 records get written very quickly, they then clear out really quickly too and I now have 40 jobs running in WSJ.
But it appears that only about 3-4 UBEs ever actually overlap. I was expecting all 40 to fly off and start at about the same time, but they don't.
A lot of the 40 sit at status S whilst only around 3-4 are ever at P.

So there must be some setting somewhere (beyond me) that's preventing more than 3-4 UBEs from ever running on TMSSUB at the same time. CPU? No idea...
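What you're describing matches a per-queue concurrency cap: all 40 requests are accepted (status S), but the queue only moves a fixed number into P at once. This isn't JDE code, just an illustrative Python sketch of that behaviour, with a hypothetical MAX_ACTIVE standing in for the queue's max-active-jobs setting:

```python
import threading
import time

MAX_ACTIVE = 4                       # hypothetical per-queue "max active jobs" cap
gate = threading.Semaphore(MAX_ACTIVE)

peak = 0                             # highest number of jobs seen at P simultaneously
active = 0
lock = threading.Lock()

def ube(shpn):
    """One submitted job: waits at S until the queue lets it run at P."""
    global peak, active
    with gate:                       # only MAX_ACTIVE jobs get past this at once
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)             # stand-in for the actual job work
        with lock:
            active -= 1

# Submit all 40 "at once" — they all appear immediately, like the F986113 rows.
threads = [threading.Thread(target=ube, args=(n,)) for n in range(40)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)                          # never exceeds MAX_ACTIVE, however many you submit
```

The point of the sketch: submitting 40 jobs instantly says nothing about concurrency; the semaphore (the queue cap) decides how many run at P, and the rest sit at S exactly as you observed.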

So my second idea was to stick with this wake-up-and-end SBS approach but go back to 10 SBS versions, and now also add 10 new queues.
That way, if the system dropped a load of orders, each SBS would pick them up and spawn 3-4 UBEs at a time, and we could in theory get 30 jobs running simultaneously: 3 per queue per SBS.
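The arithmetic behind that idea can be sketched quickly (again just illustrative Python, with made-up numbers matching the post: 10 queues, roughly 3 active jobs each):

```python
# Sketch of the "10 queues x ~3 active jobs" plan: orders are split round-robin
# across queues, and aggregate concurrency is queues * per-queue cap.
QUEUES = 10
PER_QUEUE_CAP = 3                      # assumed max active jobs per queue
orders = list(range(40))               # the 40 dropped SHPNs from the test

assignment = {q: [] for q in range(QUEUES)}
for i, order in enumerate(orders):
    assignment[i % QUEUES].append(order)   # simple round-robin split

per_queue_load = [len(v) for v in assignment.values()]
max_parallel = QUEUES * PER_QUEUE_CAP
print(per_queue_load, max_parallel)    # each queue gets 4 orders; up to 30 run at once
```

Note the trade-off: a round-robin split balances load evenly, whereas splitting by DCTO (the original design) lets one busy DCTO starve its own queue while other queues sit idle.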

Am I going mental?

Is there another way to handle a load like this?

Thanks

John
 
Yeah John ... just because you SUBMIT 40 jobs at once doesn't mean the system will RUN 40 jobs at once.
You need CNC help to do the configuration changes ... if it makes sense to do so with your system. How do you know that your server(s) could handle that many jobs running at once? A CNC consultant may be needed.

An alternative is to think outside the box. Use a non-JDE toolset to do the processing you want: feed the job data into a database table that acts as your queue, and have a queue manager, similar to what you developed in JDE, monitor it and hand off jobs. It's guaranteed to be orders of magnitude faster, because JDE has a tremendous amount of overhead associated with it. As long as the updates and logic needed don't get too complicated to replicate, seriously consider this approach.
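The table-as-queue pattern described above can be sketched like this. This is a minimal Python illustration, not a drop-in solution: the table and column names (job_queue, shpn, status) are made up, sqlite stands in for the real database, and process() is a placeholder for the actual work:

```python
import sqlite3
from concurrent.futures import ThreadPoolExecutor

# The database table IS the queue: one row per shipment, with a status column.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE job_queue (shpn INTEGER PRIMARY KEY, status TEXT)")
db.executemany("INSERT INTO job_queue VALUES (?, 'W')",   # W = waiting
               [(n,) for n in range(40)])

def process(shpn):
    # Placeholder for the real work (attach F4216/F4843 data, advance the lines).
    return shpn

with ThreadPoolExecutor(max_workers=8) as pool:
    # Queue manager: claim every waiting row, then fan the work out to the pool.
    waiting = [row[0] for row in
               db.execute("SELECT shpn FROM job_queue WHERE status = 'W'")]
    db.executemany("UPDATE job_queue SET status = 'P' WHERE shpn = ?",
                   [(s,) for s in waiting])               # P = processing
    futures = [pool.submit(process, s) for s in waiting]
    for f in futures:
        db.execute("UPDATE job_queue SET status = 'D' WHERE shpn = ?",
                   (f.result(),))                         # D = done

done = db.execute(
    "SELECT COUNT(*) FROM job_queue WHERE status = 'D'").fetchone()[0]
print(done)  # → 40
```

Concurrency here is one number (max_workers) instead of a web of SBS versions and queue definitions, which is the main attraction of moving this outside JDE.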
 
What are people's thoughts on one SBS to detect and launch the UBEs, but with different versions (UBEVERS) per DCTO running in individual queues?

And if the limit is the UBE kernel setting "Maximum number of processes", does that apply per queue or per CPU? I'd imagine per queue/kernel?
 
I can't upload my template or my PAR file.

Bit odd that JDEList won't let us do that.
 