
R42800 Sales Update Data Selection and Job Performance

SL922

Active Member
We run R42800 with no custom code. The version data selection is: F4211.DOC <> 0, F4211.NXTR = 620, and six DCTOs selected with Equal To.
One run took 35 minutes to process 3,923 records; another took 21 minutes for 2,595 records.
I would like to confirm whether this is a normal processing time and whether the data selection is correct and efficient.
I assumed a <> (not equal) data selection would force a full table scan, but I found that the standard JDE versions use the same data selection.

Could you please compare with your R42800 to see whether this performance is about right, or whether there is anything I can do to improve job performance?
Thanks in advance for sharing your experience.


E910, TR 9.1.5.10
 

HolderAndrew

Well Known Member
Hi,

There are many documents out there from Oracle that describe how R42800 performance can be improved. Running parallel versions, processing option selection, and data selection techniques are the main ways to improve throughput, as long as each version has unique data selection. For example, try to balance the selection across three versions, each selecting two DCTOs.
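Balancing the six DCTOs across three mutually exclusive versions, as suggested above, can be sketched in a few lines of Python (the DCTO codes here are placeholders, not the poster's actual document types):

```python
# Hypothetical: split six document types across three parallel R42800
# versions, two DCTOs each, so the data selections never overlap.
doc_types = ["S1", "S2", "S3", "S4", "S5", "S6"]  # placeholder DCTO codes

# Chunk the list into groups of two, one group per version.
versions = [doc_types[i:i + 2] for i in range(0, len(doc_types), 2)]

for n, dctos in enumerate(versions, start=1):
    print(f"Version {n}: F4211.DCTO in {dctos}")
```

In practice the split should be balanced by record volume per DCTO, not just by count, so that each parallel version finishes in roughly the same time.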

Even though you are running standard code, I would still analyse debug logs, and do this for a couple of orders both local and server side, just to get a feel for where the main time gaps are (i.e. in which BSFNs), as this might point you to the processing options that control them.

R42800 is an expensive process that does a lot of work, so most companies run as many versions as needed to get the volumes through within the required window.


Hope it helps

Andrew
 

jdedwardsuser

Active Member
You can still check, but it runs in our system for around 10-15 minutes, and we have versions by company/warehouse that run at different times. We have different versions for some document types, and separate ones for transfers so they don't affect AR. I don't think your times look too bad. It will be quicker if you also do batch ship confirm separately: if sales update has to ship confirm and relieve on-hand, it may take longer because it is doing batch ship confirm for you as well.
 

Larry_Jones

Legendary Poster
How often do you run this?
 

ice_cube210

VIP Member
There are obviously factors such as the underlying hardware, the size of the tables, and the number of indexes on them that will affect R42800 performance when comparing with other JDE systems. As others have suggested, you could capture debug logs and analyse them through Performance Workbench to see where the time is being spent.

As Andrew suggested, parallelism is the easy answer (until you hit a bottleneck on CPU, IO, etc.). I have seen with R42800, and with any update job in E1, that the more records it processes, the lower its throughput gets. If we look at records processed per minute as a metric in your own examples: one execution processed 3,923 records in 35 minutes (3923/35 = 112 records per minute), and the second processed 2,595 records in 21 minutes (2595/21 = 123 records per minute). There is a general assumption here that the two data sets were identical in how R42800 would process them, but you get the point. I have tested this with data sets where I first let the job run over the whole data set, restored the data, then split the data selection into mutually exclusive sets and ran the versions in parallel; each piece ran quicker, so the overall time was shorter.
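The records-per-minute arithmetic above can be checked with a few lines of Python, using the two run times quoted in the original post:

```python
# Throughput (records per minute) for the two R42800 runs quoted above.
runs = [(3923, 35), (2595, 21)]  # (records processed, minutes elapsed)

rates = [records / minutes for records, minutes in runs]
for (records, minutes), rate in zip(runs, rates):
    print(f"{records} records in {minutes} min -> {int(rate)} records/min")
# -> 112 records/min and 123 records/min
```

The longer run does show the slightly lower throughput (112 vs 123 records/min), which is consistent with the observation that larger batches tend to process fewer records per minute.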
 