E9.2 Using Orchestrator for large data mapping

louisphicc

Member
Hello,

I would need some advice on this one.
We have a use case for pricing where the users have a large CSV file containing pricing schedules to map into JDE.
They are currently doing it manually with the P4072 application, navigating between forms and copy-pasting the data.

In Orchestrator, I have managed to create the following pieces of the puzzle:
- a Connection and a Connector to read the CSV file sitting on an FTP server
- Form Requests to navigate through the different forms, add one line of data, and process it
- an Orchestration to link the previous two, with an iteration over each line in the source file.
- another Orchestration that reads the output of the first one and sends an email with the error description (if any) or a success message.

I ran this Orchestration for about 5000 lines and it took nearly an hour to process (without error).
The manual process remains faster, but at least the user can do something else while the Orchestration is running...

I am a little disappointed by how slow the Orchestrator is.
Any ideas to make it faster?
 

lfurino

Member
Hi Louis,

5000 records in an hour is not really that slow if you consider what it is doing. If it only took 1 second to press the OK (save) button and you did that 5000 times (once per record), that alone would take you over 83 minutes. In reality, it probably takes more like 2-3 seconds for JDE to process saving a record when you click the save button. That said, you have a few options:

1. If you are loading multiple schedules for the same item, you could use an array input and load all of the schedules for that item at the same time. Then you only have to "press" the ok button one time for each group of pricing records.

2. You could use a Custom Request and call the business function(s) directly instead of going through the application. That will be faster, but it may add some risk if it skips any of the application's other validations.

3. **SHAMELESS PLUG** - Your users can use the JDExcelerator to upload the records. We have an asynchronous mode (fire and forget) that can call orchestrations and process about 1,200 records per minute. We have a demo video of updating advanced pricing (P4072) which you can see here:
If you would like to see a demo or test it out you can send me an e-mail at [email protected]
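Option 1 above - grouping all schedules for the same item so the OK button is "pressed" once per item rather than once per CSV row - can be sketched as follows. This is a minimal illustration of the grouping step only; the row layout and item/schedule names are made up, and the final loop stands in for whatever form-request call carries the array input:

```python
from collections import defaultdict

# Hypothetical CSV rows: (item number, pricing schedule value).
rows = [
    ("ITEM-A", "SCHED-1"),
    ("ITEM-A", "SCHED-2"),
    ("ITEM-B", "SCHED-1"),
]

# Group all schedules for the same item so the form is submitted
# (the OK button "pressed") once per item instead of once per CSV row.
by_item = defaultdict(list)
for item, schedule in rows:
    by_item[item].append(schedule)

for item, schedules in by_item.items():
    # One form-request call per item, passing the schedules as an array/grid input.
    print(f"{item}: one submit with {len(schedules)} grid rows")
```

With 5000 rows spread over, say, 500 items, this cuts the number of save operations by an order of magnitude.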

Thanks,
Larry
 

louisphicc

Member
Hi Larry,

Thanks for your tips.
I checked the JDEExcelerator video and it seems very interesting.
I did not mention it, but we are actually using a custom P4072 (P554072) for our pricing needs.
It's very important that the Orchestration goes through the same business logic as the app would.

What I was looking for is really something like your first tip.
I will dig into that to begin with.

That being said, I am sure we could have other use cases for the JDEExcelerator.
Our end users work a lot with spreadsheets and do manual/semi-automatic I/O between JDE and Excel.
The JDEExcelerator looks great for shortcutting many of those steps.

Thanks again,
 

shearerj

Reputable Poster
Like the JDEExcelerator solution, you could create a parent/child Orchestration. The parent Orchestration iterates over the records on the FTP site and for each record, it "fires and forgets" the child Orchestration which does the form service request work asynchronously. When doing "fire and forget" it does make error handling, summarizing and other things a bit more challenging.
 

cribeiro

Active Member
Hello!
Have you tried to run the mass update P45550? Maybe you can use it in the Form Request to update all you need at once.
 

jolly

Reputable Poster

Hi, I am also looking at importing a large amount of data from a CSV file. In my case, I am just poking the data straight into a JDE custom table. A subsequent step will invoke a UBE to validate and process this data.

So... my Orchestration starts with an SFTP connector that reads the file and treats it as CSV. The next step is a custom Groovy Database connector that does a simple SQL insert of one data row, using an SQL string and sqlInst.execute(). This step iterates over the previous step.
This works fine for a small CSV file - I tested it with about 10 rows. When I run a large file, say 100,000 rows, it blows up with a Java exception reported in the server logs:

JobQueue.submitSynchOrchestration: Execution Exception in Thread java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC overhead limit exceeded

And the Orchestration output is:

{
"message": "Server returned HTTP response code: 204"
}

So... is there a way to prevent this? Should I be doing something after each iteration to clean-up? Or is this a bad approach? If so how is this sort of bulk load from CSV done?

Thanks
JohnO
 

jolly

Reputable Poster
Increasing the JVM memory from the tiny default resolved the error. However, the orchestration is far, far too slow to be of any practical use for a 100k record file.
 