GL Upload from Excel

carterjde

Member
We are looking for solutions/workarounds for our month-end uploads of 5k+ line journal entries to E1. We have tried the built-in options, Z-tables and AIS/Orchestrator, and both have some limitations. Appreciate any input on this.
 
Hi, and welcome to JDEList!
It's usually a good idea to add your JDE version and platform information to your signature, or at least to the text of your question, as the capabilities vary considerably from the oldest to the most recent versions of JDE. Regardless, my gut instinct whenever a large import into the G/L needs to be made is to use the Z file process.

How have you been trying to load your Z file, with a grid upload or a flat file import? The grid upload has a limit that can be modified - my company has the maximum set at 3,000 rows, for example. If you export Excel into a CSV or other flat file format, a table conversion works well for loading the Z file; just be wary of embedded punctuation in description or other string columns (especially the same character being used as the field delimiter).
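If it helps to see the delimiter issue concretely, here is a small Python sketch (nothing JDE-specific; the file name and account values are made up) showing how proper CSV quoting keeps an embedded comma in a description from shifting the columns:

import csv

# Hypothetical journal entry lines exported from Excel; the second
# description contains an embedded comma, the classic troublemaker.
rows = [
    ["00001", "1.8605", "Monthly accrual", "1250.00"],
    ["00001", "1.8605", "Freight, month end", "-1250.00"],
]

# csv.writer wraps any field containing the delimiter in quotes, so a
# table conversion or parser that honors quoting keeps the columns aligned.
with open("je_upload.csv", "w", newline="") as f:
    csv.writer(f, quoting=csv.QUOTE_MINIMAL).writerows(rows)

# Reading it back, "Freight, month end" stays a single field.
with open("je_upload.csv", newline="") as f:
    for line_no, fields in enumerate(csv.reader(f), start=1):
        assert len(fields) == 4, f"line {line_no} split into {len(fields)} fields"

If the load on the E1 side splits naively on commas, the safer alternative is a delimiter that can never appear in the data, such as a pipe.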
 
I agree with Kim, for a large amount of JEs like you are talking about I would think the Z-Tables would be your best option. What limitation are you running into with Z-tables?
 
Thank you for your responses Kim and Brian!

We are on E1 9.2, Tools Release 9.2.3, Windows servers using Oracle database.

We have two use cases: users uploading data manually and an automated process. In both cases, the Z-table process is cumbersome for us: it is a disconnected process with multiple steps, and error handling goes through the Work Center. An upload contains multiple journal entries, and cross-referencing the messages to the lines in the file can take time.
 
You mentioned Excel in the title of this thread. Are both the user and automated uploads in Excel or a different format?
 
One way to handle the multi-step Z-file process is to create a custom program that reads the external data, parses and scrutinizes it, and then populates the Z-file table. It then calls the Z-file processor (and version), which in turn creates the journal entries (F0911). In the initial stages of developing the custom program you will need to test out all possible scenarios that may cause errors, and the format of the source data must always stay the same.

You can also have the journal entry batch program run automatically in the same job stream, but that depends on user requirements. Most of my past clients preferred to be able to review, approve, and post just as they do with manually entered journal entries.

The custom program should have an error listing just in case. But remember, if the format of the source never changes and you have rigorously tested out all the scenarios, there should be no errors; if there are, keep working at it. There should also be tight controls to make sure the source data stays in the same format. My last client simply chose an option on an E1 page and voilà, thousands of journal entries appeared a couple of minutes later!
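Not E1 code, but a rough Python sketch of the shape of that flow (the validations, field layout, and the two placeholder functions are illustrative only; in practice the target is the F0911Z1 Z table and its batch processor):

import csv
import sys
from decimal import Decimal, InvalidOperation

def insert_into_z_table(rows):
    print(f"staged {len(rows)} rows")          # placeholder for the Z-table insert

def submit_z_file_processor():
    print("z-file processor submitted")        # placeholder for launching the UBE

def scrutinize(line_no, fields):
    """Return an error message for one CSV line, or None if it is clean."""
    if len(fields) != 4:
        return f"line {line_no}: expected 4 fields, found {len(fields)}"
    try:
        Decimal(fields[3])
    except InvalidOperation:
        return f"line {line_no}: amount '{fields[3]}' is not numeric"
    return None

def load_journal_entries(csv_path):
    staged, errors = [], []
    with open(csv_path, newline="") as f:
        for line_no, fields in enumerate(csv.reader(f), start=1):
            err = scrutinize(line_no, fields)
            if err:
                errors.append(err)
            else:
                staged.append(fields)
    if errors:                                  # error listing, keyed to file lines
        print("\n".join(errors), file=sys.stderr)
        return False
    insert_into_z_table(staged)
    submit_z_file_processor()
    return True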
 
Thanks for the detailed explanation. I will look into this. At your last client that implemented the E1 page, where does the user upload the file to? In a shared folder that is picked up by the program?
 
The external CSV is picked up in a shared but secured folder.
 
The user upload is in Excel. The automated one is in CSV.

If you are open to a third-party solution, look at LynX Business Integrator and its integrated components LynX Office and LynX File Exchanger, which let you upload data to E1 directly from Excel and from flat files, respectively. PM me directly or submit a post to our forum if you have any questions about this.

https://www.aellius.com/lynx-business-integrator/
 
Hello carterjde,

I am helping my client import GL and AP data from CSV files (from Concur). Can you tell me what limitations you saw with Orchestrator and AIS? The client is on release 9.2.
 
Performance: Smaller journal entries are not a problem. When we got over 1000 lines, it slowed down significantly.

Quirks: The AIS form service requests are like a remote control to the user interface (UI), so you need to pay attention to how the UI responds to data. Certain fields are visible/editable under certain conditions, so you need to issue multiple requests to get the state of the form before punching the data in. If one of the JEs fails, you can't just continue with the next JE. The data has to be "reset" to start over. In AP, if there is an error in the pay item, you can't process the next voucher right away. The header is locked. You have to cancel and start over, and the next voucher will be in a different batch.
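For illustration only, the "remote control" pattern looks roughly like this in Python; the endpoint path, form name, payload keys, and error fields are assumptions that vary by tools release, so treat it as a sketch rather than the actual AIS contract:

import requests

AIS_BASE = "https://ais.example.com/jderest/v2"   # hypothetical AIS server

def post_journal_entry(token, form_actions):
    """Drive the JE entry form through AIS and stop if the form reports errors.
    token is assumed to come from an earlier tokenrequest call."""
    payload = {
        "token": token,
        "formName": "P0911_W0911A",        # illustrative; verify on your release
        "version": "ZJDE0001",
        "formActions": form_actions,       # e.g. SetControlValue / DoAction items
    }
    resp = requests.post(f"{AIS_BASE}/formservice", json=payload, timeout=120)
    resp.raise_for_status()
    body = resp.json()
    # A 200 status alone does not mean the JE was accepted: the response body
    # has to be inspected for form-level errors before sending the next JE.
    errors = body.get("sysErrors") or []
    if errors:
        # As described above, a failed JE leaves the form in a state that has
        # to be cancelled/reset before the next JE can be attempted.
        raise RuntimeError(f"JE rejected, reset required: {errors}")
    return body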

Unit of work: An external file (like the one you have from Concur) should be treated as one unit of work, so ideally, partial uploads should not be allowed.
There are some challenges with error handling as well, especially when you want to correlate an error with a line in the file.
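One way to picture the unit-of-work point: stage everything inside a single transaction so that either every line of the file lands or none of it does. A toy sketch with a local SQLite staging table (the real target would be the Z table on Oracle, but the pattern is the same):

import sqlite3

def stage_all_or_nothing(conn, parsed_lines):
    # "with conn" opens a transaction that commits only if every insert
    # succeeds; any failure rolls the whole file back, so there is never a
    # partial upload to untangle.
    try:
        with conn:
            conn.executemany(
                "INSERT INTO staging (src_line, account, amount) VALUES (?, ?, ?)",
                parsed_lines,
            )
    except sqlite3.Error as exc:
        raise RuntimeError(f"upload rejected, nothing staged: {exc}") from exc

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (src_line INTEGER, account TEXT, amount REAL)")
stage_all_or_nothing(conn, [(1, "1.8605", 1250.0), (2, "1.8605", -1250.0)])

Carrying the source line number on every staged row is also what makes it practical to tie an error back to the exact line in the file.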
 
Thanks for the detailed response. We came across some of these during our analysis.
 
I wrote a UBE that picks up a CSV from a mapped network folder and parses the data into a worktable.
Then you go from there.


// The first call brings the file pointer back into the UBE for future reads
00012 | -FetchNextLineFromFlatFile(B76B0220.FetchNextLineFromFlatFile)
| VA rpt_FullFilePath_PTH [PTH] -> szFlatFileName [PTH]
| VA rpt_GenericLongFilePtr_GENLNG [GENLNG] <> idFilePtr [GENLNG]
| VA rpt_F574211AGTSText01_Y57TXT1 [Y57TXT1] <- szFlatFileRecord [APTA]
| "100" -> mnRecordLength [RECL]
| VA rpt_cFileReadError_EV01 [EV01] <- cErrorCode [EV01]


// Loop through the CSV file until the last row has been read
00015 | -While VA rpt_cFileReadError_EV01 [EV01] is not equal to "1"

// Repeat for each field in the record:
Do Custom Section(RS Get Next Field from CSV Record)
VA rpt_Field_To_Create_Workfile = rtrim(ltrim([VA rpt_szDTA1_FieldValue [DTA1]], <Blank>), <Blank>)

// Insert into the target worktable when all fields have been parsed

// Pass in the CSV file pointer; no need to bring it back on subsequent reads
00162 | | -FetchNextLineFromFlatFile(B76B0220.FetchNextLineFromFlatFile)
| | VA rpt_FullFilePath_PTH [PTH] -> szFlatFileName [PTH]
| | VA rpt_GenericLongFilePtr_GENLNG [GENLNG] -> idFilePtr [GENLNG]
| | VA rpt_F574211AGTSText01_Y57TXT1 [Y57TXT1] <- szFlatFileRecord [APTA]
| | "100" -> mnRecordLength [RECL]
| | VA rpt_cFileReadError_EV01 [EV01] <- cErrorCode [EV01]
00163 | End While



Section: Get Next Field from CSV Record [3]
Event: Do Section
Event GUID: 71f25299-1a9a-4657-abf7-ed3e618de000
00001 //
00002 // This code keeps a running count of where it is in the parsed data
00003 // and chops it off field by field as it goes along. This way it is
00004 // only ever processing the first field in the remaining data
00005 //
00006 VA rpt_szDTA1_FieldValue [DTA1] = <NULL>
00007 VA sec_mnMATH01_CharPosition [MATH01] = <NULL>
00008 //
00009 -FindCharPositionInString(B7500150.FindCharPositionInString)
VA rpt_F574211AGTSText01_Y57TXT1 [Y57TXT1] -> szString [BDS4]
"," -> cCharToFind [EV01]
"0" -> mnStartingPosition [MATH80]
VA sec_mnMATH01_CharPosition [MATH01] <- mnPositionFound [MATH80]
00010 //
00011 -If VA sec_mnMATH01_CharPosition [MATH01] is less than "0"
00012 | VA rpt_szDTA1_FieldValue [DTA1] = VA rpt_F574211AGTSText01_Y57TXT1 [Y57TXT1]
00013 -Else
00014 | VA rpt_szDTA1_FieldValue [DTA1] = substr([VA rpt_F574211AGTSText01_Y57TXT1 [Y57TXT1]], "0", [VA sec_mnMATH01_CharPosition [MATH01]])
00015 | //
00016 | VA sec_SubstStart_INT01 [INT01] = [VA sec_mnMATH01_CharPosition [MATH01]]+1
00017 | VA sec_SubstrFor_INT01 [INT01] = length([VA rpt_F574211AGTSText01_Y57TXT1 [Y57TXT1]])-[VA sec_mnMATH01_CharPosition [MATH01]]
00018 | //
00019 | // Substr (DataField,StartPos,ForLength)
00020 | VA rpt_F574211AGTSText01_Y57TXT1 [Y57TXT1] = substr([VA rpt_F574211AGTSText01_Y57TXT1 [Y57TXT1]], [VA sec_SubstStart_INT01 [INT01]], [VA sec_SubstrFor_INT01 [INT01]])
00021 | //
00022 End If
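
For anyone reading along without the ER handy, the same chop-off-the-first-field idea looks like this in Python (a sketch only; like the ER above, it does not handle quoted fields containing the delimiter):

def next_field(record):
    """Return (first_field, remainder) of a comma-delimited record, mirroring
    the ER above: find the delimiter, take everything before it, and keep
    only what is left for the next pass."""
    pos = record.find(",")
    if pos < 0:                      # no delimiter left, so this is the last field
        return record.strip(), ""
    return record[:pos].strip(), record[pos + 1:]

record = "00001,1.8605,Monthly accrual,1250.00"
fields = []
while record:
    field, record = next_field(record)
    fields.append(field)
print(fields)    # ['00001', '1.8605', 'Monthly accrual', '1250.00']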

I can send you the whole UBE if you like?
 
Thanks John. My client has similar UBEs for other uploads. We are looking for a more real-time solution with better error handling, like what carterjde described earlier in this thread.
 