
Table Conversion..!!

vliyer

Member
Hi,
Happy Holidays..!!

Here is a workaround to avoid rewriting the whole mapping and conversion
program whenever the input source file or its location changes.

Create a work file in JDE with a single field (length = input record length)
and use this work file in the Table Conversion program; in other words, you
map the complete input string to one output field.

Then write a new program that uses the work file to populate the actual
tables, doing the same work we used to do in the first table conversion program.

With this setup, if your input location or source ever changes, you only
need to re-map one field in the first program; everything else stays the same.
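The two-stage flow described above can be sketched in Python (the table and field names, offsets, and the rows-as-dicts representation are purely illustrative, not real JDE objects):

```python
# Sketch of the two-stage conversion described above. The work table holds
# each raw input line in a single wide field; only stage one knows where
# the input file lives, so a change of source or server touches one place.

def stage_one(input_path, work_table):
    """First conversion: map each raw input line to one wide field."""
    with open(input_path) as f:
        for line in f:
            # One field holds the entire fixed-length record.
            work_table.append({"RAWLINE": line.rstrip("\n")})

def stage_two(work_table, target_table):
    """Second program: parse the wide field into real columns.
    The field offsets live only here (example offsets below)."""
    for row in work_table:
        raw = row["RAWLINE"]
        target_table.append({
            "DOCO": raw[0:8].strip(),    # document number (illustrative)
            "MCU":  raw[8:20].strip(),   # business unit (illustrative)
            "AMT":  raw[20:35].strip(),  # amount, still a string here
        })
```

If the input moves, only `stage_one` needs a new mapping; `stage_two` and everything downstream are untouched.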

Hope this works for you; it has been working fine for us. Since our dear
CNC guys were changing server names every other day, we came up with this workaround.

Thanks & Regards,

Venkat.
 
Venkat,

I used almost the same method to populate F0911Z1 from a legacy payroll system, although not for the same reason (making it easier to change the input location).

My solution, briefly:

1.) I made a report UBE (not a TC), R5501005, for my main logic.
2.) I made a TC UBE, R5501005TC, to populate the work file from my input text file (fixed field-length format), extracting some base fields like EDTN (EDI Transaction Number) and filling a running-number index field to preserve the original order of the lines.
3.) In my main UBE:
* print a header
* print my processing option values (Proof or Final mode, version of the TC to call, how to handle rows and batches with errors, etc.) and my grouping key values, to make it possible to distinguish the records created by the current run from those created by concurrent runs
* delete the work file records for this run, based on the grouping key fields
* run the TC (passing the grouping key fields, of course) and verify the result
* generate the EDLN (EDI Line Number) field in the work file, restarting it in each EDTN (transaction)
* generate EDBT (Batch Number) in Final mode
* extract and convert (numbers, dates) the values, determine the ANI and AID values (based on MCU, OBJ, SUB), and make some other validations
* report every detected error on the printed output, printing all the information needed to identify the row and the error
* insert the F0911Z1 record in Final mode (inserting rows with errors depends on the requested error-handling method)

After I have processed all input rows:
* delete the work file records for this run, based on the grouping key fields
* if I found error(s) in the input, am running in Final mode, and "delete batch with error" was requested in the processing options, then I delete the records of the currently created batch from F0911Z1.

Finally, I print the identification values of the batch (EDUS, EDBT), statistical values (number of input rows, number of errors, number of rows with errors, number of transactions, number of inserted rows, number of insert failures) and an ending status message (Success, Errors were detected, etc.).
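The line-number step above (generating EDLN, restarting it in each EDTN) comes down to one pass over the ordered work-file rows. A minimal Python sketch, assuming rows arrive already sorted by the running index field (EDTN/EDLN are the field names from the post; the dict representation is illustrative):

```python
def assign_line_numbers(rows):
    """Assign EDLN (EDI Line Number), restarting at 1 for each EDTN
    (EDI Transaction Number). Rows must already be in original input
    order, e.g. via the running-number index field filled by the TC."""
    current_edtn = None
    edln = 0
    for row in rows:
        if row["EDTN"] != current_edtn:
            # New transaction: restart the line counter.
            current_edtn = row["EDTN"]
            edln = 0
        edln += 1
        row["EDLN"] = edln
    return rows
```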

Now you can see there are several other advantages to this method, such as:

1.) You can produce a printed output for documentation purposes: error reporting, statistical values, identification and ending status.
2.) You can implement more complex logic (validation, value determination, line number generation, etc.), because you have many more tools in a "normal" UBE than in a TC.
etc.

Some thoughts about the work tables:

I begin every index of the table with my "grouping fields" (e.g. machine key, User ID, date, time). This makes it possible to distinguish (identify) the records of the current run from the others, which is important when you place your work table in a shared data source.
Generally I create the table in the System data source; I don't like to put it into local Access because I have had bad experiences with it.
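Cleaning up only the current run's rows from a shared work table then amounts to a partial-key match on those leading grouping fields. A rough Python sketch of the logic (in a UBE this would be a Table I/O Delete over a partial key; the field names MKEY/USER/DATE/TIME are illustrative):

```python
def delete_by_grouping_key(work_table, machine, user, date, time):
    """Remove only the rows belonging to this run, identified by the
    leading 'grouping' index fields (machine key, user, date, time).
    Rows from concurrent runs, which carry different keys, survive."""
    key = (machine, user, date, time)
    kept = [r for r in work_table
            if (r["MKEY"], r["USER"], r["DATE"], r["TIME"]) != key]
    deleted = len(work_table) - len(kept)
    work_table[:] = kept    # delete in place
    return deleted
```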

Finally, you can find a thread (Subject: "Clearing work files in UBE's", initiated by "euroboy") where we discussed how to delete work file records based on a partial key formed by the mentioned "grouping" fields.

Hope this helps.

P.S.: I promised a brief description; that didn't quite succeed :))
Zoltán

B7332 SP11, ESU 4116422, Intel NT4, SQL 7 SP1
(working with B7321, B7331, XE too)
 
Zoltán,

This was interesting. We are preparing a Data Conversion from our legacy
system at the moment and one of my biggest frustrations has been the
inability to produce efficient printouts of failed records (the only thing I
found was the MDDebug business function, which stops on the screen for every
message until you press OK; not much good if you have a thousand bad
records).

Can you please explain a little about the interface between R5501005 and
R5501005TC, especially "run the TC (passing the grouping key fields
of course) and verify the result"? Was this done with an interconnect?
Does the main UBE wait for the TC UBE to finish? How do you have them
"talk" to each other?

I also didn't understand how you managed to achieve "report every
detected error on the printed output, printing all the information needed
to identify the row and the error".

Thanks

Wayne Ivory
Information Services
Iluka Resources Limited
B7332, SP10.1, Oracle 8.1.5

PS I probably won't get your response before Christmas as Australia is
generally half a day ahead of the rest of the world (in more ways than one!
;-) ) but I will be checking when I get back. Have a Merry one!
 

jastips

Member
Hi,

I was searching for the answer to this same question a while back. I found one, but I haven't implemented it yet. I'll give you my view; if you like it, try it out.

We have a JDE 'C' business function, B4700240 (Import Flat File to JDE File), which depends on the F47002 table. This table holds the transaction type.

You need to set up a transaction type, a process type and a file name.
The business function will then pick up the file name and insert the records into the table.
Make sure the table has the same fields as the text file. After this step you can call another process to copy those records to the Z tables, and so on.
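The flow outlined here (look up the flat file registered for a transaction type, then copy each text line into a table whose fields mirror the file) can be sketched as follows. This is only an illustration of the idea, not the real B4700240/F47002 behavior: the cross-reference dict stands in for the F47002 setup, and the fixed-width layout and table names are hypothetical.

```python
def import_flat_file(transaction_type, cross_reference, tables):
    """Look up the flat file registered for a transaction type and load
    each line into the matching table, one column per fixed-width field.
    cross_reference maps transaction type -> {file, table, layout},
    where layout maps field name -> (start, end) character offsets."""
    entry = cross_reference[transaction_type]
    with open(entry["file"]) as f:
        for line in f:
            row = {name: line[start:end].strip()
                   for name, (start, end) in entry["layout"].items()}
            tables[entry["table"]].append(row)
```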

I hope this isn't confusing. If you test it, or if you need further info, please send me mail at Jastips@hotmail.com


JDE CNC Consultant
Indianapolis
USA
 
A Happy New Year to you Zoltán!

I must start by saying that I'm really impressed by your logical approach, too! It seems that not only Australia (Wayne Ivory) but North America is also interested in ...
<<"Does the main UBE wait for the TC UBE to finish? How do you have them "talk" to each other? I also didn't understand how you managed to achieve "report every detected error on the printed output, printing all the information needed to identify the row and the error". Thanks, Wayne Ivory>>
Thanks,
Adrian Chimirel

LIVE: B732.1 SP12.2, Oracle 805
SANDBOX: B733.3 SP3, Oracle 8i
RS/6000, Citrix, Win95&NT
 
Hi Adrian and Wayne,

I'll try to answer your questions briefly.

1.) The main UBE and the TC UBE are mapped in OCM to run locally.

2.) The TC UBE is called through Report Interconnect from the main UBE.

3.) The main UBE waits for the TC UBE to finish, because the "Asynchronously" check box is unmarked in the Report Interconnect call.

4.) Communication between the main and TC UBE:
The TC UBE has a Report Interconnect data structure:
INPUT parameters are the values of the "grouping fields".
OUTPUT parameters are: mnCountOfFormatFetched, mnCountOfRowInserted, cEndingCode.
I initialize my counters in the Process Begin event.
I increment mnCountOfFormatFetched each time the Format Fetched event is called. I don't use the "Issue a write for this event" feature; instead I issue a Table I/O Insert, detect success with "If FileIO_Status is EQ CO_Success", and increment mnCountOfRowInserted when it was successful. (Note: in B7321 you cannot use Table I/O in a TC, only UserInsert, and you cannot detect the success of the Insert with "If FileIO_Status is EQ CO_Success"; this became possible with B733x.)
I map the "grouping fields" into the index fields, and I run another counter, TXTI, mapping it into the corresponding index field to preserve the original order of the rows. I also map the whole row into a large text field based on the NFLF data item, and I extract the transaction number part of the input row and map it into a separate field in the output.

5.) I calculate the value of cEndingCode output parameter based on the counters in the Process End event.

6.) The Main UBE:
* Its primary section (called the Control Section) is a hidden one without a BSVW. Almost all of my logic resides in the Do Section event of this section.
* There is a standard Page Header section, extended with the current version of the main UBE.
* There is a conditional section named "Processing Options" where I print all of the PO values in a readable format (e.g. interpreting blank, 0 and 1 as No and Yes).
* There is another conditional section named "Print Error" which contains all of the necessary identification information: the order number of the input text row (generated by the TC), User ID, Batch Number (in Final mode only), Transaction Number (extracted in the TC), Line Number inside the Transaction (generated at the beginning of the Control Section) and a long text variable for the description of the detected error.
* There is a last, UNconditional section named "Print Results" which shows whether it was a Proof or Final run, prints the counters (Formats Fetched, Rows Inserted into the work table, Number of Transactions, Count of Data Errors, Count of Rows with Errors, Count of Inserted F0911Z1 Rows, Count of Failed F0911Z1 Inserts) and finally prints a long ending status message, which can vary (I have approximately 20 different ones) depending on the processing option settings (Final/Proof, forcing the insert of rows with errors, forcing the batch with errors to be kept, etc.) and the detected errors, if there were any.

7.) The Control Section:
* Initializes the variables: counters, constants (for the Table I/O Insert of F0911Z1 fields), etc.
* Determines the values of the "grouping fields".
* Retrieves audit information.
* Calls the Processing Options conditional section.
* Deletes the work table records, if any, based on the grouping fields. Detects an error when this fails.
* Calls the TC UBE.
* Detects an error when the mnCountOfFormatFetched counter returned by the TC UBE is zero or not equal to the mnCountOfRowInserted counter for the work table.
* Loops through the records in the work table and determines the Line Number inside each Transaction, using the appropriate index of the work table. Detects errors on the Update Table I/O statement.
* Determines the Batch Number in Final mode using the X0010 Get Next Number BSFN call, and converts the numeric value to a string (EDBT is a string-type field).
Then it loops through the records again and:
* extracts the fields for F0911Z1 from the large text field
* converts the values from string to the appropriate numeric or date type where necessary. Detects an error when a conversion fails.
* retrieves ANI based on MCU, OBJ, SUB. Detects an error when this fails.
* performs some other actions, conversions and validations.
* calls the Print Error conditional section for every detected error, with the appropriate error description.
* inserts the F0911Z1 record, based on the PO settings and the error status of the current row. Detects an error when the insert fails.
AFTER the while loop:
* Deletes the batch (the records inserted into F0911Z1), depending on the PO settings, error status and Proof/Final mode. Detects an error when this fails.
* Deletes the work table records, if any, based on the grouping fields.
* Determines the "Ending Status Message" for the "Print Results" section.

8.) When the primary, hidden Control Section has finished, the unconditional "Print Results" section is automatically printed at the end of the report output.
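The counter handshake above (the TC returns how many input formats it fetched and how many work-table rows it inserted, and the main UBE raises an error when they disagree or are zero) reduces to a simple check. A Python sketch of just that logic, with illustrative message texts:

```python
def verify_tc_result(formats_fetched, rows_inserted):
    """Mirror of the main UBE's check on the TC output parameters:
    error when nothing was read at all, or when some fetched input
    rows failed to reach the work table."""
    if formats_fetched == 0:
        return "ERROR: no input rows fetched"
    if formats_fetched != rows_inserted:
        return ("ERROR: %d rows fetched but only %d inserted"
                % (formats_fetched, rows_inserted))
    return "OK"
```

Because the main UBE calls the TC synchronously (the "Asynchronously" box unmarked), these counters are guaranteed to be final when the check runs.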

That was my brief answer :)

Hope it helps.
Let me know if you have further questions or if I wasn't clear enough (you know, my English).

Good Luck,
Zoltán

B7332 SP11, ESU 4116422, Intel NT4, SQL 7 SP1
(working with B7321, B7331, XE too)
 
Zoltán,

I really do appreciate your clear and ... hmm ... brief reply! And I don't even try to imagine what a detailed one would look like ... :)
I have only one (well, almost one) question: don't you have to populate the F0041Z1 Transaction Control table, too? Is that done with another section or another UBE?

Thank You Very Much,
Adrian Chimirel
Mississauga, Ontario

LIVE: B732.1 SP12.2, Oracle 805
SANDBOX: B733.3 SP3, Oracle 8i
RS/6000, Citrix, Win95&NT
 
Hi Adrian,
I tried to refresh my memory of how we did it under B7321, since that is your version; it was different than under B733x.

We do not populate F0041Z1 Transaction Control table.
We process the batch with R09110Z Journal Entry Batch Processor with the following processing options:
Store and Forward = NO
Purge = Yes.

If you process with NON Store & Forward, then the F0041Z1 record is not required.
In NON Store & Forward processing, Purge = Yes is also required, because you cannot filter the UBE based on F0911Z1; its primary section is based on F0041Z1 only. You have to take care that only one instance of this UBE is running at the same time.

If I remember well, we had to resolve two (embedded :) problems.

1.) The Description/Label was missing for some PO fields, including the Store & Forward field. It took some investigation with the PO and RDA designers to identify the PO values and supply the missing labels.

2.) Running R09110Z in non Store & Forward mode always resulted in "no data selected". The cause was a duplication in the data sequencing of the primary section of the UBE. I corrected the error in the following manner:
* open the UBE with RDA
* select its primary section
* open the sequencing of the section
* note the sequencing, then delete it
* save the UBE
* re-open the sequencing and re-apply it, but without the duplication
* Save & Exit RDA
* create a new version with Add (not Copy) and use that one.

Please let me know whether you find these two problems at your site too. Thanks.

Do you have further questions?
Please also let me know when you have made your upload successfully and processed it with R09110Z. Thanks.

Zoltán

B7332 SP11, ESU 4116422, Intel NT4, SQL 7 SP1
(working with B7321, B7331, XE too)
 
Hi Zoltán,
As a matter of fact, I am not developing the Journal Entries upload myself. I use it, however, after another application developed in Access does the raw upload. Yes, it is an export
from Access (where the SBT data <read: Invoices and Journal Entries> gets imported :) to Oracle). I do not have either of your two (embedded :) problems here.
I studied both batch processes (Invoice & S&F Sales Orders :) and my final option was uploading S&F Sales Orders; the reasons were:
1 - our need to have both the SoldTo & ShipTo (AB#) fields available, AND
2 - there is already a customized report in place, able to print these Invoices (Sales Orders).

The new business process follows:
a - Build the header & detail Access tables (with structure imported from JDE) and populate them from the INPUT spreadsheet (seen as a linked table in the same database)
b - TC the F4001Z Header & F4011Z Detail from Access to JDE, and
c - Populate the F0041Z1 using another TC between F4001Z and F0041Z1.
d - Finally run the R40211Z Sales Order Batch Transaction, which actually translates data from the Z files and pushes it into the "real" garden, that is F4210 & F4211 and so on.
The bumps in the road were handled carefully, along with
1 - a HUGE help from Rafal Molas, who even found the <files + energy> to send me the table mappings and professionally supported the specific format in which the data had to be entered into these tables
2 - a mini-bunch of SARs (more than twenty ;-) to be applied (and not only at the R40211Z level but also P4004Z, V4011Z, B4200310, and the list never ends here, you get the picture :)
3 - a LOT of patience in having the Response Line tell me that, you know, B732 is :( too old and) not supported by Denver anymore, but I can call our Client Manager, set up some consulting ... the Microsoft stuff everybody knows and nobody agrees with.
When all of it was finally in place (tested and working OK ;-) and I was heavily contemplating how to give the user the minimum possible number of interfaces, voilà, our ENTHUSIAST Zoltán had already published a solution!

Now I have two nice roads to follow:
i. B7321 @#%@!! Map the TCs & the batch processor locally and call them from the MAIN UBE (remember, I don't have parameters to pass), but it is still challenging. It is currently not running! But we can continue our discussion, can't we?

ii. B7333 !!! Maybe next week? Start developing a Zoltán(brand)-new MAIN UBE, with a nice work table ... and so on and so forth ...!!!

Now, is it not fine to have a little help from your JDELi ... hum JDEFriends?

Thanks again, Maestro!
Adrian

LIVE: B732.1 SP12.2, Oracle 805
SANDBOX: B733.3 SP3, Oracle 8i
RS/6000, Citrix, Win95&NT
 
Hi Adrian,
First of all, thanks for your answer.
Of course we can continue our discussion, supposing it could be interesting not just for the two of us.

If you plan to create a MAIN UBE to call your TC, then even though you don't have parameters to pass, you still have to create a dummy Report Interconnect data structure in your TC, because the existence of this RI DS is what makes it possible to call your TC via Report Interconnect. Of course, you do not have to pass or get back anything when you call your TC from the MAIN UBE.

Just now a nice job is waiting for me: creating a brand new AR interface where 2 or 3 OneWorld tables will be affected.
Currently I do not know too much about this job yet; one of our very experienced application consultants will describe the basic part of the data movement logic for me (which fields have to be filled, and with which values) (...and I will have to guess the remaining ones? :)) I will meet him tomorrow. Unfortunately he spends almost all his time at our clients' sites, and our deadline is very close. I suppose I will have less time for the Forum for a while, but I will keep an eye on it.

There is no doubt that we will have several problems to resolve, so I suppose I will be on the other side of the Forum for a while, posting more questions than answers and hoping for help.

Finally, my apologies if the second part of my post was a bit personal.

Regards,
Zoltán


B7332 SP11, ESU 4116422, Intel NT4, SQL 7 SP1
(working with B7321, B7331, XE too)
 