UBE Runs out of Memory

DBohner-(db)

Howdy,

Got this UBE that blindly reads 8.7M records, level breaks a couple times and populates a couple summary tables.

About eight hours into processing, it logs a message whining that it's done thinking about the whole process... does some more whining, and eventually ends in error...

<snip>

File: ../oracle/dbbndout.c Line:77 iParam: 0000075048
18874620/1 MAIN_THREAD Thu Aug 22 04:48:42.921089 dbbndout.c778
OCI0000020 - Out of memory for allocating return values

18874620/1 MAIN_THREAD Thu Aug 22 04:48:42.921143 dbbndout.c784
OCI0000021 - Error - DBOCI: insufficient memory to process request

18874620/1 MAIN_THREAD Thu Aug 22 04:48:42.921192 dbinitrq.c1015
OCI0000145 - Failed to bind output values - SELECT * FROM CRPDTA.F564201 WHERE ( SHDOCO = :KEY1 AND SHDCTO = :KEY2 AND SHKCOO = :KEY3 )

18874620/1 MAIN_THREAD Thu Aug 22 04:48:42.921225 dbinitrq.c1022
OCI0000146 - Error - ORA-00000: normal, successful completion

18874620/1 MAIN_THREAD Thu Aug 22 04:48:42.921263 jdb_drvm.c913
JDB9900168 - Failed to initialize db request

Created memory diagnostics in file </u01/jdedwards/e910/log/jde_18874620_1377161322_1_dmp.log> iParam: 0000000000
Created memory diagnostics in file </u01/jdedwards/e910/log/jde_18874620_1377161322_2_dmp.log> iParam: 0000000000
...

</snip>

I know, the 'easy' fix is to partition the table into smaller chunks (say 2M records, then fewer if it keeps failing)... that is easy enough to do - a couple of hours' work.

However - I'm curious if there is a setting, or something I might do, that will ease the frustration this UBE has with its memory? This won't be the biggest table that gets processed on a nightly basis.

Any thoughts, dreams or schemes - greatly appreciated

(db)
 
It could be there's a memory leak somewhere in the UBE. Are you calling any BSFNs that allocate memory but never release it?
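If you want something concrete to grep for: the classic culprit is a jdeAlloc with no matching jdeFree on some exit path. A minimal sketch of the pattern (standard jde.h BSFN environment assumed; the helper name is made up):

#include <jde.h>

/* Hypothetical per-record helper. Anything jdeAlloc'd must be
   jdeFree'd on EVERY exit path - at 8.7M calls, even a small
   leak per record will eventually exhaust the process. */
void processOneRecord(void)
{
    LPVOID lpWork = jdeAlloc(COMMON_POOL, 2048, MEM_ZEROINIT);

    if (lpWork == (LPVOID)NULL)
    {
        return;   /* nothing allocated, nothing to free */
    }

    /* ... do the work ... */

    jdeFree(lpWork);   /* the line that goes missing in leaky BSFNs */
}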

It sounds like you're doing a lot of table I/O. Are you using a handle for the table, or just plain Fetch/Fetch Next commands? Whenever I have a lot of table I/O, or execute a table-I/O-heavy UBE section repeatedly, I prefer to create and open a table handle - and, of course, close it when I'm done processing.
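To make that concrete, here's a rough sketch of the handle pattern as it looks on the C side of the house (the f564201.h header, struct, and index ID names are per-table generated artifacts, so treat them as placeholders):

#include <jde.h>
#include <f564201.h>   /* hypothetical generated header: row struct, NID, index IDs */

void summarizeF564201(HUSER hUser)
{
    HREQUEST hRequest = (HREQUEST)NULL;
    F564201  dsRow;

    /* Open ONCE: allocates the request and its output bind buffers */
    if (JDB_OpenTable(hUser, NID_F564201, ID_F564201_PRIMARY,
                      (NID *)NULL, (ushort)0, (char *)NULL,
                      &hRequest) != JDEDB_PASSED)
    {
        return;
    }

    /* One select, then walk the rows on the same handle */
    JDB_SelectAll(hRequest);

    memset((void *)&dsRow, 0x00, sizeof(dsRow));
    while (JDB_Fetch(hRequest, (void *)&dsRow, (int)FALSE) == JDEDB_PASSED)
    {
        /* ... level-break / summary logic on dsRow ... */
        memset((void *)&dsRow, 0x00, sizeof(dsRow));
    }

    /* Close ONCE: releases the request and everything bound to it */
    JDB_CloseTable(hRequest);
}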
 
Is it also creating a very large PDF?

I have seen this on UBEs that process a lot of data when they also output a PDF listing all the records processed. Turn off ALL output, or change the code to not output all the details, and see if you still have the problem.

Otherwise, I would start to look for a memory leak as Don suggested. I think, however, the only memory leaks you can really fix are those in a BSFN. If it is a leak in the UBE engine itself, Oracle will have to get involved.
 
Do you have OPEN and CLOSE statements around your table I/O Selects?

If not, the memory allocated to the job will slowly grow - especially if the Select statement is used often, inside a DO section for example.
 
I think Brian wins the kudos this round.

Once I turned off all PDF Output, the L-O-N-G-R-U-N-N-I-N-G job completes...

Also - am I to understand that Select statements should be capped off with a Close? I've read that it should be done - I just haven't seen it in Oracle's own code, though...

(db)
 
Great, glad it solved your problem. I think what happens with the PDF thing is that the JDE UBE engine builds the entire PDF in memory and THEN flushes this buffer to a file when the UBE is complete, instead of streaming the PDF content to a file as it builds it. So basically anything that creates a big PDF is going to consume a ton of RAM on the enterprise server while the UBE is running.
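For what it's worth, the difference is easy to picture with a generic sketch (plain C, nothing to do with Oracle's actual engine code): a buffered writer's memory footprint grows with the row count, while a streaming writer's stays constant.

#include <stdio.h>
#include <stdlib.h>

/* Buffered: the whole report sits in RAM until one big flush.
   Memory use is proportional to the number of rows.           */
static void write_buffered(FILE *out, long rows)
{
    char  *buf  = malloc((size_t)rows * 64);   /* grows with the data set */
    size_t used = 0;
    long   i;

    if (buf == NULL)
        return;
    for (i = 0; i < rows; i++)
        used += (size_t)sprintf(buf + used, "detail line %ld\n", i);
    fwrite(buf, 1, used, out);
    free(buf);
}

/* Streamed: each line goes straight to the file.
   Memory use is constant regardless of row count. */
static void write_streamed(FILE *out, long rows)
{
    long i;
    for (i = 0; i < rows; i++)
        fprintf(out, "detail line %ld\n", i);
}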

Oracle is most likely using a third-party lib for the PDF generation, so they are probably at the mercy of the lib provider.
 
Glad you fixed it, but is F564201 in ER table I/O at all?

Looks to me like you may benefit from OPEN/CLOSE statements around your selects and fetches, etc.
 
The F564201 is in the table ER - and, no, it was not surrounded by Open/Close.

Going forward - is it a globally good idea to surround extended Select(s) with Open/Close? The practice isn't in the manuals, though it does get covered on the list in a few places.

Thanks, JohnD

(db)
 
You're welcome, Daniel.

No, it isn't covered in the manuals, but a techy geek did explain it to me and it makes sense.
Each time we do a Select, a chunk of memory is dedicated to what we get back from the DB.
We then read through each record with a Fetch Next.
If we don't OPEN/CLOSE, these chunks build up.

Incidentally, if you check your debug log for what happens on a Fetch Single, it is always in this order:
Open
Select
Fetch
Close

Who knows? But no harm in doing it. :)
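For the BSFN-minded, those four steps map one-for-one onto JDEBASE calls. A rough sketch (again, the F564201 struct and index names are generated per table - placeholders here):

#include <jde.h>
#include <f564201.h>   /* hypothetical generated header */

/* What a Fetch Single boils down to under the covers. */
void fetchSingleF564201(HUSER hUser, KEY1_F564201 *lpKey)
{
    HREQUEST hRequest = (HREQUEST)NULL;
    F564201  dsRow;

    memset((void *)&dsRow, 0x00, sizeof(dsRow));

    if (JDB_OpenTable(hUser, NID_F564201, ID_F564201_PRIMARY,   /* Open   */
                      (NID *)NULL, (ushort)0, (char *)NULL,
                      &hRequest) == JDEDB_PASSED)
    {
        JDB_SelectKeyed(hRequest, ID_F564201_PRIMARY,           /* Select */
                        (void *)lpKey, (short)3);
        JDB_Fetch(hRequest, (void *)&dsRow, (int)FALSE);        /* Fetch  */
        JDB_CloseTable(hRequest);                               /* Close  */
    }
}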
 