Are package updates clogging up IFS Replication?

Crazy_About_JDE

Well Known Member
Hello, Echo2 users-- I have observed what seems to be really clogged up IFS replication. Choosing menu option 1.5 (IFS Monitor) on the backup server, I see a gazillion pending queue entries. Then when I press F20 to view queues, two FTP jobs have high entries -- and the FTP09 job's entries haven't risen or fallen all day. (Normally these numbers jump around.)

I've checked for MSGW on the jobs and have checked QSYSOPR messages, but found nothing.

The folks at iTera/Vision support are telling me I just need to wait it out, but I suspect deployed update packages (or whatever is changing the XDB and DDB objects in /pd7334/specfile) are causing journal entries to pile up.
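
In case anyone wants to repeat those checks, this is roughly what I used from a 5250 command line on the backup node, plus a quick look at the spec directory (adjust paths and job names for your own environment):

WRKACTJOB JOB(C1FTP*)              (watch for MSGW or LCKW status on the FTP jobs)
DSPMSG QSYSOPR                     (scan for FTP or replication-related messages)
WRKLNK OBJ('/PD7334/specfile/*')   (option 8 shows size and last-changed time on the .ddb/.xdb objects)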

What do YOU think?



>>> Screenshot after menu 1.5

BACKUP 1 Echo² E27585RE
Monitor IFS Replication for CRG 1/31/07
QPADEV0003 12:03:51

Most Recent Audit...... 01/31/07 00:23:18 Elapsed: 11:30:35

Error Count................... 266

Previous Active Requests...... 0
Current Active Requests....... 0

Previous Pending Queue Entries 1,944,296
Current Pending Queue Entries. 1,944,296


Command Log................... 2,293



>>> Screenshot after pressing <F20>

BACKUP 1 Echo² E27587RE
Monitor IFS Replication for CRG 1/31/07
QPADEV0003 FTP Request Queues 12:05:46

Job ID Job Name Entries Last FTP command
FTP01 C1FTPFTP01 0 ! CHGATR OBJ('"/OneWorld/Packages/SPU012507B/text/
FTP02 C1FTPFTP02 0 ! CHGATR OBJ('"/OneWorld/Packages/SPU012507B/text/
FTP03 C1FTPFTP03 0 ! CHGATR OBJ('"/OneWorld/Packages/SPU013007/text/g
FTP04 C1FTPFTP04 0 ! CHGATR OBJ('"/OneWorld/Packages/SPU012607/pack"'
FTP05 C1FTPFTP05 0 ! CHGATR OBJ('"/OneWorld/Packages/SPU012507B/text/
FTP06 C1FTPFTP06 0 ! CHGATR OBJ('"/OneWorld/Packages/SPU013007B/text/
FTP07 C1FTPFTP07 231,088 get "/PD7334/specfile/gbrspec.ddb" (replace
FTP08 C1FTPFTP08 0 ! CHGATR OBJ('"/PD7334/specfile/jdeblc.ddb"') ATR(
FTP09 C1FTPFTP09 1,243,386
FTP10 C1FTPFTP10 0 ! CHGATR OBJ('"/OneWorld/Packages/SPU012507B/text/
FTP11 C1FTPFTP11 0 ! CHGATR OBJ('"/OneWorld/Packages/SPU012507B/text/
 
It will clog the system: the rate of change is enormous, and keep in mind that for every bit that changes, E2 resends the whole file again.
You need to move to the new IFS replication, which uses journaling. You will see a world of difference.
The other option is to turn off replication of the /OneWorld directory and replicate the runtime specs manually; they only change at package deployment time anyway. This will reduce your replication volume and hence the clogging of the network.
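
If it helps, one low-tech way to do that manual sync is a plain SAV/RST of the spec directory through a save file (the library and save file names here are just placeholders):

CRTSAVF FILE(MYLIB/SPECSAVF)
SAV DEV('/QSYS.LIB/MYLIB.LIB/SPECSAVF.FILE') OBJ(('/PD7334/specfile'))
(ftp the save file to the backup system in binary mode, then on the backup:)
RST DEV('/QSYS.LIB/MYLIB.LIB/SPECSAVF.FILE') OBJ(('/PD7334/specfile'))

Run it right after each package deployment and the runtime specs stay in step without E2 chasing every change.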

Hope this helps.

Thanks.
 
Thank you! Not long after I posted my original message, I called our partner and asked them to help us upgrade to the new IFS just to see if it would work!


We got it in by the skin of our teeth (Tuesday night), so here goes nothin'.

To make sure I fully understand what you meant about "runtime specs" -- do you mean I only need to manually replicate the /PD7334/spec folder? That would be *awesome*.

Thank you again!

-Tim
 
You are welcome. Sync the /pd9 directory manually, and then you can actively sync the PD9 library.
Let me know how the role swap goes for you.

Thanks.
 
Tim,

We had the same issue on our specs folder of IFS. The good news - once we upgraded to the new version of IFS replication, we have no more issues with it getting millions of entries behind. We replicate the entire PD7334 directory with no lag.

Steve
 
Hi Steve,
we are now implementing Vision; what is the meaning of "new version of IFS replication"? Are there particular component versions on the AS/400 involved? Do you know of a special PTF level that could solve these issues?
Thank you

Simone
 
The product ID was 7PA2K30 back in the iTera days. Not sure whether they renamed it after they became Vision.
 
If you are using FTP processes to replicate your IFS, then you are using the "old" version. If you are replicating the IFS like other libraries, with journals, you are using the "new" method. With the FTP method, we would consistently stay millions of entries behind.
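
For what it's worth, the "new" method leans on ordinary IFS journaling in the OS instead of the product's FTP queues. Presumably the product handles the setup when you configure the new IFS replication, but a rough hand-done equivalent (made-up library and journal names) looks like:

CRTJRNRCV JRNRCV(MYLIB/IFSRCV001)
CRTJRN JRN(MYLIB/IFSJRN) JRNRCV(MYLIB/IFSRCV001)
STRJRN OBJ(('/PD7334/specfile')) JRN('/QSYS.LIB/MYLIB.LIB/IFSJRN.JRN') SUBTREE(*ALL)

Once the directories are journaled, only the changed data flows through the journal to the target instead of a whole-file FTP get every time something in the file moves, which is why the backlog goes away.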
 