Any relationship between JDEIPC avgHandles and kernel process count?

Wayne Carmichael

Member
Hi,

We're running Xe SP23J1 on iSeries V5R3 with DB2 and WAS 5.0.2, in an "everything on one server" configuration.

Business functions started failing in production because of a lack of IPC handles, and Oracle told us to increase the avgHandles setting in the JDE.INI, e.g.:

[JDEIPC]
avgHandles=200

We run around 240 kernel processes in production. What I would like to know is whether there is a relationship between the avgHandles setting and the number of kernel processes you are running. Oracle's answer has been to talk to our field representative about performance tuning.

It would be nice to have something more rigorous than a "rule of thumb" if such a relationship does exist.

Thanks,

Wayne
 
I found solution ID 200818718 on the KG:

For the "Invalid Handles" error message, there is a server JDE.ini setting that will increase the number of pre-allocated handle state structures. Increasing this might allow the customer to run longer before running out of them. However, it will also increase the amount of shared memory that the EnterpriseOne instance uses.

[JDEIPC]
avgHandles=100

The default value is 100, which is multiplied by the maximum number of EnterpriseOne processes allowed (1024), giving 102400 IPC Handle State structures.

Each of these structures is 24 bytes (I believe), so the total shared memory usage for the IPC Handle State structures is 2457600 bytes (about 2.4 MB). The customer could increase this value incrementally, starting at 200, to ensure that the shared memory can still be created.
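
To make that arithmetic concrete, here is a minimal Python sketch of the calculation, assuming the fixed 1024-process maximum and the 24-byte structure size quoted in the solution (neither figure is verified against the EnterpriseOne code itself):

# Shared memory used by IPC Handle State structures, per solution ID 200818718.
# The 1024-process maximum and the 24-byte structure size are taken from the
# KG text above; treat them as assumptions, not verified constants.
MAX_E1_PROCESSES = 1024   # fixed maximum number of EnterpriseOne processes
STRUCT_SIZE_BYTES = 24    # approximate size of one IPC Handle State structure

def handle_state_memory(avg_handles):
    """Return (structure count, bytes of shared memory) for a given avgHandles."""
    structures = avg_handles * MAX_E1_PROCESSES
    return structures, structures * STRUCT_SIZE_BYTES

for avg in (100, 200, 400):
    count, nbytes = handle_state_memory(avg)
    print(f"avgHandles={avg}: {count} structures, ~{nbytes / (1024 * 1024):.1f} MB")

# avgHandles=100: 102400 structures, ~2.3 MB
# avgHandles=200: 204800 structures, ~4.7 MB
# avgHandles=400: 409600 structures, ~9.4 MB

Note that the running kernel process count never enters the formula; the multiplier is always the fixed 1024-process ceiling, which is what makes the later question about actual structure usage the more interesting one.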

SAR 6539120 also describes the issue: if the Enterprise Server runs out of IPC Handle State structures, things start to fail (e.g. you cannot log in). The code has been modified to allow the OneWorld instance to continue operating correctly after running out of IPC Handle State structures. This SAR will be included in SP23. The IPC Handle State structures are used for multiple purposes:
- to keep metadata on the state of handles to IPC resources (used to diagnose deadlock situations)
- to clean up IPC resources when OneWorld processes exit unexpectedly

Running out of these IPC Handle State structures is NOT a good thing, but it shouldn't prevent the system from working correctly (you just might leak IPC resources at some point if processes are exiting unexpectedly). The main point is that running out of space for information used for diagnostic and cleanup purposes shouldn't prevent the system from working correctly.

In regard to the eTimeOut and eIPCNotFound errors, try restarting services on a daily basis or increasing the JDENETTimeout value. You can modify this jde.ini parameter on the client:

[NETWORK QUEUE SETTINGS]
JDENETTimeout=360

This value is in seconds, and the default is 60 seconds.
 
Thanks altquark, your post reminded me that I did look at that solution ID when we were trying to figure out how to fix the problem ... although, at the time, I was focusing more on the increased memory requirements.

So based on the fact that 1024 is always used for the (kernel) process count, it looks like the better question would be what determines IPC Handle State structure usage and whether there is a way to monitor it (SAW - Windows version?).
 