Same Process ID, different job

TimPierce

Reputable Poster
Can anyone think of a scenario where 2 UBEs launched separately can end up with the same Process ID in Submitted Jobs?
 
If you're referring to the PID - that's the process ID that the operating system assigns to the UBE process. PID numbers get reused on every platform, so it's likely that the same PID will turn up on a few jobs over a long enough period. It's not uncommon, and there should be no cause for concern.

I'd be very concerned if the same JOB number was assigned to two different UBE processes on the same box though.
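
For what it's worth, you can watch the OS hand a PID back out without involving JDE at all. A rough sketch in Python, assuming a Unix-like server where the true command is on the PATH (nothing E1-specific here); on Linux the PID space wraps at /proc/sys/kernel/pid_max, commonly 32768, so a busy enterprise server can recycle a PID well within a working day:

import subprocess

# Launch short-lived throwaway processes until the OS hands back a PID
# it has already used, which shows that a PID is not unique over time.
seen = set()
for launches in range(1, 100_000):
    child = subprocess.Popen(["true"])
    child.wait()
    if child.pid in seen:
        print(f"PID {child.pid} was reused after {launches} launches")
        break
    seen.add(child.pid)
else:
    print("No reuse observed in this run (PID space has not wrapped yet)")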
 
The process number will be the same if a synchronous report interconnect is performed.
 

Right, got it.

The problem we have is that we have customized code that reads the F986110 based on the ProcessID (PID) to find the originating UBE for a sequence of synchronous UBEs. It then updates the F986110 record for the originating UBE with a new PDF name (FUF2 field).

Yesterday there was an instance where it updated the wrong F986110 record, because another instance of the same UBE, launched an hour earlier, had the same Process ID.

We're just upgrading to 9.2, and this code seemed to work perfectly in 9.0, so I'm guessing it's either a coincidence that the PIDs were the same, or the way the PID is obtained is slightly different in 9.2 (or on our new 9.2 server).
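
For illustration, the custom lookup is roughly this shape (a simplified sketch, not our actual code; conn is any Python DB API connection such as pyodbc, and the F986110 column names are from memory, so check them against the data dictionary):

# Selecting the "originating" job by Process ID alone: two submitted jobs
# that happen to share a PID both satisfy the WHERE clause, so the UPDATE
# can land on the wrong row.
def update_pdf_name_by_pid(conn, process_id, new_pdf_name):
    sql = """
        UPDATE F986110
           SET JCFNDFUF2   = ?     -- new PDF name (FUF2)
         WHERE JCPROCESSID = ?     -- ambiguous: the OS recycles PIDs
    """
    cur = conn.cursor()
    cur.execute(sql, (new_pdf_name, process_id))
    conn.commit()
    return cur.rowcount            # anything other than 1 is a red flag

Checking the row count after the update would at least have flagged yesterday's collision instead of silently overwriting the other job's PDF name.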
 
What is the OS of your server? It is highly unlikely that you will get the same process ID unless the report submissions are significantly separated in time.
 
I think it's coincidental that this hasn't occurred in the past on your 9.0 system. The code you're using is susceptible to this error, so the right thing to do is to rethink the approach. In the meantime, you may want to place these jobs into a dedicated queue and have the custom code consider only that queue. Perhaps that would narrow your filter somewhat?
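
Something along these lines might tighten the lookup while you rework it properly (a sketch only; conn is a Python DB API connection using qmark parameters, and I'm assuming F986110 columns like JCJOBQUE, JCEXEHOST, JCSBMDATE and JCSBMTIME, so verify them against your data dictionary):

# Narrower F986110 lookup: same PID, but restricted to the dedicated queue
# and execution host, newest submission first. Carrying the JOB number
# (JCJOBNBR) down from the parent UBE would be safer still, since that
# value should be unique on a given server.
def find_originating_job(conn, process_id, queue, host):
    sql = """
        SELECT JCJOBNBR, JCFNDFUF2
          FROM F986110
         WHERE JCPROCESSID = ?
           AND JCJOBQUE    = ?     -- only the dedicated custom queue
           AND JCEXEHOST   = ?     -- same enterprise server
         ORDER BY JCSBMDATE DESC, JCSBMTIME DESC
    """
    cur = conn.cursor()
    cur.execute(sql, (process_id, queue, host))
    rows = cur.fetchall()
    if len(rows) > 1:
        # More than one candidate means the PID was recycled recently;
        # log it rather than silently taking the first hit.
        print(f"Warning: {len(rows)} F986110 rows match PID {process_id}")
    return rows[0] if rows else None

Even so, the JOB number is the only handle that's really unique per server, so passing it down from the originating UBE is the fix worth aiming for.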
 