Scheduler for PY Environment



I feel like this topic has been touched on a lot, but I am missing something or just being dense. We need to test a customized auto-cash program for an extended period and would really like to have a number of jobs run automatically in our PY environment. I have seen posts about having multiple scheduler kernels running for different environments, but Doc ID 1323621.1 clearly states this is not supported. It goes on to state that you would have to configure PY on the PD server. So to configure PY on the PD server, I simply need PY910 added to the environment list in the Machine Identification, OCM mappings for both machines' data sources mirrored for PY910, and a PY full build that selects the PD batch server when defining/deploying? Am I missing anything? Would I also need to configure and deploy to the PY logic server? And what is the reasoning behind disregarding Oracle's support stance and instead running multiple scheduler kernels?

Thank you,



Well Known Member
We are on MSSQL / Windows platform. E1 Release 9.0 with tools 9.1.5

From what I recall from a long time ago, the JDE scheduler does not support multiple environments. However, there is a workaround that works for us.

We schedule a ton of jobs that run throughout the day and night. To support that, we created a dedicated production batch server that serves as a 'scheduler' server where the scheduler kernel runs - for production. We don't run any UBE jobs on it; its sole purpose is to launch the scheduled jobs, which run on our production batch servers of choice (we have three).

As a part of our JDE landscape, we also have a separate batch/logic server just for the non-production (DV and PY) environments.

To make the scheduler work in PY, we simply copy the PY900 folder (in our case) from the DV/PY batch server onto the scheduler server and schedule the jobs for PY in the scheduler - that's it.
Every time you do a new full package build for PY, be sure to copy the updated folder onto the scheduler server, overwriting the existing one.
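If you want to script that recurring copy step, a minimal sketch is below. The server names and share paths are hypothetical - substitute your own - and the point is simply "remove the stale folder, then mirror the current one" so nothing left over from the old package lingers:

```python
import shutil
from pathlib import Path

def clone_pathcode_folder(source: Path, target: Path) -> None:
    """Overwrite the scheduler server's copy of the path-code folder
    with a fresh copy from the batch server (e.g. after a full build)."""
    if target.exists():
        shutil.rmtree(target)        # drop the stale copy first
    shutil.copytree(source, target)  # then mirror the current folder

# Hypothetical UNC paths -- substitute your own servers and shares:
# clone_pathcode_folder(Path(r"\\dvpybatch\E900\PY900"),
#                       Path(r"\\schedsrv\E900\PY900"))
```

On Windows you could just as easily run robocopy with the /MIR option from a scheduled task; the Python version is shown only because it is easy to drop into an existing automation script.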


Legendary Poster
The Scheduler Tables have an environment column which is populated.

I have not checked this (and I don't have time at the moment), but I suspect that the current environment when the job is entered/scheduled is the environment where the job will run.

Our test/development installation has multiple environments and I'm sure I have scheduled jobs in two environments and had them run properly.


VIP Member

I think that Oracle is keeping things simple on purpose to avoid having to support failed attempts at running split system data sources.

When they phrase things this way it leaves some open questions:

"The only configuration of the EnterpriseOne Scheduler that Oracle supports is a single Scheduler kernel, running against a single set of Scheduler tables (F91300, F91310, F91320, F00085, F00085A) in the System data source. The server OCM mappings for the Scheduler tables will need to point to the System data source."

For instance, if you are running Non-Production and Production systems as entirely separate entities - "islands", as I call them - with their own deployment servers, system and server map data sources, is Oracle saying that they only support running a scheduler on one of those islands? From my perspective that would be a silly constraint. They are effectively separate JDE instances.

I would need to know more about how your system is configured to give an informed answer but I'll give one with the following assumptions:

1) You are running separate Non-Production and Production enterprise (logic and/or batch) servers.
2) These servers while separate, are sharing a single System - 910 data source and are therefore looking at a single set of F913xx/F00085x tables.
3) The servers have separate server map data sources. This may be either one server map for the non-production servers and another for the production server, or a separate server map for each individual server.
4) Your scheduler is running from the PD batch server.

In this scenario, yes, you would need to define the PY910 environment (OCM mappings and logical data sources for the PY server(s)) within the server map of the PD batch server. The PY910 path code needs to exist on the PD server so that it can fetch the batch version at submission time. You could then include the PD server in your PY package build/deploy to make sure that it is always pointed at the same specs. Cloning the PY910 folder from the PY server to the PD server after each full package build, as cncjay mentioned, is another (easier) option.

I personally handle the clone a bit differently and try to duplicate what a full package deploy does:

1) Update the spec.ini file in the PY910 folder on the PD server with the new PY full package name.
2) Delete the glbltbl/dddict spec files and the runtimeCache folder.
3) Update the package subscription record for the PD server/PY910 path code with the new PY package name (F96511: SKMKEY=PDSERVER, SKPORTNUM=6016 (9.1 default), SKPATHCD=PY910, SKMCHDETTYP=31; update SKSERSHP to the name of your new PY full package).

Depending on your setup, a record for the PY910 path code in the F96511 that is inconsistent with the other enterprise servers running PY910 can cause your PY JAS server(s) to periodically flush their serialised objects because they think a package deploy is in process.
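Those three steps can be scripted. The sketch below assumes the conventional layout (spec.ini and the glbltbl/dddict files under the path code's spec folder) and uses a plain text replace on spec.ini rather than naming specific keys, since those vary by tools release; the F96511 key values are the examples from above, not universal constants, so verify them against your own install before running anything:

```python
import shutil
from pathlib import Path

def repoint_specs(pathcode_dir: Path, old_pkg: str, new_pkg: str) -> str:
    """Mimic the spec-related parts of a full package deploy after
    cloning a path-code folder. Returns the F96511 update statement
    to run against the System data source (key values are examples)."""
    spec_dir = pathcode_dir / "spec"

    # 1) Point spec.ini at the new full package name.
    ini = spec_dir / "spec.ini"
    ini.write_text(ini.read_text().replace(old_pkg, new_pkg))

    # 2) Delete locally generated spec files so they are rebuilt.
    for stale in list(spec_dir.glob("glbltbl*")) + list(spec_dir.glob("dddict*")):
        stale.unlink()
    shutil.rmtree(spec_dir / "runtimeCache", ignore_errors=True)

    # 3) Package subscription record update (run via your SQL tool).
    return (
        f"UPDATE SY910.F96511 SET SKSERSHP = '{new_pkg}' "
        "WHERE SKMKEY = 'PDSERVER' AND SKPORTNUM = 6016 "
        "AND SKPATHCD = 'PY910' AND SKMCHDETTYP = '31'"
    )
```

Stop the network/kernels (or at least quiesce the scheduler) before swapping specs, exactly as you would for a normal deploy.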

I personally always try to run separate System datasources for non-production and production. In that case I will have two sets of F913xx tables. That allows for two schedulers to be used, one for non-production and one for production. It does mean that you can only maintain jobs via P91300A for non-production from a non-production environment and production jobs from a production environment but from my standpoint that is the whole point of separating non-production from production.

If you don't want to run fully separate system data sources, you could still have separate copies of the F913xx tables stored outside of SY910 and then OCM map PY to that separate copy in both the System - 910 and Server Map OCMs. This would allow you to spin up the scheduler kernel on the PY server and have it launch jobs for PY. Note that the default environment in the JDE.INI on the PY server is likely the environment used by the scheduler kernel to look up the OCM mappings for the F913xx tables, so you would need to make sure it points to PY910 so that your new OCM mappings are picked up. What you don't want is the PY server looking into System - 910 for the F913xx tables, since it will start picking up jobs for production, fail to launch them, and generally conflict with your PD scheduler.
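For reference, the default environment and path code typically sit in the [DB SYSTEM SETTINGS] section of the server JDE.INI; a fragment along these lines (treat it as a sketch - key names and surrounding entries vary by tools release, so check your own INI) is what would point the PY server at PY910:

```ini
; JDE.INI on the PY batch/logic server (fragment -- sketch only)
[DB SYSTEM SETTINGS]
Default Env=PY910
Default PathCode=PY910
```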

To answer your final question,

Q: What is the reasoning behind disregarding Oracle's support stance and instead running multiple scheduler kernels?

A: You would run multiple scheduler kernels in order to maintain complete separation between non-production and production. The production servers would require no trace of PY on them and have no interaction with the non-production servers. The non-production batch server would run its own scheduler kernel and manage jobs for PY.