Jim,
I think that Oracle is keeping things simple on purpose to avoid having to support failed attempts at running split system data sources.
When they phrase things this way it leaves some open questions:
"The only configuration of the EnterpriseOne Scheduler that Oracle supports is a single Scheduler kernel, running against a single set of Scheduler tables (F91300, F91310, F91320, F00085, F00085A) in the System data source. The server OCM mappings for the Scheduler tables will need to point to the System data source."
For instance, if you are running Non-Production and Production systems as entirely separate entities, "islands" as I call them, with their own deployment servers, system, and server map data sources, is Oracle saying that they only support running a scheduler on one of those islands? From my perspective that would be a silly constraint; they are effectively separate JDE instances.
I would need to know more about how your system is configured to give an informed answer but I'll give one with the following assumptions:
1) You are running separate Non-Production and Production enterprise (logic and/or batch) servers.
2) These servers while separate, are sharing a single System - 910 data source and are therefore looking at a single set of F913xx/F00085x tables.
3) The servers have separate server map data sources. This may be either one server map for the non-production servers and another for the production server(s), or a separate server map for each individual server.
4) Your scheduler is running from the PD batch server.
In this scenario, yes, you would need to define the PY910 environment (OCM mappings and logical data sources for the PY server(s)) within the server map of the PD batch server. The PY910 path code needs to exist on the PD server so that it can fetch the batch version at submission time. You could then include the PD server in your PY package build/deploy to make sure it is always pointed at the same specs. Cloning the PY910 folder from the PY server to the PD server after each full package build, as cncjay mentioned, is another (easier) option.

I personally clone package deployments a bit differently, trying to duplicate what a full package deploy does:

1) Update the spec.ini file in the PY910 folder on the PD server with the new PY full package name.
2) Delete the glbltbl/dddict spec files and the runtimeCache folder.
3) Update the package subscription record for the PD server/PY910 path code with the new PY package name (F96511: SKMKEY=PDSERVER, SKPORTNUM=6016 (the 9.1 default), SKPATHCD=PY910, SKMCHDETTYP=31; update SKSERSHP to the name of your new PY full package).

One caveat: depending on your setup, an F96511 record for the PY910 path code that is inconsistent with the other enterprise servers running PY910 can cause your PY JAS server(s) to periodically flush their serialised objects because they think a package deploy is in progress.
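Those manual steps could also be scripted. Below is a rough Python sketch, with loud assumptions: the `Package=` key in spec.ini, the location of the glbltbl/dddict spec files under a `spec` subfolder, and the quoting/typing of the F96511 columns are illustrative and vary by tools release, and the generated SQL omits any schema/owner qualification. Verify all of it against your own environment before using anything like this. The function returns the F96511 UPDATE as text rather than executing it.

```python
import shutil
from pathlib import Path


def sync_py_package(py910_dir: str, new_package: str,
                    server_name: str, port: int = 6016) -> str:
    """Point a cloned PY910 path code at a new PY full package.

    Returns the F96511 package-subscription UPDATE as a string so you can
    review it and run it with your own database tooling.
    """
    root = Path(py910_dir)
    spec_dir = root / "spec"  # assumed location of the spec files

    # 1) Rewrite the package name in spec.ini. The exact key name varies by
    #    tools release; this sketch assumes a "Package=<name>" line.
    ini_path = spec_dir / "spec.ini"
    lines = ini_path.read_text().splitlines()
    lines = [f"Package={new_package}" if ln.strip().startswith("Package=")
             else ln for ln in lines]
    ini_path.write_text("\n".join(lines) + "\n")

    # 2) Delete the glbltbl/dddict spec files and the runtimeCache folder
    #    so they are rebuilt against the new package.
    for pattern in ("glbltbl*", "dddict*"):
        for f in spec_dir.glob(pattern):
            f.unlink()
    shutil.rmtree(root / "runtimeCache", ignore_errors=True)

    # 3) Emit the F96511 update (SKMCHDETTYP=31 is the package-subscription
    #    record type; 6016 is the 9.1 default port).
    return (
        "UPDATE F96511 "
        f"SET SKSERSHP = '{new_package}' "
        f"WHERE SKMKEY = '{server_name}' AND SKPORTNUM = {port} "
        "AND SKPATHCD = 'PY910' AND SKMCHDETTYP = '31'"
    )
```

You would run this against the PD server's copy of the PY910 path code after each PY full package build, then apply the returned UPDATE with your usual SQL client.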
I personally always try to run separate System data sources for non-production and production. In that case there are two sets of F913xx tables, which allows two schedulers to be used: one for non-production and one for production. It does mean that you can only maintain non-production jobs via P91300A from a non-production environment and production jobs from a production environment, but from my standpoint that is the whole point of separating non-production from production.
If you don't want to run fully separate System data sources, you could still have separate copies of the F913xx tables stored outside of SY910 and then OCM map to that separate copy for PY in both the System - 910 and Server Map OCMs. This would allow you to spin up a scheduler kernel on the PY server and have it launch jobs for PY. Note that the default environment in the JDE.INI on the PY server is likely the environment the scheduler kernel uses to look up OCM mappings for the F913xx tables, so make sure it points to PY910 so that your new mappings are picked up. What you don't want is the PY server looking into System - 910 for the F913xx tables, since it would start picking up Production jobs, fail to launch them, and generally conflict with your PD scheduler.
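As a quick sanity check on that last point, here is a small Python sketch that confirms the default environment on the PY server. It assumes the usual server JDE.INI layout with a [DB SYSTEM SETTINGS] section and a "Default Env" key; verify the section and key names against your own tools release.

```python
import configparser


def default_env_is(ini_path: str, expected_env: str) -> bool:
    """Return True if the JDE.INI default environment matches expected_env.

    Assumes the default environment lives at [DB SYSTEM SETTINGS] /
    "Default Env"; interpolation is disabled because JDE.INI values can
    contain characters configparser would otherwise try to expand.
    """
    ini = configparser.ConfigParser(strict=False, interpolation=None)
    ini.read(ini_path)
    actual = ini.get("DB SYSTEM SETTINGS", "Default Env", fallback="").strip()
    return actual == expected_env
```

Something like `default_env_is("/u01/jdedwards/e910/system/bin64/JDE.INI", "PY910")` (path illustrative) would tell you whether the scheduler kernel on that server will resolve its F913xx OCM mappings through PY910.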
To answer your final question,
Q: What is the reasoning behind disregarding Oracle's support stance and instead running multiple scheduler kernels?
A: You would run multiple scheduler kernels in order to maintain complete separation between non-production and production. The production servers would require no trace of PY on them and have no interaction with the non-production servers. The non-production batch server would run its own scheduler kernel and manage jobs for PY.