Server OCM Logic

millsh

Can someone explain to me how server OCMs work? We have heard differing explanations from Oracle and independent consultants.

Scenario 1. A fat client calls a business function that, according to the system OCM, should run on the server (AS/400). So the business function runs on the server and calls another business function that should also run on the server, but there is no entry in the server OCM for this specific business function, nor is there a default business function entry. What happens at this point? Does it run on the server by default? Or does it go back to the system OCM (which does have an entry to run it on the server) and then run on the server?

Scenario 2. A fat client calls a business function that, according to the system OCM, should run on the server (AS/400). So again the business function runs on the server and calls another business function that, according to the server map, should run locally. Does this called business function in fact run back on the fat client? Or does local in a server map mean local to the server, that is, run here on this server?

Thanks very much.
 
Scenario 1:
When a function running on the server calls another function, that function will also run on the server, no matter what OCM mapping may exist for the second function.

But if a function running on the client calls another function that has an OCM mapping to run on the server, then the logic for the second function will execute on the server.

Scenario 2: Server Map OCMs don't normally have entries for BSFNs. If any logic is being executed on the server, any subsequent function calls from that logic will also run on the server.
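
A minimal sketch of that resolution logic, as described above (illustrative Python only, not JDE code; the mapping names are hypothetical and the real lookup lives in the OCM tables and the kernels):

# Illustrative sketch only -- it models the behavior described above: a call made
# from a client consults the system OCM, while a call made from code already
# running on a server stays on that server.

def resolve_bsfn_location(caller, system_ocm, bsfn):
    """Return the machine a called BSFN will execute on.

    caller     -- "CLIENT" or the name of the server the calling code runs on
    system_ocm -- dict of BSFN name -> mapped server, plus an optional "DEFAULT" entry
    bsfn       -- name of the business function being called
    """
    if caller == "CLIENT":
        # A fat client looks the function up in the system OCM (explicit entry,
        # then the default BSFN entry); no entry at all means it runs locally.
        target = system_ocm.get(bsfn, system_ocm.get("DEFAULT", "LOCAL"))
        return "CLIENT" if target == "LOCAL" else target
    # Once logic is executing on a server, subsequent BSFN calls run on that
    # same server, regardless of any mapping for the called function.
    return caller

system_ocm = {"BSFN_A": "SERVER1", "DEFAULT": "LOCAL"}           # hypothetical mappings
print(resolve_bsfn_location("CLIENT", system_ocm, "BSFN_A"))     # SERVER1
print(resolve_bsfn_location("SERVER1", system_ocm, "BSFN_B"))    # SERVER1 (stays put)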


CNC Veterans on the List...your thoughts / comments
 
That's an interesting discussion. I'm sure there are a few combinations that probably no one has ever tried, because they're neither practical nor usually necessary.

E.g., a client runs a BSFN on SERVER1, which calls another BSFN that is mapped in that server's Server Map to SERVER2. There could be two outcomes here: it either ignores the map and runs it on SERVER1, as ice_cube suggested, or actually uses the map and runs the second function on SERVER2. I'd expect the latter, but who cares - I don't expect to ever need to do this.

Anyway, a Server Map usually has lots of LOCAL mappings created by the Installation Plan, which I delete every time I see them. Such mappings are, of course, redundant, because the functions will run on the same machine regardless of their presence.

Hence, in both cases in question, the second function will run on the same Server, just as ice_cube said...
 
Alex,

I've actually done this. I have a business function which absolutely, positively does not run on AIX. The XT4312Z1 business function (pre-baseline ESU PF2597) causes a failure on AIX as soon as it hits the F4312EditLine function (XT4312Z2) and calls B4302510. Boom, core dump. In the server map for SERVER1, I mapped B4302510 to SERVER2 (which is HP-UX, not AIX), and when a PO Receipt is executed on SERVER1, the JournalEntries BSFN is executed on SERVER2. This keeps the app from bombing out when PO Receipts are run. This only affects PO Receipts if Receipt Routing is used or if the entire process is executed through XML or JConnector (like DSI or webMethods accessing the Enterprise Server).
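
Roughly, the behavior described above looks like this (illustrative Python only, not JDE code; the map structure is a simplification and the server names are just the placeholders used in this thread):

# Illustrative sketch only -- it models SERVER1's Server Map sending B4302510 to
# SERVER2 while every other call made on SERVER1 keeps running on SERVER1.

# Hypothetical per-server Server Map: BSFN name -> target ("LOCAL" = this server)
server_map = {
    "SERVER1": {"B4302510": "SERVER2"},   # the AIX box redirects the JournalEntries BSFN
    "SERVER2": {},                        # the HP-UX box has no BSFN overrides
}

def resolve_on_server(current_server, bsfn):
    """Where a BSFN called from code already running on current_server executes."""
    target = server_map[current_server].get(bsfn, "LOCAL")
    return current_server if target == "LOCAL" else target

# The PO Receipt call chain described above, once it is running on SERVER1 (AIX):
for called in ["XT4312Z2", "B4302510"]:
    print(called, "runs on", resolve_on_server("SERVER1", called))
# XT4312Z2 runs on SERVER1
# B4302510 runs on SERVER2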
 
Charles,

Great, this nicely confirms my expectations. Thanks for sharing.
 
Yeah, but I agree that it isn't practical, just as you stated. I didn't leave it that way in production because there are too many ways a Call Object kernel on SERVER1 can go down if the remote server (SERVER2 running HP-UX in my example) becomes unavailable (network, JDE app restart, server reboot, etc.) while a transaction is executing on SERVER1.

I did it merely to prove a point to the vendor that the business function doesn't function properly on all platforms...which is the concept behind CNC in the first place. It isn't quite "write once, run anywhere", but it is very close.
 
Thanks for sharing that, Charles. I wanted to try that too but never got to it... Denver always said that the Server Map OCM should not have BSFN mappings, so I thought it might not work...
 
So let me get this straight.

BSFN1 is running on SERVER1 and calls BSFN2.

(1) If there is no mapping for BSFN2 in the server map, it will also run on SERVER1.
(2) If there is a mapping for BSFN2 in the server map that says to run locally, it will run on SERVER1.
(3) If there is a mapping for BSFN2 in the server map that says to run on SERVER2, it will in fact run on SERVER2.

One of the reasons I am asking is similar to what you have described. In my case, I have Vertex BSFNs that will only run on a specific AS/400. The server I am currently working with is an NT app server, and these Vertex BSFNs will not run there (the Vertex load libraries do not exist). So I want to map these to the AS/400, and based on your comments, it sounds like I should be able to do this. Correct?

One other OCM question. You reference DSI and XML, and we are currently exploring going to XML for DSI. We have been told that when running DSI with XML, no system OCM is used - you just point to the box you want to run on. Is this true? And if it is, once a BSFN is running on that box, will it then use the server OCM for BSFNs that are called from within that BSFN?

Thanks to all for the explanations and thanks to you for indulging me a bit further.
 
You should be able to map the Vertex functions from one server to another - but I've not used AS/400 so your mileage may vary. That being said, I don't see any reason why it wouldn't work so long as you tweak the configuration for your needs and test thoroughly in a development environment.

To answer your second question, you have been correctly informed. There is no system OCM to worry about - the DSI application is configured per function call, and each call can be toggled between OWFC (OneWorld Function Caller) and XML.
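
For what it's worth, a call dispatched via XML is just a document sent to the box you point at. Below is a rough sketch in Python of the general shape of such a request; the jdeRequest attributes, the BSFN name, and the parameter names are assumptions for illustration, not a definitive spec - check the interoperability documentation for your release before relying on any of them.

# Rough illustration only: builds something shaped like an XML call object request.
from xml.sax.saxutils import escape

def build_callmethod_request(user, pwd, environment, bsfn, params):
    """Build a jdeRequest 'callmethod' document as a string."""
    param_xml = "".join(
        f'      <param name="{name}">{escape(str(value))}</param>\n'
        for name, value in params.items()
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        f'<jdeRequest type="callmethod" user="{user}" pwd="{pwd}" '
        f'environment="{environment}" session="">\n'
        f'  <callMethod name="{bsfn}" app="DSIExample" runOnError="no">\n'
        '    <params>\n'
        f'{param_xml}'
        '    </params>\n'
        '  </callMethod>\n'
        '</jdeRequest>\n'
    )

# Hypothetical call: the BSFN and parameter names below are placeholders.
print(build_callmethod_request("JDE", "JDE", "PRD7333",
                               "SomeBusinessFunction", {"mnAddressNumber": 1001}))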

In the case of OWFC, you can also choose to implement OCM mappings to the same server you would point to for transactions based on receiving an XML document. This can resolve some performance issues, especially those where you are limited by the number of concurrent threads in the local JDE fat client.

As far as the XML function caller goes, I spent a lot of time implementing this solution late last year, and the performance has been outstanding. Granted, we purchased a couple of pSeries 550s and loaded them with 12GB of RAM, but performance is also decent when pointed to a Windows app server. The limit on Windows is the amount of memory you can address, which means that if memory leaks are present in any of the code accessed by DSI, you're in for a real "treat".

I wouldn't go back to OWFC if my life depended on it. We were able to reduce the number of production TranServers from 17 to 5 by making the transition. Performance is markedly improved, help desk calls have been cut in half, and the number of API Call timeouts has dropped substantially - some days we see zero.
 
Thanks very much for the information. It was very well timed. We just ran a benchmark test this afternoon for DSI XML on an AS/400 and an NT app server. Both eventually performed about the same. It seemed that the more we ran this particular set of test transactions, the more response time improved. My guess is that initially there is some overhead in starting kernels and building cache. In this particular test, one AS/400 enterprise server has the path code and data, and another AS/400 enterprise server has the system tables. So I am assuming that once all the security and other system-related data had been retrieved into cache and all the call object and network kernels had started, that added overhead disappeared. It definitely looks like a lot of memory is needed. Hopefully we will be able to implement this soon. We currently have about 40 TranServers in our production and user acceptance environments, which, as you know, is a maintenance nightmare. It's good to know there are other large DSI customers out there. Thanks again.

Harold Mills
 
There is definitely some caching going on, and performance always improves after the first few transactions have executed on each Call Object kernel. What I've deduced from the difference between OWFC and XML-based transactions is that environment initialization time is spent for every DSI_OWSA.exe thread, but hardly any for XML transactions against a kernel. I can tell you that in our installation, the average transaction time dropped from 15-25 seconds to less than 5. Granted, we have a hell of a lot of data in the production database that needs to be groomed, purged and archived, but it says something for the technology to improve performance that much without the requisite data activities.

We average around 110,000 DSI transactions per day. To accommodate this load and to account for simultaneous transactions, we configured 30 Call Object kernels to auto-start and set the maximum to 60. We set up half that number for the XML dispatch kernel; this accounts for a lot of the initial RAM utilization.
 