JDE 9.0 on WAN

Chan Rana

Legendary Poster
We are thinking of moving JDE 9.0 to an offsite data center across the WAN. As a test we moved our test servers and found that the response was very poor and unacceptable, basically because the System DB was not local to the test servers (see the latency probe sketched after the questions). That being said, we are trying to find the best approach to running JDE servers across the WAN. Below are some of the questions I am looking to have answered or discussed:
-Is it feasible to split DV & CRP on the LAN and Prod on the WAN, or vice versa?
-Does anybody have this architecture working, and what are the areas of concern for getting reasonable performance/response?
-Do you perform data replication across servers?
-What are the pain points of dual maintenance?
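
For context, a probe along these lines is how the round-trip cost can be quantified. This is only a sketch, assuming a pyodbc-capable client; the DSN, credentials and the dummy query are placeholders for whatever your System data source actually uses:

```python
# Rough WAN latency probe: time N trivial round trips to the System
# data source. DSN, credentials and the dummy query are placeholders;
# point it at whatever your System DB actually is.
import time
import pyodbc

CONN_STR = "DSN=JDE_SYSTEM;UID=probe;PWD=secret"  # placeholder
N = 100

conn = pyodbc.connect(CONN_STR)
cur = conn.cursor()

start = time.perf_counter()
for _ in range(N):
    cur.execute("SELECT 1")  # use SELECT 1 FROM SYSIBM.SYSDUMMY1 on DB2
    cur.fetchone()
elapsed = time.perf_counter() - start

print(f"avg round trip: {elapsed / N * 1000:.1f} ms")
```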

Would like some input from the experts in this field.

Thanks,

Chan
 
God - this is going to be a nightmare for you. I've seen companies attempt to split data centers between Production and Test/Development, and when it comes to JDE, it's a total nightmare. To be honest, it's a total nightmare all around. As far as JDE is concerned: where do you position the deployment server? Where do you position the system tables, data dictionary, object librarian, etc.?

JDE isn't designed to have certain environments work independently of the entire system. That's why we have a single "system" data source. No matter what you do, one side or the other is going to suffer performance issues.
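
To put illustrative numbers on that (the per-form call count below is an assumption for the sake of arithmetic, not a measured figure):

```python
# Back-of-envelope math: the same chatty traffic over LAN vs. WAN.
# calls_per_form is an assumed figure for illustration only.
lan_rtt_ms = 0.5      # same-site round trip
wan_rtt_ms = 40.0     # inter-site round trip
calls_per_form = 300  # assumed system-table/DB calls to open one form

for label, rtt in (("LAN", lan_rtt_ms), ("WAN", wan_rtt_ms)):
    wait_s = calls_per_form * rtt / 1000
    print(f"{label}: {wait_s:.1f} s of pure network wait per form open")
# LAN: 0.1 s of pure network wait per form open
# WAN: 12.0 s of pure network wait per form open
```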

To be honest, the only way to do this reliably is to run two different implementations of JDE and have certain tables (such as security) replicated between the two systems. Two deployment servers, two of everything. That makes the cost of managing your implementation rise considerably, and that cost alone should offset the estimated "savings" versus instead having a properly architected disaster recovery environment.
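
For what the replication piece of a two-implementation setup would mean in practice, think of it as a scheduled copy of the shared tables. A minimal sketch, assuming ODBC access to both instances; the DSNs and credentials are placeholders, F00950 (the security workbench table) is just an example of the security tables mentioned above, and any real deployment would use the database vendor's native replication instead:

```python
# Crude one-way sync of a shared table between two E1 instances.
# Placeholder DSNs/credentials; SY900.F00950 (security workbench) is
# the example, and a full delete/reload is the simplest possible model.
# A real setup would use the database vendor's native replication.
import pyodbc

src = pyodbc.connect("DSN=E1_PROD;UID=sync;PWD=secret")  # placeholder
dst = pyodbc.connect("DSN=E1_DEV;UID=sync;PWD=secret")   # placeholder

rows = src.cursor().execute("SELECT * FROM SY900.F00950").fetchall()
if rows:
    cur = dst.cursor()
    cur.execute("DELETE FROM SY900.F00950")
    marks = ",".join("?" * len(rows[0]))
    cur.executemany(f"INSERT INTO SY900.F00950 VALUES ({marks})", rows)
    dst.commit()
```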

And then, what is the benefit of splitting DEV/TEST vs. PROD? Why not just have a second implementation installed as your disaster recovery option?

Test/CRP as a disaster recovery site IS going to be a disaster for your company. And, of course, it doesn't work, because production isn't replicated back to Test/CRP. Whoever the bozo is that came up with this needs to be educated. They need to understand that "development IS production": it costs you $$$ to maintain development, and if development isn't properly regarded in the same way as production (not the data, but the code), then your company is going to have a huge disaster on its hands.

Just say no. Customers are NOT always right. Especially in this regard.
 
Jon,
Yes, based on the test we did I would say no, but I want to know what others are doing, just to make sure I am thinking in the right direction.

Chan
 
By the way - if the customer is on 9.0, they need to buck up their ideas and upgrade to 9.2, or at least have a plan! :)
 
A customer of ours did something similar. They have a single global E1 instance, but development is in one region and the production instance is in another. To support this they split their E1 instance in two. The development site had PS910, DV910 and PY910, with its own local deployment server, database, etc., so it was a self-contained instance.

The process worked by an OMW project being created in the production instance with the objects either created or checked out. This project was then backed up as a PAR and restored into the development instance. The developers would work in DV910, promote to PY910 for system testing, and once signed off, the objects would be packaged back up as PAR backups and moved back to the production instance, which had PS910, DV910, PY910, UA910 and PD910 path codes. In some situations changes were also made directly in the production instance, and these then had to be moved back to the development instance as well.

The biggest trick was trying to make sure that the tokens and checkouts matched between the two instances so everyone knew who was working on what. The other major pain was that they had 7 TB of business data, so refreshing the development instance data required physically taking an external HDD from the production instance to the development instance. They did not keep anything else, including versions, in sync; all version, menu, UDC etc. changes were done in the PY910/JPY910 environment on the production instance.
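
On that token/checkout reconciliation, one low-tech option is a periodic diff of the checkout lists from both instances. This is purely a hypothetical sketch; it assumes each side can export its checkouts as a CSV with object_name, user and pathcode columns, which is not a standard E1 report:

```python
# Hypothetical checkout reconciliation between two E1 instances.
# Assumes each instance exports a CSV of object_name,user,pathcode;
# the export itself is left to you (SQL extract, report, etc.).
import csv

def load(path):
    with open(path, newline="") as f:
        return {row["object_name"]: row for row in csv.DictReader(f)}

prod = load("prod_checkouts.csv")
dev = load("dev_checkouts.csv")

for obj in sorted(set(prod) | set(dev)):
    p, d = prod.get(obj), dev.get(obj)
    if p and d and p["user"] != d["user"]:
        print(f"CLASH       {obj}: prod={p['user']} dev={d['user']}")
    elif p and not d:
        print(f"PROD ONLY   {obj}: {p['user']}")
    elif d and not p:
        print(f"DEV ONLY    {obj}: {d['user']}")
```

Run on a schedule, that at least surfaces a token clash before two developers spend a week on the same object.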

As you have already discovered, trying to split a single E1 instance between data centres is not going to work due to network latency.
 
Thanks, Jon and Russell, for the feedback. In the end we will keep everything at the same location instead of splitting it, to keep things simple.

Thanks again.

Chan
 
Glad to hear that common sense prevailed!!! Hopefully others will take note and use this thread as a basis for any argument against splitting up data centers!

If you want low-cost Disaster Recovery - talk to a cloud provider like AWS - don't impact your production/development environments!!!
 
Well, it's not so much common sense as business sense. But my next question would be: what are others doing for DR? Logically, Dev and CRP would be more cost-effective hosted locally in-house than hosted in a data center with network lag.

Chan
 