I don’t want to get into the ‘to VM or not to VM’ debate; it’s a company standard here now, and almost everything being updated is being virtualised. Oracle advocates it as part of the cloud computing paradigm; mind you, I’m old and cynical enough to recall doing ‘cloud computing’ eleven years ago with Citrix!
Currently I have 80 users at peak time across 2 quad-core OAS boxes and it barely touches them. I also have massive application and batch servers; average utilisation is less than 10%. In 2010 and 2011 that will change as we migrate a bespoke legacy system to JDE, and by the end of 2011 the user count will have grown to 800, with around 600 concurrent at peak times when the US and UK time zones cross over. The business itself has a very low transaction level, in fact I'm astonished at how low, and most of the activity is browsing.
My design has to take into account some form of failover, and personally I prefer an active/active model wherever possible, as in standard clustering models there is a very expensive box sitting there doing nothing. JDE doesn’t do seamless failover (yet). I know it can be emulated, but I’ve set an expectation of around 10 minutes’ outage plus callout time, which is massively better than the legacy system could do; given that two years ago an outage of more than an hour on JDE happened at least twice a month, that’s a considerable improvement. It’s enough for us; it’s not like a production line will stop if the system is down.
So my design (and bear in mind this won’t actually be built for a few months) calls for 10 or fewer WebLogic Servers, virtualised; since WebLogic isn’t even out until March, I’m not going to get any figures for it at the moment. Experimentation with OAS suggests I will need around 12-14 servers. (On the whole I’m not best impressed with OAS, but it’s easier to maintain than WebSphere.)
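The arithmetic behind that estimate is simple enough to sketch. This is only a naive linear extrapolation from the figures above (80 peak users on 2 boxes, 600 projected concurrent); real per-server capacity would come from the vendor sizing exercise, not from this:

```python
import math

# Back-of-the-envelope OAS sizing sketch. Numbers are from the post;
# the per-server capacity is a naive linear extrapolation, not a benchmark.
current_users = 80          # peak users today
current_servers = 2         # quad-core OAS boxes
target_concurrent = 600     # projected peak concurrency, end of 2011

users_per_server = current_users / current_servers   # 40, assuming linear scaling

servers_needed = math.ceil(target_concurrent / users_per_server)
print(servers_needed)  # 15 at today's density
```

At today’s density that gives 15 boxes; the 12-14 figure implicitly assumes the new kit carries somewhat more per server, which is exactly the kind of assumption the sanity check at the end of this post is meant to test.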
Behind that will be two BSFN Application servers and three Batch servers, one general and the other two feeding two Formscape servers through the OSA by function module.
The SQL server will be real tin; the deployment server can be virtual. There’s also a development system to take into account, and again it will follow the same model: real tin for SQL, virtual for everything else. The whole lot will run Windows 2008 64-bit with SQL Server 2008 64-bit, running from SAN storage arrays.
So the idea is that I have lots of servers, most of them not doing a huge amount, with enough capacity to cope with peak loads and the failure of a ‘server’. I’ve architected and built systems like this before, so I know it works; the only new bit for me is VM. This year we don’t have the budget for a full-on geographically dispersed HA system; JDE is merely one of a number of client-facing critical systems, none of which have a full HA solution, and it’s outside the scope of this project.
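That “cope with peak load plus the failure of a server” idea is the usual N+1 check, which can be expressed in one line. The per-server figure here (50 users) is purely illustrative, not a measured capacity:

```python
# N+1 headroom check: can the pool still absorb peak load after losing one server?
# The users_per_server figure is an illustrative assumption, not a benchmark.

def survives_single_failure(servers: int, users_per_server: float, peak_users: int) -> bool:
    """True if the remaining servers can still carry the peak load."""
    return (servers - 1) * users_per_server >= peak_users

# e.g. 14 servers at ~50 peak users each against 600 concurrent:
print(survives_single_failure(14, 50, 600))  # True: 13 * 50 = 650 >= 600
```

The same function works the other way round during design: keep adding servers until losing one still leaves you above the projected peak.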
The ESX arrays themselves will be dedicated to the JDE infrastructure; in Q3 I am proposing to cut over to the new server system, which will liberate 16 or so servers that they can do what they like with.
As an organisation we will be inviting vendors in to size the system according to how many physical boxes we would need to support this, and then ‘add some’. So the question I originally posted was to gauge from those who’ve already done it roughly what they got from OAS, so as to perform a sanity check; I don’t need a really accurate result or formulas, just a feel for it…
Thanks all