Moving from Solaris to x86 RedHat

qaqqini

Active Member
We have been running our JDE system on Solaris servers, but are now looking to migrate it onto x86 architecture with toolset 8.98. Our current Solaris servers have proved very reliable and robust over the years, and we are concerned that x86 servers don't offer the same level of robustness.
Just checking to see if anyone is running on x86 servers with RedHat, and whether there are any issues or patches we need to know about so that we can prepare well in advance and avoid any last-minute surprises.

thanks!
 
I'm not sure that your fears are really warranted.

If you're asking whether Linux is as reliable as Solaris - then I think most people will agree that, once configured, Linux is JUST as reliable as Solaris. Linux machines hold some of the longest uptimes of any servers on the internet (though FreeBSD is still the best!).

The same can be said for Windows as well, however. Once you configure an OS to run an application "perfectly" - provided you never update, modify or reconfigure the application or the OS - it will theoretically run "forever"! The issue is, of course, the constant "tampering" and "tinkering" we all do with OUR application - JDE!

On the other hand, if you're asking whether Intel/AMD-based hardware is as reliable as Sun hardware - then it really depends on the server manufacturer. I've seen very, very good servers built over the years from all of the x86 server manufacturers. Servers like the IBM Netfinity 7000 are literally pieces of iron that run forever (I just gave away a Netfinity 7000 M10 - it had been running for 10 years nonstop! The new owner is a computer student - I believe it'll run for another 10 years if he wants to use it that long!).

HP and Dell servers are also architected very robustly for the data center. When it comes to the standard quad- or octo-core servers traditionally used for JDE database servers, I've never really seen one that wasn't designed to run on and on in the data center.

The amount of redundancy built into a server is important to understanding how reliable it will be. My old Netfinity was designed with EVERYTHING redundant in mind - from triple power supplies through to backup memory boards, redundant cooling, etc. Of course, the Netfinity was a $60,000 machine at BASE price back in the late '90s!

These days, the danger is that some server manufacturers aren't concentrating on "internal" redundancy as much anymore, often relying instead on "external" redundancy provided through the application - i.e., through cluster technology like Oracle RAC.
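
To illustrate what I mean by "external" redundancy: with something like RAC, it isn't the box that protects you, it's the fact that the client can fail over to a surviving node. Here's a rough sketch of connect-time failover using the cx_Oracle driver - the node names (racnode1, racnode2), the service name (jdeprod) and the credentials are just made-up placeholders, not anything JDE-specific:

    import cx_Oracle

    # Connect descriptor listing both RAC nodes; Oracle Net tries them in
    # turn (FAILOVER=on) and spreads sessions across them (LOAD_BALANCE=on).
    dsn = """(DESCRIPTION=
                (ADDRESS_LIST=(LOAD_BALANCE=on)(FAILOVER=on)
                  (ADDRESS=(PROTOCOL=TCP)(HOST=racnode1)(PORT=1521))
                  (ADDRESS=(PROTOCOL=TCP)(HOST=racnode2)(PORT=1521)))
                (CONNECT_DATA=(SERVICE_NAME=jdeprod)))"""

    # Hypothetical credentials - if racnode1 is down, the connection
    # silently lands on racnode2 instead.
    conn = cx_Oracle.connect(user="jde", password="secret", dsn=dsn)
    print(conn.version)

The point being: that kind of redundancy only protects you if the application layer is actually built to use it.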

However, when I architect a system, I do my research based on the hardware vendor "of choice". I look for a server type that has good redundancy at the component level and fits well into a data center design. I often look at the VMware Hardware Compatibility List and base my decisions on the vendors found there - after all, VMware has a LOT more at stake when they make recommendations!

I put together a design for a customer in 2004 that enabled them to migrate from an AS/400 to Windows NT and SQL Server. The customer opted for HP DL585 servers (clustered SQL Server) with an HP EVA5000 drive array. Back in 2004, the cost was approximately $250,000 including the array, 2 database servers, 3 application servers and 10 Citrix machines. The customer went live at the end of 2004 and has been running ever since - zero unscheduled downtime in 5 years. Certainly they've had some hardware failures, but with the correct architecture they haven't experienced a single minute of unscheduled downtime. And that is on Microsoft Windows!

Today, in my personal data center, I have a mixture of HP ProLiant DL585s and Dell PowerEdge 6500s. Of course, the average purchase price for my gear was under $100 per core because I bought off-lease from eBay! However, if my servers run reliably in my closet at home, then the latest versions of those servers would probably run a LOT better in an air-cooled data center! I run VMware ESXi 4.0 today, with approximately 36 virtual servers in my configuration, mostly running Oracle Enterprise Linux (their rebuild of Red Hat). The only downtime I experience is when I make a change.

My most reliable server at home is a dual-core Dell Precision 670 with 4 GB of memory, running FreeBSD 5.3 on a simple three-disk Ultra320 SCSI RAID 5 partition. It has been in service, continuously, for more than 5 years. It took over from my Netfinity 7000 in 2005 when I needed a faster CPU. Its CPU load average has been 1.93 over those 5 years. Once a month, it automatically downloads the latest updates from FreeBSD, recompiles its kernel and reboots. It sits in a closet, surrounded by other hardware, at an average temperature of 35 degrees C. I bought that machine for $500 when it came off a 3-year lease - it's almost 10 years old now.

So, in summary, you shouldn't really fear what x86 servers are capable of. But it's important to purchase hardware with some level of internal redundancy, and then back it up with an external architecture design that adds further redundancy. I try to imagine what would happen if someone tripped over a cable and unplugged something - what would it take for my architecture to go "down"?
 