SQL Server & RAID Question

bwilkinson

Well Known Member
Yes, I know this is going to be a loaded question, but I figured I would throw it out there.

I am in the process of swapping out my hardware for newer hardware, and the question has to do with the configuration of the RAID drives.

My old box was 2 banks of 6 drives for a total of 12 spindles.

The first 6 (RAID 5) were partitioned into a C: drive and an F: drive, with C: being boot and F: being software. The second set of 6 (RAID 5) contained the database. With a total available space of 600 GB across all drives, we ran out of disk space, so the DV and PY databases' data were not kept online.

The new server will have 2 drives mirrored for boot (C:) and 5 drives set to RAID 5 for the software (F:).

Now comes the big question. For the database I now have 14 drives available, and the question becomes: should I make it 1 set with 14 drives, or break it into 2 sets and have, say, some high-use tables (Sales, Inventory) on one set and other high-use tables (GL) on the other set?

This will give us a total of 3.4 TB of new disk space.

Thanks for the input.
 
Hi,

Here are two comments:

1. JDE doesn't support SQL filegroups, so you'll have to manage them manually. Every time you generate a table, it will be created on the default filegroup, so you'll have to move the table and its indexes manually to the other filegroup.

This is not the case with Oracle and UDB, where JDE lets you choose tablespaces in different locations.

The same happens with indexes. I'd love for JDE to someday decide to support SQL filegroups; we could have a data filegroup on one disk and an index filegroup on another, but alas, it's not supported yet. (A rough sketch of the manual move is below.)
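For anyone curious what that by-hand move looks like, here is a minimal T-SQL sketch. It assumes a target filegroup named BIG_TABLES on its own drive and uses the GL table F0911 as the example; the database name, file paths, index names, and key column lists are placeholders/illustrative only, so script the real definitions from your own system first.

    -- Add the new filegroup and a file for it on another drive (names and paths are examples only)
    ALTER DATABASE JDE_PRODUCTION ADD FILEGROUP BIG_TABLES;
    ALTER DATABASE JDE_PRODUCTION
        ADD FILE (NAME = 'BigTables1', FILENAME = 'G:\MSSQL\Data\BigTables1.ndf', SIZE = 10GB)
        TO FILEGROUP BIG_TABLES;

    -- Rebuilding the clustered index ON the new filegroup moves the table's data with it
    -- (key columns below are illustrative, not the real F0911_0 definition)
    CREATE UNIQUE CLUSTERED INDEX F0911_0
        ON PRODDTA.F0911 (GLKCO, GLDCT, GLDOC, GLDGJ, GLJELN, GLLT, GLEXTL)
        WITH (DROP_EXISTING = ON)
        ON BIG_TABLES;

    -- Every nonclustered index then has to be rebuilt the same way to follow the table
    CREATE INDEX F0911_5
        ON PRODDTA.F0911 (GLANI, GLLT)
        WITH (DROP_EXISTING = ON)
        ON BIG_TABLES;

Repeating that last step for every nonclustered index, every time a table is regenerated, is exactly the manual bookkeeping being described.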

2. The more spindles (disks) you have, the better I/O performance you get, so I'd do the following with your 14 disks:

10 disks for data (either RAID 0 or RAID 5, depending on how large your data is and on your preferences)
2 disks in RAID 1 for the transaction log
2 disks in RAID 1 for tempdb

Separating the transaction log and tempdb from the main data has a huge impact on database performance, and in my opinion you'll get better results from that strategy than from moving some tables away.
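If it helps, the relocation itself is a one-liner per file. A minimal sketch, assuming the dedicated RAID 1 pairs show up as T: and L:, and using placeholder database and logical file names (tempdev/templog are the stock tempdb names; check sys.master_files for yours):

    -- tempdb: repoint its files at the dedicated pair; takes effect on the next SQL Server restart
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\MSSQL\Data\tempdb.mdf');
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\MSSQL\Data\templog.ldf');

    -- user database log: repoint the file, take the database offline, move the .ldf, bring it back online
    ALTER DATABASE JDE_PRODUCTION
        MODIFY FILE (NAME = JDE_PRODUCTION_log, FILENAME = 'L:\MSSQL\Log\JDE_PRODUCTION_log.ldf');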

You're also right to put your OS and JDE binaries on different disks than the DB; that will definitely help.

Finally, try to keep DV, PY and PS on a separate server from PD; even if you only have a few developers, they can submit requests or batches that may kill your server.
 
At one point I had the F09* tables in a separate filegroup, and yes, I had to manage it by hand. A couple of years ago we had a DB crash; I combined everything back together during the recovery of the DB and just never re-split them.

I like the idea of separating the logs and tempdb (on the old box they are currently on the same drive).

One other thing we did to improve throughput: on the old box we had 1 SCSI card with 4 channels (using channels 1 & 2). On the new box we went with 2 SCSI cards with 4 channels each, so boot will be on card 1 channel 1, binaries will be on card 1 channel 2, and the database will be on card 2, channels 1 (7 drives) & 2 (7 drives).

So I am thinking the data DB will be on one channel and the logs and SQL DBs will be on the second channel.

We don't do a lot of dev; it's only me and one other person. We do some testing (when the test DBs actually fit on the server). We do most of our custom reporting out of JDE, so we have very few changes. So splitting the other environments onto another server just won't happen.

Thanks for the info.
 
Here is the cheat sheet, from a DBA with 7 years of JDE experience...

(1) Move all non-production environments (DV, PY, etc.) far away from the 10/12 disks that you have, and also move their databases to another physical server so they do not share memory and other resources.

(2) Create 4 filegroups (primary, big_tables_1, big_tables_2, big_tables_3) for the data files and create two transaction log files (a rough T-SQL sketch follows after this list).

(3) In each filegroup, create 4 files on separate drives (LUNs) and distribute all your big tables between the three big_tables filegroups depending upon usage, leaving the small tables on primary.

(4) Keep tempdb and the transaction logs on the least-used drives/LUNs.

(5) Create, say, 8 drives/LUNs on the 10/12 disks you have. Do not split the disks into 6 + 4 or any other combination. Let the SAN take care of the load.
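Not gospel, but points (2) and (3) could look roughly like the sketch below, trimmed to two files per filegroup to keep it short; every database name, file name, size and drive letter is a placeholder:

    CREATE DATABASE JDE_PRODUCTION
    ON PRIMARY
        (NAME = 'JDE_Primary', FILENAME = 'E:\MSSQL\Data\JDE_Primary.mdf', SIZE = 50GB),
    FILEGROUP BIG_TABLES_1
        (NAME = 'BigTables1a', FILENAME = 'F:\MSSQL\Data\BigTables1a.ndf', SIZE = 100GB),
        (NAME = 'BigTables1b', FILENAME = 'G:\MSSQL\Data\BigTables1b.ndf', SIZE = 100GB),
    FILEGROUP BIG_TABLES_2
        (NAME = 'BigTables2a', FILENAME = 'H:\MSSQL\Data\BigTables2a.ndf', SIZE = 100GB),
        (NAME = 'BigTables2b', FILENAME = 'I:\MSSQL\Data\BigTables2b.ndf', SIZE = 100GB),
    FILEGROUP BIG_TABLES_3
        (NAME = 'BigTables3a', FILENAME = 'J:\MSSQL\Data\BigTables3a.ndf', SIZE = 100GB),
        (NAME = 'BigTables3b', FILENAME = 'K:\MSSQL\Data\BigTables3b.ndf', SIZE = 100GB)
    LOG ON
        (NAME = 'JDE_Log1', FILENAME = 'L:\MSSQL\Log\JDE_Log1.ldf', SIZE = 20GB),
        (NAME = 'JDE_Log2', FILENAME = 'M:\MSSQL\Log\JDE_Log2.ldf', SIZE = 20GB);

The big tables themselves then get moved onto big_tables_1/2/3 by rebuilding their clustered indexes onto those filegroups, as sketched earlier in the thread.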

Hope this helps.
 
A question and a comment:

1. On your SQL Server box why do you need so much space for software (5 spindles)?

2. From your signature you are a Mfg company, so I assume you are using more than just Financials in the software. A general rule of thumb for OLTP environments is that RAID 5 is a poor performance choice.

Cheers,
 
It's actually 3 + 1 dedicated hot spare for software.

You are correct that we use everything from Order Entry and Inventory Control to Mfg Accounting, etc., and it's just how we always set our boxes up. We could always do a stripe set with mirroring, but that is a lot of disk.
 
Thought I would give an update to this thread since we did the upgrade last Friday (3/7). It took us roughly three hours to move everything (DBs and the JDE folder) to the new server. It then took us basically about an hour to reconfigure the box and get it up and running.

The final config ended up like this (all drives 15k RPM):

Controller Card 1
Channel 1: Boot drive (146 GB, RAID 1, 1 drive)
Channel 2: Binaries (270 GB, RAID 5, 3 drives)

Controller Card 2
Channel 1: Production DB (1.9 TB, RAID 5, 8 drives)
Channel 2: Misc DBs (1.3 TB, RAID 5, 6 drives)

The performance increase has been pretty substantial, and that's before we did any index reorgs or stats updates.

The Restatement UBE went from 1 hour to 15 minutes.
The nightly MPS Regen UBE went from 1 hour 45 minutes to 20 minutes.

So far we are pretty happy with the results, which is what counts in the end.

Another thing to note on performance: the old server and new server both had 8 GB of RAM. The big difference is that the old server was a quad dual-core running at 1.5 GHz, while the new server is a single quad-core running at 3.16 GHz with a 1333 MHz bus.

Thanks everybody for your input.
 
Yes, the disk configuration is the most important thing on a DB server. As to the actual server speed, you can use:
http://www.everestsoftint.com/files/ESIBenchmark2008_v1_Setup.exe

to compare the old server's performance to the new one's. In fact, it would be very interesting to everyone on Wintel to see your benchmark results for this new server, if you could collect and post them...
 
Hello Sebastian,

I would disagree with you on splitting the indexes into their own filegroup. This is actually the old strategy I used to use on our systems; however, I was rebuked by Microsoft over that configuration. Basically, what they recommend as the best practice is splitting up your disks into three basic groups: data, logs, and tempdb. They argue that whenever you begin using SQL filegroups, in most cases you lose ground. It's better to have as many spindles as possible underlying your data and indexes than to try to identify certain tables or indexes and then isolate them on their own disks.

That being said, it is recommended that you create multiple files underneath PRIMARY. A good starting point is one file per CPU core.
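As a small illustrative example of that advice (the database name, paths and sizes are all made up), adding three more files under PRIMARY on a 4-core box would look something like:

    ALTER DATABASE JDE_PRODUCTION
        ADD FILE (NAME = 'JDE_Data2', FILENAME = 'F:\MSSQL\Data\JDE_Data2.ndf', SIZE = 100GB) TO FILEGROUP [PRIMARY];
    ALTER DATABASE JDE_PRODUCTION
        ADD FILE (NAME = 'JDE_Data3', FILENAME = 'F:\MSSQL\Data\JDE_Data3.ndf', SIZE = 100GB) TO FILEGROUP [PRIMARY];
    ALTER DATABASE JDE_PRODUCTION
        ADD FILE (NAME = 'JDE_Data4', FILENAME = 'F:\MSSQL\Data\JDE_Data4.ndf', SIZE = 100GB) TO FILEGROUP [PRIMARY];

SQL Server then spreads new allocations proportionally across the files in the filegroup.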

One other huge disk performance consideration is making sure the partition offset is set correctly. When the MBR gets written to the disk, it typically doesn't fill up a full block. As a result you end up with a mismatch between your physical and logical blocks, which can mean a read or write of one logical block turns into two physical disk operations. See Frank McBrath's blog for more information on the subject: http://databasediskperf.blogspot.com/
 
Yep... I saw Frank McBrath in Denver and talked to him then.

He cringed when I asked about separating indices onto their own RAID set... an old practice that now has negative effects.

He did, however, recommend the multiple files (not multiple filegroups).

Colin
 