Windows Enterprise Server Sizing

eSky,

you said
"I personally believe RAID 5 is better than RAID 10 (1+0).
Raid 1+0 have more overhead during write operation. RAID 10 is very good for disaster recovery because every hard disk is mirror and then whole group is stripped on 1 more hard disk. Also for infra you need more twice storage on RAID 1+0 compare to RAID 5."

You're a little mixed up there. RAID 5 has more overhead for writes and is substantially slower at writes than RAID 10: every small RAID 5 write turns into four physical I/Os (read old data, read old parity, write new data, write new parity), whereas RAID 10 just writes both halves of a mirror - two I/Os. RAID 5 in general practice is faster at reads. As you observed, RAID 5 is cheaper than RAID 10.

The thing is, with an OLTP system such as JDE, writes are happening all the time. As someone else observed, if the I/O subsystem is busy doing writes . . .

Also, those long-running "reports" in JDE are probably using database tables as work/sort files, which means they are probably spending most of their time on write operations . . .

So. How large are those F0911 and F0411 tables?
 
[ QUOTE ]

I'm not sure that table partitioning is supported. From the conversations that I have had on the subject, the big risk with table partitioning is down the road: when it comes time to do an upgrade, the Oracle scripts do not know what to do with a partitioned table.


[/ QUOTE ]

You are right that partitioning can pose problems down the road with upgrades, but then it should only be considered for very large tables. Case in point: I know of an F0911 that currently has approx 140 million rows and is 300GB in size. For companies with heavy SOP requirements the sales ledger can get very big very quickly as well; one example has approx 120 million rows, is 250GB in size, and is archived regularly. If I were to include the archived tables it would be 3-4 times that size, and you would probably want to upgrade those archived tables as well.

For large tables like this I think partitioning is the only practical option, especially if you archive the tables on a regular basis. It helps performance and eases maintenance on the tables and database in question, but it still poses a problem come upgrade time. When you consider how slow the JDE table conversions can be, and that you are likely to run the data part of the upgrade at least twice (probably more), my preference is to script the table conversion manually and use the full power of the DB and hardware to speed up the process. It is a riskier option, I know, but one worth considering when you weigh the day-to-day benefits of partitioning against a slightly riskier manual table conversion every 3-5 years at upgrade time.
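For what it's worth, here is a minimal sketch of what year-based partitioning of F0911 could look like on SQL Server 2005 or later (Oracle syntax differs; the PRODDTA schema, the trimmed column list, and the choice of GLFY, the fiscal year, as partitioning key are my assumptions for illustration, not something prescribed by JDE):

-- Range-partition by fiscal year; each boundary value starts a new partition.
CREATE PARTITION FUNCTION pf_F0911_FY (smallint)
AS RANGE RIGHT FOR VALUES (2003, 2004, 2005);

-- Everything mapped to PRIMARY here for brevity; in practice you would
-- spread the partitions across dedicated filegroups on separate arrays.
CREATE PARTITION SCHEME ps_F0911_FY
AS PARTITION pf_F0911_FY ALL TO ([PRIMARY]);

-- Only a handful of the F0911 columns are shown; the real table has far more.
CREATE TABLE PRODDTA.F0911_part
(
    GLKCO nchar(5)       NOT NULL, -- document company
    GLDCT nchar(2)       NOT NULL, -- document type
    GLFY  smallint       NOT NULL, -- fiscal year: the partitioning key
    GLAA  numeric(15, 0) NULL      -- amount
) ON ps_F0911_FY (GLFY);

The day-to-day payoff is that archiving a year can become a near-instant ALTER TABLE ... SWITCH of that partition out to an archive table instead of a massive DELETE.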

Just my 2 pennies worth.
 
[ QUOTE ]


How big is your tempdb? How many physical files is it comprised of? What are the specifications of the volume it is on? Do you have any other volumes on which you could place portions of tempdb?



[/ QUOTE ]

Jeff,

This is the architecture:

Disk Array: RAID 1 of 2 hard disks, consisting of:
1. Tempdta1.mdf - 5 GB (Auto growth enabled)
2. Tempdta2.mdf - 5 GB (Auto growth enabled)
3. Tempdta3.mdf - 5 GB (Auto growth enabled)

4. Templog.ldf - 5 GB (Auto growth enabled)


DISK ARRAY: RAID 5 of 8 hard disks, consisting of:
1. Jde_productiondta1.mdf - 100 GB
2. Jde_productiondta2.mdf - 100 GB
3. Jde_productiondta3.mdf - 100 GB
4. Jde_productiondta4.mdf - 100 GB
5. Jde_productiondta5.mdf - 100 GB

6. Jde_productionlog.ldf - 10 GB (auto growth enabled)

All other files will be on separate arrays.


F0911 and F4111 are occupying approx 70% of the database size.
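For reference, a quick way to verify that split (the PRODDTA schema name is an assumption; adjust to your install):

EXEC sp_spaceused 'PRODDTA.F0911';
EXEC sp_spaceused 'PRODDTA.F4111';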
 
[ QUOTE ]

Disk Array: RAID 1 of 2 hard disks, consisting of:
1. Tempdta1.mdf - 5 GB (Auto growth enabled)
2. Tempdta2.mdf - 5 GB (Auto growth enabled)
3. Tempdta3.mdf - 5 GB (Auto growth enabled)
4. Templog.ldf - 5 GB (Auto growth enabled)

[/ QUOTE ]

What size is the volume that tempdb is on? The reason that I ask is that, at first glance, your tempdb is not big enough. I realize that you have autogrow turned on, but if you are relying on the proportional-fill algorithm and your tempdb files are not pre-sized large enough, autogrow will mess up proportional fill: the one file that grows ends up with more free space than its siblings and then takes a disproportionate share of the new allocations.

Assuming that you have instant file initialization turned on (and you should), there is no reason not to make those tempdb files pretty darn big - the total size should be at least 40GB. Heck, it should be 40GB even if you don't have instant file initialization turned on. If IFI is enabled, I can think of no reason not to make tempdb as large (within reason) as the volume can handle.
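A sketch of what that pre-sizing could look like - the logical file names below are assumptions, so check what the first query returns for your install (and note that IFI is enabled by granting the SQL Server service account the "Perform volume maintenance tasks" right, not through T-SQL):

-- Inspect the current tempdb files; size is reported in 8 KB pages.
SELECT name, physical_name, size * 8 / 1024 AS size_mb, growth
FROM tempdb.sys.database_files;

-- Pre-size all data files identically so proportional fill stays balanced.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdta1, SIZE = 10240MB, FILEGROWTH = 1024MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdta2, SIZE = 10240MB, FILEGROWTH = 1024MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdta3, SIZE = 10240MB, FILEGROWTH = 1024MB);
-- Log files are always zero-initialized, so IFI does not speed this one up.
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 10240MB, FILEGROWTH = 1024MB);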

Run the script that I posted earlier during periods of high load, when the problem UBEs are running. If you see PAGELATCH waits on tempdb, you've got contention on the in-memory allocation bitmaps. If you see PAGEIOLATCH waits on tempdb, you've got contention at the I/O subsystem level.
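If you can't find that earlier script, a minimal stand-in that draws the same distinction (this is just a sketch, not the original script; database_id 2 is tempdb, and resource_description is formatted db:file:page):

SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH_%'      -- catches PAGELATCH_* and PAGEIOLATCH_*
  AND resource_description LIKE '2:%';   -- pages belonging to tempdb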

No matter what, you need more space for tempdb.
 
There's lots of discussion in this thread about I/O separation, I/O isolation, etc. All of these techniques can be helpful, but they aren't going to cut it if the I/O subsystem is simply undersized and starved from a potential I/Os-per-second (IOPS) perspective.

To me (and I am no expert), 7 HDDs on a 400GB database is not enough. I realize that GB stored does not translate into IOPS, but typically lots of IOPS do lead to large DBs. ;-)
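To put rough, illustrative numbers on it (assuming ~180 IOPS per 15K drive and the usual RAID 5 small-write penalty of 4 - both assumptions, not measurements from this thread):

7 drives x ~180 IOPS       = ~1,260 raw IOPS
reads (penalty 1)          : up to ~1,260 IOPS
writes (RAID 5, penalty 4) : ~1,260 / 4 = ~315 IOPS
50/50 OLTP mix             : ~1,260 / (0.5 x 1 + 0.5 x 4) = ~500 IOPS

A busy JDE instance can chew through a few hundred IOPS without breaking a sweat.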

My environment is AS400, so we're not apples to apples here, BUT HDD technology is pretty much the same: 15K RPM 3Gb/s SAS drives are the same on Wintel or AS400.

I have a 70GB production database (non-Unicode) with a 40GB F0911 (28 million rows). I have 18 x 15K RPM 3Gb/s SAS drives behind a 1.5GB battery-backed RAID cache, and my I/O system is still my most often over-utilized resource.

Years ago we upgraded from an older AS400 (eServer) 720 with 32 x 7.2K RPM 9GB Ultra2 SCSI hard drives to a much stronger iSeries 270 with only 10 x 10K RPM U320 SCSI drives, and our system ground to a halt! Our drives were immediately running 100% busy, with overall system response times in the multi-second range. We had traded 32 spindles for 10; each drive was faster, but the aggregate IOPS dropped. The solution: the best RAID card with the biggest cache we could find, plus 8 more HDDs (which maxed out that particular AS400 chassis).

In the world of databases, a properly sized I/O system is essential for proper system performance.

Good luck.
 