JDE Archiving odd question

JMast

Reputable Poster
Hello All,

We are a small shop that has a very high transaction rate compared to our revenues. We are adding 400 MB to our Production database daily even though we are less than 50 million in sales. This means we don't easily have the budget for an ArcTools or Klik-It.... type archiving tool (especially since Larry requires such massive licensing fees). Suppose I use the JDE purge jobs on the F0911 now and 2 years from now we manage to get a proper archiving tool. Will the archiving tool correctly archive the A/R and A/P detail without the attached G/L records in the F0911?

Thanks,

Jer
Newbie to archive/purging
 
[ QUOTE ]
Hello All,

We are a small shop that has a very high transaction rate compared to our revenues. We are adding 400 MB to our Production database daily even though we are less than 50 million in sales. This means we don't easily have the budget for an ArcTools or Klik-It.... type archiving tool (especially since Larry requires such massive licensing fees). Suppose I use the JDE purge jobs on the F0911 now and 2 years from now we manage to get a proper archiving tool. Will the archiving tool correctly archive the A/R and A/P detail without the attached G/L records in the F0911?

Thanks,

Jer
Newbie to archive/purging

[/ QUOTE ]

Jer - to the best of my knowledge, the JDE F0911 purge job just goes out and whacks records without bothering to worry about minor little details like keeping the ledgers in balance. Even the high-end archiving tools have issues purging the F0911 table. If your company can't afford a good archiving tool and a good consultant to set it up and test it, then you may be better off just investing in more disk space and living with an ever-expanding F0911. We purchased an expensive tool but did not purchase a good consultant to implement it, and our F0911 purge attempt was a train wreck.

- Gregg
 
[ QUOTE ]
Hello All,

We are a small shop that has a very high transaction rate compared to our revenues. We are adding 400 MB to our Production database daily even though we are less than 50 million in sales. This means we don't easily have the budget for an ArcTools or Klik-It.... type archiving tool (especially since Larry requires such massive licensing fees). Suppose I use the JDE purge jobs on the F0911 now and 2 years from now we manage to get a proper archiving tool. Will the archiving tool correctly archive the A/R and A/P detail without the attached G/L records in the F0911?

Thanks,

Jer
Newbie to archive/purging

[/ QUOTE ]

Data compression is now available for certain editions of SQL 2008:

http://msdn.microsoft.com/en-us/library/dd894051%28SQL.100%29.aspx
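If you do go that route, turning it on is basically just a rebuild per table and index. A minimal sketch, assuming your JDE data sits in the usual PRODDTA schema and you are on an edition that supports it (in 2008 that means Enterprise/Developer):

[ CODE ]
-- Rebuild the G/L detail table and its indexes with page compression
-- (PRODDTA is the typical JDE data schema - adjust to your environment)
ALTER TABLE PRODDTA.F0911 REBUILD WITH (DATA_COMPRESSION = PAGE);
ALTER INDEX ALL ON PRODDTA.F0911 REBUILD WITH (DATA_COMPRESSION = PAGE);
[/ CODE ]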


BTW, what the heck does your organization do that it adds 400 MB a day?
 
Greg,

Thanks for the reply. It is amazing what a split personality the JDE F0911 archiving job has. If you read the posts on the List, some people use it with no problems; others say it is a nightmare. You are correct that it does not care about the ledgers, which is the root of my question. Until now, I hadn't heard that the archiving tools can struggle with it, so your tip about getting a consultant is a good one.

As for more disk, that may be what we have to do, but that creates its own issues: longer backup times, slower response for users...

Also, we are using an EMC fibre channel SAN which, I believe, is fully loaded, so it is not a matter of just adding disk.

Jer
 
Jeff,

I have heard some good things about data compression; we will have to look into that when we upgrade to 2008.

As for what we do to add 400 MB a day, we are a manufacturing company that uses almost all the modules, including MRP and forecasting. We just have a lot of transactions happening for a company our size. We have 90 users and most of them are adding transactions to the system.

We average 8,000 SO lines added daily which becomes 16,000 when they get copied to the custom reporting table we use for performance reasons. If you figure 4 lines in G/L per SO line we are adding 32,000 daily just for SO which does not include the couple hundred work orders processed daily.

The Sales Ledger also grows quickly. We purge Sales Ledger monthly, but 3 months is still over 6 million records.

Another factor is the growth of indexes from all the insertions. We get 3+ GB back (out of 251 GB currently) in space each month when we optimize the database.

The crazy part is the business is growing so the database is going to grow more quickly. For example, we are just launching an entirely new division which has huge potential.

At some point we are truly going to have to do something and I am doing what I can to stay ahead of it. That is why I appreciate you guys for giving me things to think about on options.

Jer
 
I think if you go to SQL Server 2008 and turn on compression you'll likely forget all about archiving.

You'll typically see your data cut by a massive amount and performance go through the roof.

This is going to be way cheaper than any commercial archiving tool and so much easier to implement.

Also, adding a decent SAN is not very expensive, and disk is so cheap that archiving won't really make sense until you're around 1 TB.
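If you want a feel for the numbers before committing, 2008 also ships a stored procedure that estimates the savings per table. A quick sketch - the PRODDTA/F0911 names are just the obvious first candidate, so run it against your own biggest tables:

[ CODE ]
-- Estimate how much space page compression would save on the G/L detail table
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'PRODDTA',
     @object_name      = 'F0911',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';
[/ CODE ]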


Colin
 
[ QUOTE ]
Greg,

Thanks for the reply. It is amazing what a split personality the JDE F0911 archiving job has. If you read the posts on the List, some people use it with no problems; others say it is a nightmare. You are correct that it does not care about the ledgers, which is the root of my question. Until now, I hadn't heard that the archiving tools can struggle with it, so your tip about getting a consultant is a good one.

[/ QUOTE ]

My archiving experience is with Optum (formerly Princeton, now IBM). They sold us on the idea that they had fully developed scripts that we could plug in and go. That was not the case. Our one attempt at a GL purge threw subledgers out of balance, reopened closed projects, and just caused havoc. I purged one weekend and spent the following weekend restoring everything. From that perspective, the tool worked fine. It was the script that blew chunks. Sorry, Princeton guys, gotta be honest.

[ QUOTE ]


As for more disk, that may be what we have to do, but that creates its own issues: longer backup times, slower response for users...

Also, we are using an EMC fibre channel SAN which, I believe, is fully loaded, so it is not a matter of just adding disk.

Jer

[/ QUOTE ]

I liked Colin's ideas - focus on getting up to SQL 2008 and drop some money on a better SAN. We have a split personality on our SAN for JDE. The production database is running on a tier one SAN. It is very fast, highly redundant and disaster proof. The test and development databases are running on a tier two SAN with SATA drives. On a day to day basis, it keeps up. But when we do things like database refreshes and other high I/O operations, we see the test side of JDE slow down to a crawl. It's amazing the difference a fast SAN, optimized for a database, makes.

That's the other factor. You can optimize SANs very differently for different applications. A SAN for a database should be set up with lots of spindles, high speed drives, and lots of cache. A SAN for file serving should be set up with really big drives. If you have a mixed bag on your SAN, then both database servers and file servers will run suboptimally.

For our 9.0 environment, we are using a SUN/Oracle Exadata database machine. It is four database servers, storage, and a switch all integrated together. It is highly optimized for one task and one task only - running an Oracle database. We expect to get much higher throughput on this machine because it is not a general purpose server like our HPs, and is not on a shared SAN.

- Gregg
 
Colin,

That sounds great. While I have your help, do you think we need to upgrade the server? In other words, what load does compression/decompression add to the system?

We are currently using a Dell PowerEdge 6650 with four 3 GHz Xeon processors and 12 GB of RAM (10 allocated to SQL Server) as the database server. The server is not really worked hard during the day even though it is older. I don't have good metrics for the nightly batch run since we upgraded the RAM and moved to SQL Server 2005.

I would just hate to go to the trouble of upgrading and have a server that is overwhelmed, especially since we do not have a proper test environment for stress testing.

Maybe we should just stay put until we upgrade to 9.x in the next couple years.


Jer
 
[ QUOTE ]
Colin,

That sounds great. While I have your help, do you think we need to upgrade the server? In other words, what load does compression/decompression add to the system?

Jer

[/ QUOTE ]

Jer - yes - upgrade the server when you upgrade SQL. Two benefits. One is the hardware. New stuff is always bigger, better and faster. The second benefit is more significant. It is always very risky doing a major in-place upgrade of something as key as your database. If you have new hardware, you can get everything set, get SQL up and running, and get a copy of the data up and running with compression. You can run some comparisons and dazzle your boss with the new speed, yada yada yada. Then when it is all set, export your data from the old box and import into the new one. Very low-risk procedure, and a MUCH less stressful upgrade process. That is our standard procedure for major upgrades.
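For what it's worth, the actual move can be as simple as a backup/restore pair run during a quiet window. This is only a sketch - the database name and file paths below are made up, so check yours with sp_helpfile first:

[ CODE ]
-- On the old server: full backup of the JDE database
BACKUP DATABASE JDE_PROD
    TO DISK = N'\\newsqlbox\migrate\JDE_PROD.bak'
    WITH INIT, STATS = 10;

-- On the new SQL 2008 box: restore onto the new drives, then rebuild with compression
RESTORE DATABASE JDE_PROD
    FROM DISK = N'D:\migrate\JDE_PROD.bak'
    WITH MOVE 'JDE_PROD_Data' TO N'E:\SQLData\JDE_PROD.mdf',
         MOVE 'JDE_PROD_Log'  TO N'F:\SQLLog\JDE_PROD.ldf',
         STATS = 10;
[/ CODE ]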

- Gregg
 
Gregg,

Sorry, by the way, about losing a 'g' on your name; I just noticed the double.

I hear you on the SAN. I think there is a lot of work to do to optimize that situation. Actually, I look forward to the upgrade and new hardware for that reason. I wasn't involved when we put that thing in, and the overall process with EMC was such a massive nightmare that the allocation of the spindles and what goes where was not well planned out. I am not an expert, but I have a good understanding of database architecture and what should go where. The guy who was here just didn't have any experience, so EMC took the lead. They did a decent job (we have enough space allocated to the data files that we can grow for a while yet), but it could have been done better.

Jer
 
Compression will take more CPU and more memory, but it doesn't appear that you're using much of either.

How old is the server? What does the disk subsystem look like?


How many concurrent users do you have and what JDE modules are you running?

Colin
 
Gregg,

I hear you on upgrading the server. The trick is that it means waiting until the 9.x upgrade, which will have to happen in the next couple of years (along with all the other slackers on Xe or 8.0!). I don't think it will fly to buy a new database server now and then again in a year or so. We also don't want to start the upgrade with a server that is already a couple of years old.

So, based on what you guys have said, we should probably see if we can get by (maybe push the upgrade) without doing JDE archiving.

Thanks again,

Jer
 
Colin,

I would need to do some perfmons overnight to see what the batch jobs, MRP especially, are doing to the server.
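In the meantime, running something like this the morning after the batch window should at least show which statements are chewing up the CPU. It is only a rough sketch against the plan cache, so it just covers whatever is still cached since the last restart:

[ CODE ]
-- Top CPU consumers currently in the plan cache (SQL 2005 and later)
SELECT TOP 10
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.total_logical_reads,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
[/ CODE ]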

The server itself is 4 or 5 years old. I am not a hardware guru, so I don't know what you are looking for in the disk subsystem. It is directly connected to the EMC SAN using HBA cards and fiber.

I would guess 75 or so concurrent (90 total) JDE users and maybe 15 or so RF users through RFSmart.

Jer
 
[ QUOTE ]
Jeff,

I have heard some good things about data compression; we will have to look into that when we upgrade to 2008.

As for what we do to add 400 MB a day, we are a manufacturing company that uses almost all the modules, including MRP and forecasting. We just have a lot of transactions happening for a company our size. We have 90 users and most of them are adding transactions to the system.

We average 8,000 SO lines added daily which becomes 16,000 when they get copied to the custom reporting table we use for performance reasons. If you figure 4 lines in G/L per SO line we are adding 32,000 daily just for SO which does not include the couple hundred work orders processed daily.

The Sales Ledger also grows quickly. We purge Sales Ledger monthly, but 3 months is still over 6 million records.

Another factor is the growth of indexes from all the insertions. We get 3+ GB back (out of 251 GB currently) in space each month when we optimize the database.

The crazy part is the business is growing so the database is going to grow more quickly. For example, we are just launching an entirely new division which has huge potential.

At some point we are truly going to have to do something and I am doing what I can to stay ahead of it. That is why I appreciate you guys for giving me things to think about on options.

Jer

[/ QUOTE ]

Colin is correct below when he states that compression is just delaying the inevitable. Use the compression if you need a short-term fix until you figure out a long-term solution that includes more storage and data-related fixes.

I would have the business take a look at business processes and perhaps find ways to eliminate or consolidate orders.

As for indexes and optimizations I have a couple of thoughts:

- I would be doing index rebuilds as often as possible, given that your massive record adds are fragmenting the indexes pretty badly. Take a look at this to determine fragmentation (there is also a rough sketch after this list); I'm guessing it's pretty bad - http://jeffstevenson.karamazovgroup.com/2008/09/determine-index-fragmentation-in-all.html

- Page Splits must be through the roof if you are still using the default fill factor of 0. The bad thing about changing fill factors to something more in line with your needs is that it will increase the amount of storage space needed.

- Consider a reporting/offload server that is used for inquiries. An RDBMS does well as a read system or a write system, and can function well if the split between reads and writes is not large. Yours is, and that merits a look at splitting the system somehow.

- Storage is pretty cheap. I think your organization is going to have to come to terms with the fact that its current business model and practices add a lot of data. Either change the practices or add storage. It is the cost of doing business with the current practices.
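Here is a rough sketch of the fragmentation check plus a rebuild with a lower fill factor. The 30 percent threshold, the 90 fill factor, and the PRODDTA.F0911 target are only illustrative - adjust them to whatever the fragmentation query actually shows:

[ CODE ]
-- Indexes in the current database that are badly fragmented and big enough to matter
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Rebuild the indexes on a heavy-insert table with a lower fill factor to cut down on page splits
ALTER INDEX ALL ON PRODDTA.F0911
    REBUILD WITH (FILLFACTOR = 90, SORT_IN_TEMPDB = ON);
[/ CODE ]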
 
[ QUOTE ]
Gregg,

I hear you on upgrading the server. The trick is that it means waiting until the 9.x upgrade, which will have to happen in the next couple of years (along with all the other slackers on Xe or 8.0!). I don't think it will fly to buy a new database server now and then again in a year or so. We also don't want to start the upgrade with a server that is already a couple of years old.

So, based on what you guys have said, we should probably see if we can get by (maybe push the upgrade) without doing JDE archiving.

Thanks again,

Jer

[/ QUOTE ]

I beg to differ; you probably need a new database server and SAN now. You can worry about the 9.0 upgrade later, and there is no reason why you cannot use the new database server when that time comes.
 
[ QUOTE ]


The server itself is 4 or 5 years old. I am not a hardware guru, so I don't know what you are looking for in the disk subsystem. It is directly connected to the EMC SAN using HBA cards and fiber.


[/ QUOTE ]

You just justified the cost of upgrading your server. If your server is at least 4 years old, it is well past its prime and out of warranty. We have a rolling policy of phasing out a server once it passes out of warranty, especially for something as critical as JDE. If you wanted to keep it around for a test box, that's fine. But your company is already sitting at the tipping point, ready for a major disruption because of an out-of-warranty server.

BTW - I'm a CNC, not a consultant or a vendor, so I don't have a horse in the race to sell you new stuff.

- Gregg
 
Jeff,

Unfortunately, as the vendor, we don't have the ability to make our customers do what is best for us! Also, our culture is one of extreme customer service, which leads to some of the churn but is a significant part of our business.

Unfortunately, I have been hit with a critical assignment and don't have time to provide detail on all your points. I will say this in summary:
- Performance is actually pretty decent given the situation
- "Disk is Cheap" means different things to different businesses. When you are a small business with an IT budget largely consumed by JDE/Oracle maint fees, upgrading an EMC SAN can be expensive.
- I hear your points and will look into them.


Thanks for the advice,

jer
 
Jeff,

I hear you. Unfortunately, a significant upgrade to the phone system is happening right now, so that's not likely to happen soon.
 
[ QUOTE ]
Jeff,

Unfortunately, as the vendor, we don't have the ability to make our customers do what is best for us! Also, our culture is one of extreme customer service, which leads to some of the churn but is a significant part of our business.

Unfortunately, I have been hit with a critical assignment and don't have time to provide detail on all your points. I will say this in summary:
- Performance is actually pretty decent given the situation
- "Disk is Cheap" means different things to different businesses. When you are a small business with an IT budget largely consumed by JDE/Oracle maint fees, upgrading an EMC SAN can be expensive.
- I hear your points and will look into them.


Thanks for the advice,

jer

[/ QUOTE ]

I know that advice like "spend more money" is not always welcome to the business people, but the alternative - letting the system crash when it runs out of space - is not easy either. It must be communicated that this is the cost of doing business. It's like not budgeting for gas for a car and having record profits as a result... until the car runs out of gas.

"...an IT budget largely consumed by JDE/Oracle maint fees..."

Business system licensing and maintenance fees should not be an IT budget item... just sayin'.
 
Gregg,

I know and my manager knows. We extended the warranty. I don't know all the numbers and the justifications since my manager handles them. He is conservative but spends when needed.

Jer
 