Cardex with 100+ Million Rows

Michelle D.

Active Member
Hi all,
As you can see from the title, our Cardex is quite large: over 100 million rows and growing rapidly every day. I'm familiar with the Item As-Of Generation and have ours up to date as of this spring. However, we can't actually purge anything because we need the transactional detail. In addition to needing usage data, etc., in UBEs, we have several systems outside of JDE that require as much historical data as possible. I can't just purge the data into a new table and then run against both the F4111 and the new purged F4111 table, because that would be just as slow, if not slower; I've only moved the data around, not eliminated anything. The as-of generation is close, but it doesn't capture a low enough level: we need the DCT (document type) as well. I was thinking of copying that program and modifying it to write the summary record at the level of ITM, MCU, LOCN, LOTN, GLPT, and DCT (the standard functionality plus DCT). Then perhaps we could purge.
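For what it's worth, the summary grain I'm describing amounts to a simple aggregate over F4111. A rough sketch in SQL Server terms (the IL* columns are the standard F4111 fields as I understand them, quantities assumed to already be decimal-shifted; the target table name is made up):

```sql
-- Illustrative only: summarize the Cardex at the
-- ITM / MCU / LOCN / LOTN / GLPT / DCT grain.
SELECT ILITM,                      -- short item number
       ILMCU,                      -- branch/plant
       ILLOCN,                     -- location
       ILLOTN,                     -- lot number
       ILGLPT,                     -- G/L category (assumed column name)
       ILDCT,                      -- document type (the extra grain)
       SUM(ILTRQT) AS TotalQty,    -- net transaction quantity
       COUNT_BIG(*) AS TxnCount
INTO   F4111_SUMMARY               -- hypothetical summary table name
FROM   F4111
GROUP BY ILITM, ILMCU, ILLOCN, ILLOTN, ILGLPT, ILDCT;
```

The modified as-of UBE would effectively maintain rows at this grain, after which the underlying detail could be purged.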

We can't possibly be the only company with a very large Cardex whose transactional records are still required by users, UBEs, and other outside systems. What have others done for a solution?

Thanks in advance for sharing!


Well Known Member
We use indexed SQL views to handle it. That also handles the UOM conversion back to primary.

Present that as a JDE table and write a UBE over that data; users never know the difference.

Then turn off the standard as-of generation so it stops eating CPU time.
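For reference, a minimal sketch of the indexed-view approach on SQL Server (view and index names are made up; SQL Server requires SCHEMABINDING, COUNT_BIG(*), and a unique clustered index to materialize a view, and assumes the summed column is NOT NULL, which JDE quantity columns normally are):

```sql
CREATE VIEW dbo.vw_CardexAsOf WITH SCHEMABINDING
AS
SELECT ILITM,                       -- item
       ILMCU,                       -- branch/plant
       ILLOCN,                      -- location
       SUM(ILTRQT)  AS QtyOnHand,   -- assumes quantity already in primary UOM
       COUNT_BIG(*) AS RowCnt       -- required for an indexed view with GROUP BY
FROM   dbo.F4111
GROUP BY ILITM, ILMCU, ILLOCN;
GO
-- Materializing the view: SQL Server now maintains this aggregate
-- automatically on every F4111 insert/update/delete.
CREATE UNIQUE CLUSTERED INDEX IXC_vw_CardexAsOf
    ON dbo.vw_CardexAsOf (ILITM, ILMCU, ILLOCN);
```

A UBE pointed at the view (registered as a JDE table) then reads a few hundred thousand summary rows instead of 100+ million detail rows.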


Well Known Member
Have you considered table partitioning?

The application is unaware of it, but it splits the database table into multiple physical files. I believe you may want to re-consolidate before upgrades, but it should give you a significant speed improvement. You will want to check whether that feature is available in your version and edition of SQL Server.
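A hedged sketch of what that looks like for F4111 on SQL Server, range-partitioning by G/L date (ILDGL is the JDE Julian date column; the function, scheme, index names, and boundary dates are all illustrative):

```sql
-- Boundary values are JDE Julian dates (CYYDDD): 120001 = Jan 1, 2020, etc.
CREATE PARTITION FUNCTION pf_CardexByYear (NUMERIC(18, 0))
AS RANGE RIGHT FOR VALUES (120001, 121001, 122001);

CREATE PARTITION SCHEME ps_CardexByYear
AS PARTITION pf_CardexByYear ALL TO ([PRIMARY]);  -- or dedicated filegroups

-- In practice you would rebuild F4111's existing clustered index onto the
-- scheme; shown here as a fresh clustered index for brevity.
CREATE CLUSTERED INDEX IXC_F4111_ByDate
    ON dbo.F4111 (ILDGL, ILITM)
    ON ps_CardexByYear (ILDGL);
```

Queries that filter on date then touch only the relevant partitions (partition elimination), which is where most of the speedup comes from.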


Well Known Member
For tables that size, you need to pull out all the stops for performance.

For MSSQL Enterprise, you can:
- increase raw IO performance with partitioning, as morglum666 stated
- verify indexes and create new ones as needed
- compress the data
- compress the indexes
- if you use the SQL view ASOF method, create/compress those indexes
- look into writing a SQL stored proc to dynamically build the ASOF data and load a temp table for the report (SQL view still performs well for us so we haven't gone this deep)
- increase SQL memory/procs
- increase IO paths/data files to help subsystem
- etc
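A couple of those, as they might look on the database side (index name is illustrative; `sp_estimate_data_compression_savings` lets you preview the effect before committing):

```sql
-- Estimate savings first, before rebuilding anything.
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'F4111',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- Compress the table (its heap or clustered index).
ALTER TABLE dbo.F4111 REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Compress a specific nonclustered index the same way.
ALTER INDEX IX_F4111_Item ON dbo.F4111
    REBUILD WITH (DATA_COMPRESSION = PAGE);
```

On wide, repetitive tables like the Cardex, PAGE compression often cuts IO substantially at the cost of some CPU, which is usually the right trade at this scale.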

Taking JDE at face value puts a ceiling on performance. But once you go beyond it, you need to document and manage the changes properly or they will burn you.