SQL Performance Difference (IBMi vs SQL Server)

chopper_ops

Member
Hello JDEList!

We are currently in the process of upgrading JDE and switching platforms from IBM i to SQL Server (details below). However, the average time to submit a SQL statement is "drastically" different between IBM i and SQL Server. I tease with the word "drastically" because we are only talking about millisecond differences. Unfortunately, those milliseconds add up for jobs that submit millions of SQL statements (e.g., financial jobs).

Take R007031 for example. Even though the underlying code between 9.1 and 9.2 is identical, the average time to run a SQL statement is 0.2 milliseconds on IBM i and 3.6 milliseconds on SQL Server. I know that doesn't sound like much, but extrapolated over a million records it becomes a large time difference (approximately 3 minutes on IBM i versus 60 minutes on SQL Server).

Here are some details that may help point to the difference:

1. Database has been tuned for the SQL statement we have been using for testing
2. Statistics have been rebuilt
3. RCSI is enabled on SQL Server
4. Enterprise Server and Database are on separate boxes (they were on the same box on IBM i)

Has anyone encountered anything similar? My thought was that RCSI, or the database being on a different box, was probably the cause. However, this is purely a guess. I'm just looking for some things to try (Server Manager settings, etc.) or verification that one or both of those things would account for the time difference.

Here are the specifics of each environment:

Current:
IBM i - V7R1M0
Tools - 9.1.4.5
Apps - 9.1
Oracle Weblogic Server 12.1.2.0.0


Moving toward:
SQL Server 2014
Tools - 9.2.0.5
Apps - 9.2
Oracle Weblogic Server 12.1.3.0.0


Sorry for the long post! Thanks for reading this far :).

Chop
 
You are not likely to achieve 0.2-millisecond SQL statements in your current architecture. The network between the two servers likely has a round-trip time (RTT) of more than 0.2 milliseconds on its own. To measure the RTT between two servers in Windows you need an advanced ping tool such as HRPING. That should tell you the base penalty for splitting the load across two boxes rather than running batch and database on the same system. There are also some SQL-specific tools you can use to "SQL ping" between the two systems to observe the latency further up and down the stack.

Also look at the SQL buffer cache hit ratio. It should be well over 99% with RAM as cheap as it is these days. If it is lower than that, it implies you are faulting to the storage subsystem. With a 3.6 ms response it is unlikely you are actually incurring a physical spinning-disk read, but you could still be hitting the SAN cache or SSD storage. If that is the case, you could add more RAM or enable database compression to make more efficient use of the RAM you have.
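
If it helps, here is a minimal sketch for pulling that ratio from the standard performance-counter DMV (the raw counter has to be divided by its "base" counter, which is easy to forget):

------SQL Code Begin

-- Buffer cache hit ratio as a percentage; divide the counter by its base counter
SELECT (a.cntr_value * 1.0 / b.cntr_value) * 100.0 AS [Buffer Cache Hit Ratio %]
FROM sys.dm_os_performance_counters a
INNER JOIN sys.dm_os_performance_counters b
    ON a.[object_name] = b.[object_name]
WHERE a.counter_name = 'Buffer cache hit ratio'
    AND b.counter_name = 'Buffer cache hit ratio base'
    AND a.[object_name] LIKE '%Buffer Manager%'

------SQL Code End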

The RCSI impact on the queries should be fairly easy to assess, since enabling/disabling RCSI is an online operation. You can run the UBE with it enabled and again with it disabled to observe its specific impact on your operation. RCSI is typically something you want enabled since it solves other issues, so I would be very cautious about turning it off even if it causes penalties like the ones you may observe in this test.
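
For reference, flipping it for a timed test looks roughly like this (the database name below is only a placeholder for your JDE data database):

------SQL Code Begin

-- Check whether RCSI is currently on for the database
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'JDE_PRODDTA'   -- placeholder name

-- Turn it off for one test run of the UBE
-- (ROLLBACK IMMEDIATE kicks out any open transactions, so do this in a quiet window)
ALTER DATABASE JDE_PRODDTA SET READ_COMMITTED_SNAPSHOT OFF WITH ROLLBACK IMMEDIATE

-- ...run the UBE and capture the timings, then turn it back on
ALTER DATABASE JDE_PRODDTA SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE

------SQL Code End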

R007031 runs across date ranges. Do you have any option to restrict the date range for this UBE to, say, the last 90 days?

Hope some of this helped...
 
Take R007031 for example. Even though the underlying code between 9.1 and 9.2 is identical, the average time to run a SQL statement is 0.2 milliseconds on IBM i and 3.6 milliseconds on SQL Server. I know that doesn't sound like much, but extrapolated over a million records it becomes a large time difference (approximately 3 minutes on IBM i versus 60 minutes on SQL Server).

How are you seeing the difference? Are you turning on JDEDEBUG and looking at the timestamps in the logs, or are you looking at something else?

In the SQL Server architecture, are you running JDE services (UBE) on a different box than the database? If so, the network latency would explain the difference.

Do you have antivirus running on the SQL Server box? Try turning it off for the test, along with any other "bloatware" that Windows might have running.

Is the time consistent between tests? That is, if you run R007031 multiple times, do the later runs also average 3.6 ms?

3.6 ms is a big number in E1 terms. I agree that over the course of a transaction, thousands of SQL statements would make this quite a performance drag.

If you can describe your architecture a little more, we might be able to identify something "obvious" - but there may come a time when you need to get a specialist on the job...
 
Thanks for the info fellas! I appreciate the detailed responses. It gives us a few different things to review in order to get a better handle on the difference.

We just went live with 9.2 on Sunday, so the past few days have been dedicated to stabilization. All in all, it went fairly well. Once things calm down a bit, we'll start a more in-depth review using the suggestions you provided.

Again, thanks to Jeremy and Jon! I'll be sure to update if we find anything that could be helpful to the community.
 
Index fragmentation?

------SQL Code Begin

-- Replace databasename with the database that holds your JDE data
USE databasename

-- List index fragmentation, worst first, skipping indexes too small to matter
SELECT dbschemas.[name] AS 'Schema',
       dbtables.[name] AS 'Table',
       dbindexes.[name] AS 'Index',
       indexstats.avg_fragmentation_in_percent,
       indexstats.page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS indexstats
INNER JOIN sys.tables dbtables ON dbtables.[object_id] = indexstats.[object_id]
INNER JOIN sys.schemas dbschemas ON dbtables.[schema_id] = dbschemas.[schema_id]
INNER JOIN sys.indexes AS dbindexes ON dbindexes.[object_id] = indexstats.[object_id]
    AND indexstats.index_id = dbindexes.index_id
WHERE indexstats.database_id = DB_ID()
    AND dbindexes.[name] IS NOT NULL
    -- Comment out the line below to get all indexes
    AND indexstats.page_count > 100
ORDER BY indexstats.avg_fragmentation_in_percent DESC

------SQL Code End
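
If that query flags anything heavily fragmented on the tables the UBE hits, the usual follow-up is a REORGANIZE or REBUILD along these lines (the schema/table/index names below are only placeholders; the 5-30% / 30%+ thresholds are the common rule of thumb, and ONLINE rebuilds need Enterprise edition):

------SQL Code Begin

-- Substitute the schema, table, and index reported by the fragmentation query above
ALTER INDEX [F0911_0] ON [PRODDTA].[F0911] REORGANIZE                    -- moderate fragmentation (~5-30%)

ALTER INDEX [F0911_0] ON [PRODDTA].[F0911] REBUILD WITH (ONLINE = ON)    -- heavy fragmentation (30%+)

------SQL Code End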
 
Also take a look at Processor Power Management:

http://jeffstevenson.karamazovgroup.com/2011/06/windows-2008s-magic-performance-button.html
 
Thanks for the additional suggestions Jeff! We'll be sure to check 'em out!
 