E9.2 Long running F986110 query in R22

Soumen

Reputable Poster
Dear List,

We have been noticing these long-running queries on our new R22 (9.2.7.1) install, coming from almost all Enterprise Servers (Windows). This is the message in the queue kernel logs.
Does anyone know why the SELECT on F986110 would be running so long? We have not seen any other query running longer.
And we have a really fast SQL Server 2019 server.



14368/1204 Sun Mar 19 05:05:05.135000 jdbodbc.c8989
doTimeOutQueryDiagnostics: The following SQL query took 2 seconds which is equal to or greater than QueryExecutionTimeThreshold (2 seconds) for E1User(JDE) with DBProxyUser(JDE).

14368/1204 Sun Mar 19 05:05:05.136000 jdbodbc.c9018
SELECT JCJOBQUE, JCJOBPTY, JCJOBSTS, JCENHV, JCUSER, JCPSWD, JCJOBTYPE, JCSBMDATE, JCSBMTIME, JCJOBNBR, JCEXEHOST, JCORGHOST, JCPROCESSID, JCACTDATE, JCACTTIME, JCFUNO, JCPRTQ, JCFNDFUF1, JCFNDFUF2 FROM JDE920.SVM920.F986110 WHERE ( JCJOBQUE = 'QBATCH' AND JCJOBSTS = 'W' ) ORDER BY JCJOBNBR ASC


14368/1204 Sun Mar 19 05:10:27.150000 jdbodbc.c8989
doTimeOutQueryDiagnostics: The following SQL query took 2 seconds which is equal to or greater than QueryExecutionTimeThreshold (2 seconds) for E1User(JDE) with DBProxyUser(JDE).
 
Two thoughts on this:
- Is Virtual Batch Queue enabled for QBATCH?
- Have you tried optimizing the query via SSMS? Maybe an index would help you here (a quick way to check the missing-index DMVs is sketched below).
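
If it helps, here is one way to ask SQL Server directly whether it has recorded a missing-index suggestion for F986110. This is a generic sketch against the standard missing-index DMVs, not anything JDE-specific, and these DMVs are cleared when the instance restarts.

-- Missing-index suggestions the optimizer has recorded for F986110 (cleared on restart).
SELECT mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
WHERE mid.statement LIKE '%F986110%';   -- statement holds the fully qualified table name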
 
- Yes, Virtual Batch Queue is enabled. Do you think that could be playing a role here?
- There are no index recommendations when I checked through Query Analyzer. Also, the query itself comes back in milliseconds when run directly against the database.
 
Just an assumption; since you had no other idea what it might be, and it comes from user JDE, I thought it might be that.
Also, what's the reason for QueryExecutionTimeThreshold being set to 2 seconds? I only know that setting from a debugging perspective, for tracking down long-running SQL statements, not as a value that is always set in a production environment.
 
I think there's a doc out there on it, but we've seen this at many clients, mostly where more than one Enterprise Server uses a single server map. With SQL Server 2019, it will offer an index, but it has the same fields as one of the JDE-created indexes already on the table. The problem is that SQL Server stops using it due to the lack of cardinality: look at the statement; on a huge table, that predicate may as well match everything, so newer versions of the database engine stop using the suggested index and do a full scan. Now if you have, say, 5 Enterprise Servers running 5 BIP kernels each, that poll query runs a ton.

Doc ID 2882039.1 is what to look at, and I think there are things you can do in SQL Server to force it to always use the JDE index (one possible mechanism is sketched below).
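
For illustration only, one SQL Server mechanism for pinning an index to a statement you cannot change is a plan guide. This is a hedged sketch, not necessarily what the Oracle doc recommends: the index name F986110_CUSTOM is a placeholder, the statement text must match what the kernel actually submits character for character, and the guide has to be created in the JDE920 database.

USE JDE920;
GO
-- Plan guide that attaches an index hint to the queue kernel's poll statement.
-- @stmt is copied from the log above and must match the submitted text exactly;
-- replace F986110_CUSTOM with the real index you want forced.
EXEC sp_create_plan_guide
    @name = N'PG_F986110_QueuePoll',
    @stmt = N'SELECT JCJOBQUE, JCJOBPTY, JCJOBSTS, JCENHV, JCUSER, JCPSWD, JCJOBTYPE, JCSBMDATE, JCSBMTIME, JCJOBNBR, JCEXEHOST, JCORGHOST, JCPROCESSID, JCACTDATE, JCACTTIME, JCFUNO, JCPRTQ, JCFNDFUF1, JCFNDFUF2 FROM JDE920.SVM920.F986110 WHERE ( JCJOBQUE = ''QBATCH'' AND JCJOBSTS = ''W'' ) ORDER BY JCJOBNBR ASC',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    -- the object name in TABLE HINT must match how the table is written in the FROM clause
    @hints = N'OPTION (TABLE HINT (JDE920.SVM920.F986110, INDEX (F986110_CUSTOM)))';
GO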

I had a few clients take the "I don't want to regen that index every 3 days" approach and add a custom index on RJRPDEXHST, RJRPDJBSTS, RJRPDFUSS7, which resolved it to some degree.
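
To make the shape concrete, here is a minimal sketch of a custom index added outside JDE. The key columns are an assumption taken from the logged poll query's WHERE/ORDER BY (JCJOBQUE, JCJOBSTS, JCJOBNBR) rather than the columns named above, and the index name is made up; as noted, anything created outside the tools will be lost if the table's indexes are regenerated.

USE JDE920;
GO
-- Custom (non-JDE) index covering the queue kernel's poll predicate and sort.
-- Column list is an assumption based on the logged statement; the name is a placeholder.
CREATE NONCLUSTERED INDEX IX_F986110_JOBQUE_JOBSTS_JOBNBR
    ON SVM920.F986110 (JCJOBQUE, JCJOBSTS, JCJOBNBR);
GO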
 
Based on the log messages you provided, the SELECT on F986110 is taking 2 seconds or more to execute, which is what triggers the timeout diagnostic. There could be a number of reasons for this. One possibility is that the table has grown very large over time and the query is scanning a lot of rows. Another is that the query is not optimized properly and is making the database do unnecessary work.
 

How many records are in F986110?
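
For reference, a quick way to answer that without scanning the table (database and schema names are taken from the logged statement; adjust if yours differ):

-- Row count for F986110 from catalog metadata (heap or clustered index rows only).
SELECT SUM(p.rows) AS f986110_rows
FROM JDE920.sys.partitions AS p
WHERE p.object_id = OBJECT_ID(N'JDE920.SVM920.F986110')
  AND p.index_id IN (0, 1);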
 
Also, what does index fragmentation look like on that table?

https://www.jdelist.com/community/threads/sql-server-index-fragmentation.57775/post-202439
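
For anyone following along, a typical fragmentation check for that table looks like the sketch below; it is the standard DMV approach, not necessarily the exact query from the linked post.

USE JDE920;
GO
-- Physical fragmentation of every index on F986110, worst first.
SELECT i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'SVM920.F986110'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;
GO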
 
We were able to resolve this by

- Purging the table and
- Adding one Index

Thanks Jeff. I will be using that query to keep an eye on the index fragmentation.
 