SQL Server queries WITH (NOLOCK)

craig_welton

Legendary Poster
Hoping for some info from SQL Server heroes ...
64-bit SQL Server 2008. E1 9.0, tools 8.98.4.2.

UBE performance is lacking compared to the 8.10 install on SQL Server 2005; in some cases batch programs run 3 to 5 times slower. DBA tracing has shown complex queries executing, waiting, and then being sent again with the (NOLOCK) modifier. We understand this can be controlled through the SQL Server timeout settings in the server INI. Does anyone have advice or thoughts on proper settings? The 8.10 (SQL 2005) install is set to an 8-second timeout and 3 retries.

Is the existence of these timeouts indicative of a larger, systemic problem?

thanks,
Craig
 
The NOLOCK is a fallback; you need to find out why it is happening. Do a wait analysis.
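For anyone wanting to start that wait analysis, a common starting point on SQL Server 2008 is the sys.dm_os_wait_stats DMV. This is a generic sketch, not a JDE-specific script; the list of benign wait types filtered out below is my own rough cut:

```sql
-- Top waits since the last service restart (or since the stats were cleared).
-- Filter out background/idle wait types that are normally harmless.
SELECT TOP 10
    wait_type,
    wait_time_ms / 1000.0 AS wait_time_sec,
    waiting_tasks_count,
    wait_time_ms * 100.0 / SUM(wait_time_ms) OVER () AS pct_of_total
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
    'SLEEP_TASK', 'LAZYWRITER_SLEEP', 'SQLTRACE_BUFFER_FLUSH',
    'WAITFOR', 'LOGMGR_QUEUE', 'CHECKPOINT_QUEUE',
    'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT',
    'BROKER_TO_FLUSH', 'BROKER_EVENTHANDLER', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```

Roughly: high LCK_M_* waits point at blocking, PAGEIOLATCH_* at disk I/O, CXPACKET at parallelism.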
 
Hi,

Have you done an upgrade from SQL 2005 to SQL 2008?
If that's the case... have you dropped and regenerated
all indexes and statistics?
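If the indexes and statistics do need regenerating, the DBA-side equivalent (outside of regenerating them through E1) looks roughly like the following. The PRODDTA schema and F0911 table are just illustrative; run this per table, or use sp_updatestats as the blunt instrument:

```sql
-- Rebuild every index on one business data table (F0911 as an example)
ALTER INDEX ALL ON PRODDTA.F0911 REBUILD;

-- Refresh optimizer statistics on the same table with a full scan
UPDATE STATISTICS PRODDTA.F0911 WITH FULLSCAN;

-- Or refresh statistics database-wide in one shot
EXEC sp_updatestats;
```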
 
Thanks Jeff,

Not being a DBA, I'm assuming that's a long-running query analysis? The UBEs are stock JDE integrities. The initial query returns hundreds of thousands of rows and takes longer than the 8-second limit. The same thing happens in SQL 2005 (meaning the NOLOCK re-submit), but the entire UBE is faster there. I know it could be code changes in 9.0, but the problem seems to be across the board (Cost Rollup, Work Order Processing, etc.)

thanks,
Craig
 
Not necessarily a long-running query analysis. There is a reason the queries are running slowly, and it shows up as WAITS: if something doesn't finish immediately, it is waiting on something. An analysis of what that 'something' is is what you need.
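To see what a currently slow query is waiting on right now (as opposed to the server-wide totals), something like the following against sys.dm_exec_requests works on 2005 and up. Again, a generic snippet, nothing E1-specific:

```sql
-- Currently executing requests, what they are waiting on, and who is blocking them
SELECT
    r.session_id,
    r.status,
    r.wait_type,
    r.wait_time            AS wait_time_ms,
    r.blocking_session_id,
    t.text                 AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50          -- skip system sessions
  AND r.session_id <> @@SPID;    -- skip this query itself
```

A non-zero blocking_session_id is the smoking gun for the lock-wait / NOLOCK-retry pattern described above.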
 
Craig,

Disclaimer: I am NOT a DBA

Concept: Could the issue of delay be Unicode?

8.10 - Non-Unicode
9.xx - Is-Unicode

The Data Selection in 8.10 is looking for something like:
Select * from F0911 where CO > '00666'

The Data Selection in 9.x is looking for something like:
Select * from F0911 where CO > ' 0 0 6 6 6'
- (even though it looks like the 8.10 query)

Technically, it is matching values twice the size of those in 8.10. HOWEVER, the database is supposed to be optimized so that it shouldn't matter (or, at least, the difference is inconsequential).
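The size doubling itself is easy to demonstrate; whether it is the performance culprit is a separate question. A quick sanity check in plain T-SQL, not tied to any E1 table:

```sql
-- Unicode (N'...') literals take two bytes per character
SELECT DATALENGTH('00666')  AS ansi_bytes,     -- 5
       DATALENGTH(N'00666') AS unicode_bytes;  -- 10

-- The comparison only gets expensive when the column and the literal
-- disagree: an implicit conversion can defeat an index seek, e.g.
-- WHERE char_column = N'00666' may force a convert-and-scan.
```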

I would, just for kicks, capture one of those long runners and have the DBA create a new index.

I realize this isn't much help, but I'm reiterating a concept that is often missed.

(db)
 
Always love your answers, Daniel, but I have to disagree that creating a new index is among the first things to look at. I try not to muddy the waters with non-JDE indexes if I can possibly avoid it.
 
Thanks Jeff,

Creating the index is not the first thing I would try. Evaluating which index is used for Data Selection - is.

The suggestion to have a DBA create a manual index - would help to identify if a new index would be beneficial (I should have made that clear).

Many of the database tools have matured to the point where the database does a darn good job suggesting new indexes (and what type of index). It can be important to include a DBA, because E1 can only create a simple index (Create index ... values ... over ...). A DBA can look at the advice of the database and create the 'correct' type of index.
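On SQL Server 2005/2008 those suggestions come from the missing-index DMVs (or from the Database Engine Tuning Advisor). A rough query to surface them, generic rather than E1-specific:

```sql
-- Index suggestions the optimizer has recorded since the last restart,
-- ranked by a crude estimate of how much they would have helped
SELECT TOP 10
    d.statement            AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    s.user_seeks,
    s.avg_user_impact      AS est_pct_improvement
FROM sys.dm_db_missing_index_details      AS d
JOIN sys.dm_db_missing_index_groups       AS g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats  AS s ON s.group_handle = g.index_group_handle
ORDER BY s.avg_user_impact * s.user_seeks DESC;
```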

The database will then select the index that best fits the SQL statement (whether that be the one E1 created or the one a DBA created).

In this case, the index creation is a validation. If it works, then create the index in FDA and blow away the index the DBA created (I like to keep things VERY Clean, too). If the E1 Index is slower than the DBA index - I would have the DBA re-create their index.

Pennies....

(db)
 
Good point.
 
Thanks Daniel,

Yes, we're dealing with these on a case-by-case basis, but there must be something wrong on a grander scale. No index was required before, so why create one on a new, stronger box with updated SQL software? That is what the client can't understand.

I am awaiting a response from the client about Sebastian's questions regarding rebuilding indexes and updating statistics.

thanks all, for your input.
 
Craig,

Did you get the NOLOCK thing resolved? I'm at a client using SQL Server, and I'm seeing a couple of instances where a query is fired, then 17 seconds later the same query is sent again with the 'NOLOCK' syntax.

Outside of E1, the queries complete instantly.

(db)
 
Hi Daniel,

The issue ended up being a Windows/BIOS setting at that particular client. Jeff detailed the events here:
2008 SQL server performance

The NOLOCK symptoms were happening everywhere in this case.

Craig
 
Craig / all,

I'm one to guess that the issues might be broader, but only identified in this one area, thus far.

Thanks for the blog post (thanks Jeff) - I've forwarded to the CNC for any sort of confirmation / review.

I'll update the thread with what we find.

(db)
 
In the JDBj.ini you have the parameter:

msSQLQueryAttempts=3

From the docs:

Maximum number of times a MS SQL Server query will be executed within the specified msSQLQueryTimeout period (see msSQLQueryTimeout details); after that, the last attempt of query execution will be done with the (NOLOCK, FASTFIRSTROW) syntax. This last attempt should always succeed in retrieving data, even in deadlock scenarios.

The retry attempts are made only when the following SQL error conditions are detected:
1. TimeOut
2. Timeout expired
3. Serialization failure
4. Deadlock
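Putting the two parameters together, the fragment looks roughly like this. The 8-second/3-retry values come from Craig's post; the section name and the units of the timeout are my assumptions, so verify both against document 660414.1 and your own JDBj.ini (and the analogous server jde.ini settings) before changing anything:

```ini
[JDBj-RUNTIME PROPERTIES]
; assumed section name - check your own JDBj.ini
; timeout before a retry; confirm whether your release expects ms or seconds
msSQLQueryTimeout=8000
; total attempts; the final one is sent WITH (NOLOCK, FASTFIRSTROW)
msSQLQueryAttempts=3
```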


https://supporthtml.oracle.com/epmos/faces/ui/km/SearchDocDisplay.jspx?_afrLoop=3409383711579000&type=DOCUMENT&id=660414.1&displayIndex=5&_afrWindowMode=0&_adf.ctrl-state=iucwbwy4j_57

E1 falling back to NOLOCK indicates an issue somewhere in SQL Server with locking, blocking, waits, etc. In essence, going to NOLOCK is exactly what E1 is supposed to do, but the fact that it has to is indicative of a SQL issue. I wouldn't exactly worry if it happens occasionally, but if it gets to the point where you have to troubleshoot it, there's a problem.

Finding a locking/deadlock/blocking issue is tedious; have fun, and email me if you need some help.
 
What drives me insane about this is that it is a GLOBAL param with no recourse (at least none that I know of) on a query-by-query basis during development. In other words, there are some queries that I know are going to take forever, where I don't mind a dirty read and would simply like to issue the query with the NOLOCK qualifier to begin with. This global setting results in a LOT of unnecessary table I/O and poor performance, IMO, since many queries end up getting submitted multiple times simply because they took a long time to execute when first submitted.
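For context, this is what per-query control looks like in plain T-SQL; it's exactly what E1's data access layer doesn't expose to the developer (PRODDTA.F0911 is illustrative):

```sql
-- Per-query: table hint, dirty reads for this one statement only
SELECT COUNT(*) FROM PRODDTA.F0911 WITH (NOLOCK);

-- Per-session: everything on this connection reads uncommitted data
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM PRODDTA.F0911;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- back to the default
```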
 
So, the underlying question might be: "how long to wait" for a locked record. From a developer's view, if it's locked, why can't I immediately go to a dirty read? Why does the system have me wait 17 seconds before whatever process occurs and I get to do a dirty read anyway?

Meaning, rather than wait: do the select, get notified that it's locked, and go immediately to the dirty read? Are we really looking for the values of the record at the time the statement was committed? Do I really care that someone else might be updating the records at that very specific blink in time?

(db)
 
[ QUOTE ]
What drives me insane about this is that it is a GLOBAL param with no recourse (at least none that I know of) on a query by query basis during development. In other words, there are some queries that I know are going to take forever and I don't mind a dirty read and would simply like to issue the query with the NOLOCK qualifier to begin with. This global setting results in a LOT of uncessary table I/O and poor performance IMO since many queires end up getting subimtted multiple times simply because they took a long time to execute when first submitted.

[/ QUOTE ]

Most developers see NOLOCK as a bit of a dangerous thing, to be used sparingly and only when you are sure that getting dirty data is okay. As a global parameter it is a bad method for solving blocking, essentially bypassing the key database ACID property of Isolation in order to address underlying performance issues.

I wonder what E1 SQL customers would do if they knew that they shouldn't trust the data being returned? Hmmmm, if there was someone with a blog that wrote technical articles....
 
[ QUOTE ]
Do I really care that someone else might be updating the records at that very specific blink in time?

(db)

[/ QUOTE ]

Yes, you do care; not having Isolation violates the very concept of a database. Imagine this scenario:

We execute a transaction where $1B US in revenue is booked, while the transaction is processing (but not before it is committed) a P/L report is run. The page containing the financial data is locked but Scumbag NOLOCK comes in and reads it anyway, because that's what it does. The P/L shows inaccurate data as a result. Probability? Small. Impact? Large.

You might not care about reading dirty data most of the time, probably 99.99% of the time... but you might, and setting a global parameter unless you are 100% sure is not a good idea.
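That scenario is easy to reproduce on a scratch database with two sessions; the table and values below are purely illustrative:

```sql
-- Session 1: book the revenue inside a transaction, but don't commit yet
BEGIN TRANSACTION;
UPDATE Scratch.dbo.GL
   SET Amount = Amount + 1000000000
 WHERE Account = 'REV';
-- ... transaction still open ...

-- Session 2: the P/L report, reading with NOLOCK
SELECT SUM(Amount) FROM Scratch.dbo.GL WITH (NOLOCK);
-- sees the uncommitted $1B

-- Session 1: something goes wrong
ROLLBACK TRANSACTION;
-- the $1B never existed, but the report already printed it
```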
 
My contention: that definition of 'dirty' is seventeen seconds of relativity. And, in today's world, that's an eternity!

I do believe I've found the culprit for the issue I'm reviewing, and will have to review some more-current P4210/W4210A code. The F4211 FS BeginDoc is being called twice: the first time in the Set Grid Focus, and a second time during the Grid Changed Async. That second call is colliding with the original start of the transaction for Credit Memos. There are comments from Oracle that the BeginDocs in the Grid Changed Async are 'temporary'. I just don't know if they have been removed in a more-current P4210. Go figure.

Anyway, shouldn't that locking issue only come into play with transaction processing? Meaning: if a user tries to look up a record while it is in the middle of a transactional process somewhere else? If so, then, logically, the row values are 'clean' until the transaction is committed (since the transaction could be rolled back)?

E1 has a programmatic process for locking rows; I fluster at the database process when the request is a Select (read only). If it were a read or a lock between multiple updates, that could be a different situation, and should be handled/considered by/for the programmatic process.

I've lost all my pennies - so, I'll offer credit (just like any good politician)

(db)
 
[ QUOTE ]
My contention: that definition of 'dirty' is seventeen seconds of relativity. And, in today's world, that's an eternity!

[/ QUOTE ]

I see your point exactly - if you're gonna read dirty data why not get right at it?

[ QUOTE ]
Anyway - shouldn't that locking issue only come into play with Transactional Processing? Meaning; if a users tries to look up a record while it is in the middle of a transactional process, somewhere else? If so - then, logically, the row values are 'clean' until the transaction is committed (since the 'transaction' could be rolled back)?

[/ QUOTE ]

A particular row value may be clean, but not all row values. Ex: a process that updates all rows from 1 to 2. The process is halfway through when you do a NOLOCK SELECT, and you get bad results.
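A sketch of that half-finished-update case (illustrative table, not an E1 one):

```sql
-- Session 1: a long-running update, mid-flight
BEGIN TRANSACTION;
UPDATE Scratch.dbo.Flags SET Value = 2 WHERE Value = 1;  -- millions of rows

-- Session 2, meanwhile:
SELECT Value, COUNT(*)
FROM Scratch.dbo.Flags WITH (NOLOCK)
GROUP BY Value;
-- returns a mix of 1s and 2s that never existed as a committed state
```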

[ QUOTE ]
E1 has a programmatic process for locking rows - I fluster at the data base process when the request is a Select (Read Only). If it was a Read or lock between multiple Updates, that could be a different situation and should be handled/considered by/for the Programmatic process.

[/ QUOTE ]

This. The decision to read inconsistent data should be made at the query level.


 