NOLOCK has a reputation. Several reputations in fact. To the database snob, NOLOCK is a vestigial practice of the untrained masses, a Pavlovian reflex borne [yes, borne] of fossilized ignorance and apathy. Millions of man-hours of coding have gone into building robust ACID databases that guarantee consistency… to be casually thrown away by peons who can’t be bothered to actually fix their design.
For others, NOLOCK is the two-click salvation when the system is down, the company is losing thousands of dollars a minute, and the full weight of the corporate totem pole is pressing down. Incorrect results? Sure, in theory. Never seen one. NOLOCK works, so why not keep using it?
What about another reputation? That NOLOCK is the “go faster” hint? This one can be tested, and I love testing! There are three main ways it might work:
1) because it doesn’t get blocked
2) because it doesn’t have the overhead of taking locks
3) because it can use allocation order scans
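For clarity, NOLOCK is just the per-table spelling of the READUNCOMMITTED hint; you get the same behavior session-wide from the isolation level (dbo.SomeTable here is a placeholder):
--these two reads take the same (non-)locks
SELECT * FROM dbo.SomeTable WITH (NOLOCK)

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM dbo.SomeTable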
Using NOLOCK to get around blocking is…I’m trying to think of a word here…dumb. I’m not talking about running an ad-hoc query on a busy production system where exact results don’t matter. I’m talking about adding the hint to EVERYTHING as a matter of practice. If your queries are slow because of blocking, you need to fix how much data they’re reading, often through indexes, or move to Read Committed Snapshot Isolation (RCSI). But others have written more on this.
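If blocking really is the pain, RCSI is one ALTER DATABASE away (YourDatabase is a placeholder, and check the tempdb version-store impact before flipping this in production):
--readers stop blocking writers database-wide; kicks out other sessions to take effect
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE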
Regarding allocation order scans, they help the most on severely fragmented tables – the ones with a GUID as the clustering key, for example. They also seem to help with physical reads. In fact, if you search for “NOLOCK performance testing,” the top result shows the hint being faster, but the speedup in that test comes from the different scan type. But when your data is in memory and not especially fragmented (you know, the way you’d expect when you’re not being bad at databases), the allocation scan doesn’t help as much.
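If you’re curious whether a table is fragmented enough for an allocation order scan to matter, the physical stats DMV will tell you (shown against the narrow table I create below):
--LIMITED mode is cheap and still reports fragmentation
SELECT index_id, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.narrow'), NULL, NULL, 'LIMITED')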
So what about the second scenario? Ignoring blocking, ignoring allocation scans, does the locking overhead make a difference? Let’s find out!
Here’s my setup – four tables covering the combinations: wide and narrow rows, with and without a MAX column.
--narrow
CREATE TABLE dbo.narrow (
ID INT PRIMARY KEY,
junk CHAR(1))
--wide table, no MAX
CREATE TABLE dbo.wide (
ID INT PRIMARY KEY,
junk CHAR(7000))
--narrow, MAX
CREATE TABLE dbo.narrow_max (
ID INT PRIMARY KEY,
junk CHAR(1),
bigjunk VARCHAR(MAX))
--wide, MAX
CREATE TABLE dbo.wide_max (
ID INT PRIMARY KEY,
junk CHAR(7000),
bigjunk VARCHAR(MAX))
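--load 1,000,000 rows into each table; CROSS JOINing spt_values just generates enough rows,
--and ORDER BY a constant hands out row numbers without caring about order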
INSERT dbo.narrow
SELECT TOP 1000000 ROW_NUMBER() OVER(ORDER BY(SELECT 'Obbish')), 'a'
FROM master..spt_values a
CROSS JOIN master..spt_values b
INSERT dbo.wide
SELECT TOP 1000000 ROW_NUMBER() OVER(ORDER BY(SELECT 'Obbish')), 'a'
FROM master..spt_values a
CROSS JOIN master..spt_values b
INSERT dbo.narrow_max
SELECT TOP 1000000 ROW_NUMBER() OVER(ORDER BY(SELECT 'Obbish')), 'a', 'a'
FROM master..spt_values a
CROSS JOIN master..spt_values b
INSERT dbo.wide_max
SELECT TOP 1000000 ROW_NUMBER() OVER(ORDER BY(SELECT 'Obbish')), 'a','a'
FROM master..spt_values a
CROSS JOIN master..spt_values b
I then run the following SELECT statements that scan the tables. The WHERE clause makes sure no rows get returned and sent over the network. I use a warm cache because that’s a bit more realistic for most environments.
SELECT *
FROM dbo.narrow
WHERE junk = 'b'
ORDER BY ID
OPTION(MAXDOP 1)
SELECT *
FROM dbo.narrow WITH(NOLOCK)
WHERE junk = 'b'
ORDER BY ID
OPTION(MAXDOP 1)
SELECT *
FROM dbo.wide
WHERE junk = 'b'
ORDER BY ID
OPTION(MAXDOP 1)
SELECT *
FROM dbo.wide WITH(NOLOCK)
WHERE junk = 'b'
ORDER BY ID
OPTION(MAXDOP 1)
SELECT *
FROM dbo.narrow_max
WHERE junk = 'b'
ORDER BY ID
OPTION(MAXDOP 1)
SELECT *
FROM dbo.narrow_max WITH(NOLOCK)
WHERE junk = 'b'
ORDER BY ID
OPTION(MAXDOP 1)
SELECT *
FROM dbo.wide_max
WHERE junk = 'b'
ORDER BY ID
OPTION(MAXDOP 1)
SELECT *
FROM dbo.wide_max WITH(NOLOCK)
WHERE junk = 'b'
ORDER BY ID
OPTION(MAXDOP 1)
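If you want to reproduce the timings, SET STATISTICS TIME is enough (a sketch using the narrow table; run each query once untimed first so the cache is warm):
SET STATISTICS TIME ON
SELECT *
FROM dbo.narrow
WHERE junk = 'b'
ORDER BY ID
OPTION(MAXDOP 1)
SET STATISTICS TIME OFF
--CPU and elapsed time land in the Messages tab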
Average scan speeds (in ms) are as follows:
Table | Normal | NOLOCK |
---|---|---|
narrow | 51 | 53 |
wide | 794 | 525 |
narrow_max | 53 | 52 |
wide_max | 1433 | 1124 |
So yes, it looks like NOLOCK is faster, but only for scans on wider tables. Seriously though, if your system runs a lot of scans on really wide tables, you might want to reconsider whether speeding them up is your real priority.
What about an OLTP-style test, with a lot of single-row seeks? I used SQL Query Stresser with 40 threads running 10-second WHILE loops against the procs below (a sketch of the per-thread loop follows the procs).
CREATE OR ALTER PROC dbo.seek_narrow
AS
BEGIN
SELECT ID
FROM dbo.narrow
WHERE ID = CONVERT(INT,RAND()*1000000)
END
GO
CREATE OR ALTER PROC dbo.seek_narrow_NOLOCK
AS
BEGIN
SELECT ID
FROM dbo.narrow WITH (NOLOCK)
WHERE ID = CONVERT(INT,RAND()*1000000)
END
GO
CREATE OR ALTER PROC dbo.seek_wide
AS
BEGIN
SELECT ID
FROM dbo.wide
WHERE ID = CONVERT(INT,RAND()*1000000)
END
GO
CREATE OR ALTER PROC dbo.seek_wide_NOLOCK
AS
BEGIN
SELECT ID
FROM dbo.wide WITH (NOLOCK)
WHERE ID = CONVERT(INT,RAND()*1000000)
END
GO
CREATE OR ALTER PROC dbo.seek_narrow_max
AS
BEGIN
SELECT ID
FROM dbo.narrow_max
WHERE ID = CONVERT(INT,RAND()*1000000)
END
GO
CREATE OR ALTER PROC dbo.seek_narrow_max_NOLOCK
AS
BEGIN
SELECT ID
FROM dbo.narrow_max WITH (NOLOCK)
WHERE ID = CONVERT(INT,RAND()*1000000)
END
GO
CREATE OR ALTER PROC dbo.seek_wide_max
AS
BEGIN
SELECT ID
FROM dbo.wide_max
WHERE ID = CONVERT(INT,RAND()*1000000)
END
GO
CREATE OR ALTER PROC dbo.seek_wide_max_NOLOCK
AS
BEGIN
SELECT ID
FROM dbo.wide_max WITH (NOLOCK)
WHERE ID = CONVERT(INT,RAND()*1000000)
END
GO
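For reference, each stresser thread runs a loop along these lines (a sketch; the exact batch SQL Query Stresser generates may differ):
--each of the 40 threads hammers one proc for roughly 10 seconds
DECLARE @stop DATETIME2 = DATEADD(SECOND, 10, SYSDATETIME())
WHILE SYSDATETIME() < @stop
	EXEC dbo.seek_narrow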
And here’s what I used to grab execution stats:
SELECT OBJECT_NAME(object_id) AS [procedure],
       execution_count,
       total_worker_time
FROM sys.dm_exec_procedure_stats
WHERE object_id IN (OBJECT_ID('dbo.seek_narrow'), OBJECT_ID('dbo.seek_narrow_NOLOCK'),
    OBJECT_ID('dbo.seek_wide'), OBJECT_ID('dbo.seek_wide_NOLOCK'),
    OBJECT_ID('dbo.seek_narrow_max'), OBJECT_ID('dbo.seek_narrow_max_NOLOCK'),
    OBJECT_ID('dbo.seek_wide_max'), OBJECT_ID('dbo.seek_wide_max_NOLOCK'))
ORDER BY [procedure]
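One caveat if you repeat the runs: these numbers accumulate for as long as the plans stay cached, so reset between runs (fine on a test box, terrible in production):
--evicts every cached plan, which zeroes out the procedure stats
DBCC FREEPROCCACHE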
And here are my results, measured in executions, averaged over three runs.
Table | Normal | NOLOCK |
---|---|---|
narrow | 11,059,073 | 10,530,684 |
wide | 10,770,979 | 10,838,683 |
narrow_max | 10,821,620 | 10,533,960 |
wide_max | 10,443,677 | 10,035,070 |
The runs had enough variance that I’m not confident calling any one method a clear loser, but it should be obvious that NOLOCK is no help here.
And overall I think NOLOCK is a loser. Sure, it’s faster for scans on wide tables, but maybe you should fix the scans and the wide tables instead of slapping on a band-aid. You know, you could use the TABLOCK hint for the same effect without risking correctness.
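For the record, the swap is trivial (mirroring the wide-table scan from earlier). TABLOCK takes a shared table lock, so writers will wait, but readers get committed data:
SELECT *
FROM dbo.wide WITH (TABLOCK)
WHERE junk = 'b'
ORDER BY ID
OPTION(MAXDOP 1)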
There’s more I could test, and probably will. It turns out databases are complicated, or something. NOLOCK performance has long interested me – my only (as of this moment) dba.stackexchange question is about a case where NOLOCK is slower. In the end, though, I admit that I’m a database snob. Stop using NOLOCK, and fix your design!
I’d be interested to see how this works in earlier versions of SQL Server. Back in the early 2000s, when I was still a developer, my DBAs were adamant that NOLOCK be used in all SELECT queries for performance. I remember doing some testing of my own and seeing a marked difference. Whatever I documented has been gone for at least 15 years, so I can’t say just how marked. It would be interesting to see what happens across versions, especially after hearing how many people are still on 2005 and 2008 R2.