How are SQL Server Log Sequence Numbers (LSNs) generated?

If you’ve ever dug down into the SQL Server transaction log or had to build up restore chains, then you’ll have come across Log Sequence Numbers (LSNs). Ever wondered why they’re so large, why they all look suspiciously similar, why they don’t start from 0, and just how SQL Server generates them? Well, here we’re going to take a look.

Below we’ll go through examples of how to look inside the current transaction log and backed-up transaction logs. This will involve using some DBCC commands and the undocumented fn_dblog and fn_dump_dblog functions. The last two are very handy for digging into SQL Server internals, but be wary about running them on a production system without understanding what’s going on: they can leave file handles and processes behind that can impact your system.

So, with that warning out of the way, let’s push on.

Let’s start from the beginning. We’ll be using the DBCC LOG command here to peek into the transaction log. First off, let’s create 2 new databases to play with. If you’re following along, you need to run this block in one go. It’s not that it won’t run otherwise, but comparing values gets trickier, as other instance-level operations may skew the results from each database, making it harder to compare the LSNs:

DBCC LOG (model)
CREATE DATABASE LsnPost1
DBCC LOG (LsnPost1)
CREATE DATABASE LsnPost2
DBCC LOG (LsnPost2)

Let’s have a look at the results, starting with model:

Current LSN            | Operation      | Context            | Transaction ID | LogBlockGeneration
00000022:00000044:0001 | LOP_BEGIN_CKPT | LCX_NULL           | 0000:00000000  | 0
00000022:00000045:0001 | LOP_XACT_CKPT  | LCX_BOOT_PAGE_CKPT | 0000:00000000  | 0
00000022:00000046:0001 | LOP_END_CKPT   | LCX_NULL           | 0000:00000000  | 0

Notice that Current LSN isn’t a nice numerical value here. We’ll come back to that later on. Now we’ll look at the output for the next 2 databases. The first thing to notice is that they both return the same ‘amount’ of info (70 rows on my 2016 SQL Express instance). We don’t need all of that information just now, so we’ll just look at the first 8 rows for each:

LsnPost1

Current LSN            | Operation          | Context           | Transaction ID | LogBlockGeneration
00000023:00000018:0001 | LOP_BEGIN_XACT     | LCX_NULL          | 0000:00000410  | 0
00000023:00000018:0002 | LOP_MODIFY_COLUMNS | LCX_CLUSTERED     | 0000:00000410  | 0
00000023:00000019:0001 | LOP_MODIFY_COLUMNS | LCX_CLUSTERED     | 0000:00000410  | 0
00000023:0000001a:0001 | LOP_MODIFY_ROW     | LCX_BOOT_PAGE     | 0000:00000410  | 0
00000023:0000001a:0002 | LOP_MODIFY_ROW     | LCX_BOOT_PAGE     | 0000:00000410  | 0
00000023:0000001b:0001 | LOP_DELETE_ROWS    | LCX_MARK_AS_GHOST | 0000:00000410  | 0
00000023:0000001b:0002 | LOP_MODIFY_HEADER  | LCX_PFS           | 0000:00000000  | 0
00000023:0000001b:0003 | LOP_SET_BITS       | LCX_PFS           | 0000:00000000  | 0

LsnPost2

Current LSN            | Operation          | Context           | Transaction ID | LogBlockGeneration
00000023:00000018:0001 | LOP_BEGIN_XACT     | LCX_NULL          | 0000:00000410  | 0
00000023:00000019:0001 | LOP_MODIFY_COLUMNS | LCX_CLUSTERED     | 0000:00000410  | 0
00000023:00000019:0002 | LOP_MODIFY_COLUMNS | LCX_CLUSTERED     | 0000:00000410  | 0
00000023:0000001b:0001 | LOP_MODIFY_ROW     | LCX_BOOT_PAGE     | 0000:00000410  | 0
00000023:0000001b:0002 | LOP_MODIFY_ROW     | LCX_BOOT_PAGE     | 0000:00000410  | 0
00000023:0000001c:0001 | LOP_DELETE_ROWS    | LCX_MARK_AS_GHOST | 0000:00000410  | 0
00000023:0000001c:0002 | LOP_MODIFY_HEADER  | LCX_PFS           | 0000:00000000  | 0
00000023:0000001c:0003 | LOP_SET_BITS       | LCX_PFS           | 0000:00000000  | 0

Those 2 pieces of output look VERY similar, and they should do, as they’re the standard operations SQL Server performs when it creates a new database as a clone of model.

This explains why LSNs will look similar across all databases created on the same instance: they are all seeded with much the same values at creation time. If the databases’ workloads are vaguely similar, then they’ll increase in a very roughly similar way. Note that if you move a database across instances it will continue with its LSN values, and not pick them up from the new instance.

So, back to that strange-looking Current LSN value. Starting from the left of the Current LSN, we have 00000022 for model and 00000023 for our test databases. The fact that they’re sequential is interesting, and using DBCC LOGINFO we can see that this is the File Sequence Number, just represented in hexadecimal (0x22 = 34, 0x23 = 35):

DBCC LOGINFO (model)
RecoveryUnitId | FileId | FileSize | StartOffset | FSeqNo | Status | Parity | CreateLSN
0              | 2      | 253952   | 8192        | 34     | 2      | 128    | 0
DBCC LOGINFO (LsnPost1)
RecoveryUnitId | FileId | FileSize | StartOffset | FSeqNo | Status | Parity | CreateLSN
0              | 2      | 2031616  | 8192        | 35     | 2      | 64     | 0
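To save reaching for a calculator, here’s a minimal Python sketch (purely an illustration, not part of SQL Server) that splits a Current LSN into its three colon-separated hex fragments and converts each to decimal, confirming that the first fragment matches the FSeqNo values above:

```python
def parse_lsn(lsn: str):
    """Split a colon-separated hex LSN (as shown by DBCC LOG / fn_dblog)
    into its three decimal parts: (file sequence number, block, slot)."""
    fseq, block, slot = (int(part, 16) for part in lsn.split(":"))
    return fseq, block, slot

print(parse_lsn("00000022:00000044:0001"))  # model    -> (34, 68, 1)
print(parse_lsn("00000023:00000018:0001"))  # LsnPost1 -> (35, 24, 1)
```

The first element of each tuple (34 and 35) matches the FSeqNo reported by DBCC LOGINFO.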

Now, this being computer related, it’s not quite that simple! Let’s take a look at a database that’s been around a while:

DBCC LOG (mscrm_config)
Current LSN            | Operation       | Context       | Transaction ID | LogBlockGeneration
00015e2f:000002b8:0002 | LOP_BEGIN_CKPT  | LCX_NULL      | 0000:00000000  | 0
00015e2f:000002b8:0003 | LOP_COUNT_DELTA | LCX_CLUSTERED | 0000:00000000  | 0
00015e2f:000002b8:0004 | LOP_COUNT_DELTA | LCX_CLUSTERED | 0000:00000000  | 0
00015e2f:000002b8:0005 | LOP_COUNT_DELTA | LCX_CLUSTERED | 0000:00000000  | 0
00015e2f:000002b8:0006 | LOP_COUNT_DELTA | LCX_CLUSTERED | 0000:00000000  | 0
DBCC LOGINFO (mscrm_config)
DBCC LOGINFO (mscrm_config)
RecoveryUnitId | FileId | FileSize | StartOffset | FSeqNo | Status | Parity | CreateLSN
0              | 2      | 3866624  | 156041216   | 89647  | 2      | 64     | 57548000000349600505

The Current LSN starts with 00015e2f, which if we treat as 0x00015e2f gives us 89647, the current File Sequence Number.

Now, to get the ‘official’ numeric versions of the LSN, we’ll just drop out a backup and read back the headers. I’m taking 2 log backups, as the first one on a new database will contain some of the overhead from the database’s creation:

ALTER DATABASE lsnpost1 SET RECOVERY FULL;
BACKUP DATABASE lsnpost1 TO DISK = 'c:\blogpost\lsn1.bak';
BACKUP LOG lsnpost1 TO DISK = 'c:\blogpost\lsn1.trn';
BACKUP LOG lsnpost1 TO DISK = 'c:\blogpost\lsn1_2.trn';
RESTORE HEADERONLY FROM DISK='c:\blogpost\lsn1_2.trn';
FirstLSN          | LastLSN           | CheckpointLSN     | DatabaseBackupLSN
35000000009400001 | 35000000011200001 | 35000000009600002 | 35000000005800065

And now the undocumented function fn_dump_dblog is going to come into play. This lets us peek at the contents of a log backup in the same way that fn_dblog lets us look into the live transaction log. We’re going to look for the 2 checkpoints that bookend the log backup. This is a little bit contrived, as during a real production backup you may have checkpoints occurring while the backup is running:

SELECT TOP 1 * FROM
fn_dump_dblog (
NULL, NULL, N'DISK', 1, N'c:\blogpost\lsn1_2.trn',
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT);

The abridged results should look somewhat like this:

Current LSN            | Operation      | Context
00000023:0000005e:0001 | LOP_BEGIN_XACT | LCX_NULL

So we know that 0x23 = 35, so that’s the first 2 digits of FirstLSN decoded.

The next fragment is 0000005e, another hexadecimal value, which is 94 in decimal. In FirstLSN that appears after a string of 0s. How many 0s though? Well, as it’s a hex value of 8 digits, the highest possible value is 0xffffffff, which is 4294967295 in decimal: 10 digits. Our value is 2 digits preceded by 8 0s: 10 digits.

And the final fragment is 0001. This is hex again, but this one I can do without a conversion utility: it’s 1 in decimal. This is padded with 4 0s (the largest hex value for that fragment is 0xffff, which is 65535 in decimal, 5 digits total).
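Putting those padding rules together, here’s a minimal Python sketch (again just an illustration) that rebuilds the numeric backup-header form of an LSN from the hex form: the block fragment is zero-padded to 10 digits and the slot fragment to 5:

```python
def lsn_hex_to_numeric(lsn: str) -> int:
    """Convert a 'ffffffff:ffffffff:ffff' style hex LSN to the numeric
    form seen in backup headers: file sequence number, then the block
    padded to 10 digits (max 0xffffffff = 4294967295), then the slot
    padded to 5 digits (max 0xffff = 65535)."""
    fseq, block, slot = (int(part, 16) for part in lsn.split(":"))
    return int(f"{fseq}{block:010d}{slot:05d}")

print(lsn_hex_to_numeric("00000023:0000005e:0001"))  # 35000000009400001
```

That output matches the FirstLSN we read back with RESTORE HEADERONLY.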

So that’s how we convert these values, but what do they mean? To understand the second fragment we need to know that SQL Server first writes all log records to its Log Buffer before flushing them to disk in writes made up of whole 512-byte blocks (https://blogs.msdn.microsoft.com/sql_pfe_blog/2013/03/28/observing-sql-server-transaction-log-flush-sizes-using-extended-events-and-process-monitor/). The Log Buffer is flushed to disk on one of these occasions:

A Transaction is committed

A transaction is committed once the record of its changes (the log) has been hardened to disk, so the Log Buffer must be flushed. This flushes the whole Log Buffer to disk, not just the blocks involved in the committing transaction.

The Log Buffer is full

The Log Buffer is 60KB in size (https://blogs.msdn.microsoft.com/sql_pfe_blog/2013/03/28/observing-sql-server-transaction-log-flush-sizes-using-extended-events-and-process-monitor/); once it’s full it has to be flushed to disk.

A checkpoint occurs

A checkpoint causes SQL Server to write all dirty pages and Log Buffer records to disk.

If you look at the full output of

SELECT * FROM
fn_dump_dblog (
NULL, NULL, N'DISK', 1, N'c:\blogpost\lsn1_2.trn',
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT);

You’ll see that the second fragment advances after operations such as LOP_COMMIT_XACT or LOP_BEGIN_CKPT. This is because the Log Buffer was flushed to disk by one of these actions. You’ll also notice that the fragment doesn’t increment nicely by 1. With my demo database I go from “00000023:0000005e” to “00000023:00000060” rather than incrementing by 1 to “00000023:0000005f”, and later on I jump from “00000023:00000060” to “00000023:0000006f”, an increment of 15.
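You can see those jumps by pulling the block fragment out of consecutive Current LSN values. Here’s a small Python sketch using the values from my trace above (the list of LSNs is just the example data from this post):

```python
# block-start LSNs observed in my trace (example data from this post)
lsns = [
    "00000023:0000005e:0001",
    "00000023:00000060:0001",
    "00000023:0000006f:0001",
]

# extract the middle (block) fragment of each LSN as a decimal number
blocks = [int(lsn.split(":")[1], 16) for lsn in lsns]

# differences between consecutive block numbers
jumps = [b - a for a, b in zip(blocks, blocks[1:])]
print(jumps)  # [2, 15]
```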

The reason for these jumps is the amount of data that is flushed from the Log Buffer. This can be calculated by adding up the Log Record Length column. So for my large jump above I can do:

SELECT SUM([Log Record Length]) FROM
fn_dump_dblog (
NULL, NULL, N'DISK', 1, N'c:\blogpost\lsn1_2.trn',
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT)
WHERE [Current LSN] LIKE '00000023:00000060%';

which gives me 7088. Divide that by 512 and I get 13.84; as SQL Server only writes whole blocks, that’s fourteen 512-byte blocks written out. So the jump from 0x60 to 0x6f (15) reflects the number of 512-byte blocks written out from the Log Buffer, in this case by the log backup finishing. So the biggest jump in this value you should see is 120 (0x78), as 120 * 512 bytes = 60KB.
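That arithmetic can be sketched in a few lines of Python (the 7088 figure is the sum from my trace above; the 60KB ceiling is the Log Buffer size):

```python
import math

# sum of [Log Record Length] for log block 0x60 in my trace
flushed_bytes = 7088

# SQL Server only writes whole 512-byte blocks, so round up
blocks_written = math.ceil(flushed_bytes / 512)
print(blocks_written)   # 14

# the Log Buffer is 60KB, so the largest possible block jump is:
max_block_jump = (60 * 1024) // 512
print(max_block_jump)   # 120
```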

The final fragment, then, is just a simple counter of the number of records in that ‘block’ of the Log Buffer.

This is one of the main reasons why LSNs are not a nice continuous number line you can do meaningful arithmetic on. You can go from 00000023:00000060:0002 (35000000009600002) to 00000023:00000061:0001 (35000000009700001), a jump of 99999 ‘LSNs’, with a single checkpoint call, so without analysing the actual work done in the log you can’t tell if those LSNs just ‘jumped’ or were actually consumed doing a large insert. So, as the documentation says, they’re only good for comparison purposes. All a larger LSN tells you is that the higher-numbered action occurred after the lower-numbered one. There is no meaning to the gap between them other than a count of the File Sequence and Log Buffer flushes between them.
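To make the ‘comparison only’ point concrete, here’s a final Python sketch: treat an LSN as an ordered tuple of its three fragments, which compares correctly, and ignore any numeric difference:

```python
def lsn_key(lsn: str):
    """Turn a hex LSN into a tuple of its fragments; tuples compare
    element by element, which is exactly the ordering LSNs need."""
    return tuple(int(part, 16) for part in lsn.split(":"))

before = "00000023:00000060:0002"   # numeric form 35000000009600002
after = "00000023:00000061:0001"    # numeric form 35000000009700001

# the ordering is meaningful...
print(lsn_key(before) < lsn_key(after))   # True
# ...but the numeric gap (99999 in the padded form) tells you nothing
# about how much work the log actually recorded between the two
```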
