The question:
As I understand it, when you define a DateTime2 column on a table, you define its precision. That precision is stored at the column level and takes 1 byte. If you use a precision of 5 or more, a DateTime2 column will take 8 bytes per row. (The precision is not stored at the row level.)
But when you convert that same DateTime2 to VarBinary, it takes 9 bytes, because the converted value has to carry the precision byte that is otherwise stored at the column level.
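For what it's worth, here is a minimal sketch that reproduces those sizes (the variable name @dt2 is just for the example):

DECLARE @dt2 datetime2(7) = SYSDATETIME();

SELECT DATALENGTH(@dt2) AS datetime2_bytes,                        -- 8: the value alone at precision 5-7
       DATALENGTH(CAST(@dt2 AS varbinary(9))) AS varbinary_bytes;  -- 9: 1 precision byte + 8 data bytes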
I am curious how this relates to when a DateTime2 is stored in memory. Say I have 1,000,000 DateTime2s in memory (each with a precision of 5 or more). Will that take up 8,000,000 bytes of memory, or 9,000,000 bytes of memory?
Basically, I would like to know: will a default-precision DateTime2 cause more pressure on Page Life Expectancy than a plain DateTime?
The Solutions:
Below are the methods you can try. The first solution is usually the best; try the others if it doesn't fit your case. Senior developers don't just copy and paste: they read each method carefully and apply it wisely to the case at hand.
Method 1
I am curious how this relates to when a DateTime2 is stored in memory.
In SQL Server, data on disk is identical to data in memory.* Data pages are copied from disk to memory and back as-is; transforming the data on every read or flush would be expensive.
Say I have 1,000,000 DateTime2s in memory (each with a precision of 5 or more). Will that take up 8,000,000 bytes of memory, or 9,000,000 bytes of memory?
8,000,000 bytes, since how a page is stored in memory is completely unrelated to converting a DateTime2 to varbinary.
* Exceptions: In-Memory OLTP tables are substantially different in memory and on disk. And with Transparent Data Encryption, pages are decrypted as they are read into memory and encrypted as they are flushed to disk.
And as documented for datetime2:
Storage size¹:
6 bytes for precision less than 3.
7 bytes for precision 3 or 4.
All other precisions require 8 bytes.²

¹ Provided values are for uncompressed rowstore. Use of data compression or columnstore may alter the storage size for each precision. Additionally, storage size on disk and in memory may differ. For example, datetime2 values always require 8 bytes in memory when batch mode is used.

² When a datetime2 value is cast to a varbinary value, an additional byte is added to the varbinary value to store precision.
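A quick way to confirm the documented per-precision sizes (a sketch; the variable names are illustrative):

DECLARE @p0 datetime2(0) = SYSDATETIME(),
        @p3 datetime2(3) = SYSDATETIME(),
        @p7 datetime2(7) = SYSDATETIME();

SELECT DATALENGTH(@p0) AS precision0_bytes,  -- 6 (precision less than 3)
       DATALENGTH(@p3) AS precision3_bytes,  -- 7 (precision 3 or 4)
       DATALENGTH(@p7) AS precision7_bytes;  -- 8 (precision 5 to 7)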
Method 2
A default precision DATETIME2 will not cause more pressure on PLE compared to DATETIME. The buffer pool consists of 8-KB pages, and the page count is what matters, not the internal storage workings of each page. It isn't really correct to say that 1 million column values will take 8 million or 9 million bytes. Quoting from the documentation (a sketch for checking page counts and PLE on your own instance follows the quote):
Buffer
In SQL Server, a buffer is an 8-KB page in memory, the same size as a data or index page. Thus, the buffer cache is divided into 8-KB pages. A page remains in the buffer cache until the buffer manager needs the buffer area to read in more data. Data is written back to disk only if it is modified. These in-memory modified pages are known as dirty pages. A page is clean when it is equivalent to its database image on disk. Data in the buffer cache can be modified multiple times before being written back to disk.

Buffer pool
Also called buffer cache. The buffer pool is a global resource shared by all databases for their cached data pages. The maximum and minimum size of the buffer pool cache is determined during startup or when the instance of SQL Server is dynamically reconfigured by using sp_configure. This size determines the maximum number of pages that can be cached in the buffer pool at any time in the running instance.
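To see this for yourself, a rough sketch along these lines reads the current PLE and counts how many 8-KB pages a given table holds in the buffer pool. The table name dbo.YourTable is a placeholder, and both DMVs require VIEW SERVER STATE permission:

-- Current Page Life Expectancy, in seconds (instance-level Buffer Manager counter).
SELECT counter_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE N'Page life expectancy%'
  AND object_name LIKE N'%Buffer Manager%';

-- Number of 8-KB pages a given table currently has cached in the buffer pool.
SELECT COUNT(*) AS cached_pages,
       COUNT(*) * 8 AS cached_kb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
  ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
  ON au.container_id = p.hobt_id
 AND au.type IN (1, 3)  -- in-row and row-overflow data
WHERE bd.database_id = DB_ID()
  AND p.object_id = OBJECT_ID(N'dbo.YourTable');

Running the page count against two otherwise identical tables, one with a DATETIME column and one with a DATETIME2 column, shows whether the type choice changes the number of cached pages at all.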
All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, or CC BY-SA 4.0.