Last week I wrote a quick intro to SQL Server's In-Memory OLTP engine. This week's post takes a slightly more in-depth look at our previous examples to show that the performance gains aren't quite as simple as they appear. We'll look at why, and then explore how to get the improvements we expect from in-memory database objects.
I have to admit, last week's example only told part of the story. In fact, it was run under an unrealistic scenario designed to exaggerate the performance improvements of memory-optimized objects. I only feel slightly bad, because the example was pulled from Microsoft's own demonstrations. A quick summary of the test: two identical tables were created, one a traditional disk-based table and the other a memory-optimized table. Scripts were set up to load 1 million records into each table. The disk-based load took 15,000 ms while the memory-optimized load took 6,000 ms. On the surface, this looks fantastic. But what happens if we run the scripts again?
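To make the setup concrete, here's a minimal sketch of what such a test might look like. The table names, schema, payload, and bucket count are my assumptions, not the original scripts:

```sql
-- Hypothetical recreation of the load test; names and schema are illustrative.
CREATE TABLE dbo.DiskTable (
    Id INT NOT NULL PRIMARY KEY,
    Payload CHAR(100) NOT NULL
);

CREATE TABLE dbo.MemoryTable (
    Id INT NOT NULL PRIMARY KEY
        NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload CHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Load 1 million rows and measure elapsed time (disk-based table shown;
-- the memory-optimized load is identical except for the table name).
DECLARE @i INT = 1, @start DATETIME2 = SYSDATETIME();
WHILE @i <= 1000000
BEGIN
    INSERT INTO dbo.DiskTable (Id, Payload) VALUES (@i, 'x');
    SET @i += 1;
END;
SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS DiskLoadMs;
```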
Wait a second… the disk-based load took… 6,976 ms. Almost as fast as our memory-optimized table load. Without going into too much detail: when SQL Server first encounters a query, it has to generate an efficient query plan to service the request. Once the plan is generated, it is stored in the plan cache for reuse the next time the query runs. This is a big performance boost for frequently executed queries. As we saw, SQL Server gave our disk-based load a huge boost the second time around by pulling its query plan from the cache. So what gives?
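You can see this plan reuse for yourself with the plan cache DMVs; on re-execution, the cached plan's use count climbs. A rough sketch (the `DiskTable` filter is an assumed name for illustration):

```sql
-- Cached plans whose text mentions our table; usecounts increments on reuse.
SELECT cp.usecounts, cp.cacheobjtype, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%DiskTable%';
```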
With In-Memory OLTP, Microsoft introduces Native Compilation: the process of converting programming constructs into machine code. When a memory-optimized table is created, its definition is loaded into the database metadata, the table and its indexes are created in memory, and the table is compiled into a DLL. A few tidbits about these DLLs: while associated with the database, they are not part of the database itself. They are stored on the file system, and SQL Server maintains a lock on them to prevent accidental modification. They are compiled at creation time or at database startup, and are removed when the server shuts down. They are also removed when their corresponding database objects are dropped or when the database itself is dropped. Because they can always be regenerated, these files are not included in backups and require no maintenance on the administrator's part.
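If you're curious, you can watch these DLLs get loaded into the SQL Server process via the `sys.dm_os_loaded_modules` DMV, which tags In-Memory OLTP modules with a distinctive description:

```sql
-- List the natively compiled DLLs currently loaded into the server process.
SELECT name, description
FROM sys.dm_os_loaded_modules
WHERE description = 'XTP Native DLL';
```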
More than one way to skin a cat.
As we saw, we can execute queries against memory-optimized tables in the same fashion as standard disk-based tables. The nice thing is that our existing queries continue to function properly while getting a minor speed boost. With the exception of a handful of restricted constructs, everything continues to work as it did before. Accessing memory-optimized tables in this fashion, whether through ad-hoc queries or standard stored procedures, is referred to as interpreted T-SQL. Interpreted T-SQL is only one way to access memory-optimized tables. The other way, as you can guess at this point, is natively compiled stored procedures.
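Interpreted access really is just business as usual; nothing in the syntax gives away that the table lives in memory. A trivial sketch (table name assumed from earlier):

```sql
-- Plain ad-hoc interpreted T-SQL against a memory-optimized table.
SELECT COUNT(*) AS RowTotal
FROM dbo.MemoryTable
WHERE Id BETWEEN 1 AND 100;
```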
Natively compiled stored procedures are compiled at creation time, as opposed to interpreted stored procedures, which are compiled into a query plan at first execution. By compiling the stored procedure down to machine code, SQL Server can access memory-optimized tables as fast as possible. Let's check out an example. Below, we've converted our interpreted T-SQL load into a natively compiled stored procedure.
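A minimal sketch of what that conversion might look like; the procedure and table names are assumptions, but the `NATIVE_COMPILATION`, `SCHEMABINDING`, and `BEGIN ATOMIC` options are required parts of the syntax:

```sql
-- Natively compiled version of the 1-million-row load (names illustrative).
CREATE PROCEDURE dbo.usp_LoadMemoryTable
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    DECLARE @i INT = 1;
    WHILE @i <= 1000000
    BEGIN
        INSERT INTO dbo.MemoryTable (Id, Payload) VALUES (@i, 'x');
        SET @i += 1;
    END;
END;
```

Note the `BEGIN ATOMIC` block: natively compiled procedures always run inside an atomic block, so the whole load commits or rolls back as a unit.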
And let's see how this performs compared to its interpreted counterpart.
Whew, 2,879 ms! That's more like it. Only a fraction of the time the interpreted T-SQL took. These are the performance increases we really want from in-memory objects. So, when should we use interpreted T-SQL and when should we use natively compiled stored procedures? Microsoft recommends natively compiled stored procedures for all critical workloads; it's simply the fastest way to access memory-optimized tables. On the other side, Microsoft recommends interpreted T-SQL for ad-hoc queries, administrative tasks, and any time an unsupported construct is absolutely required.
This was just a quick introduction to Native Compilation. Stay tuned for a more in-depth look into this new offering from Microsoft.