Exploring In-Memory OLTP
Contents
See the Impact of In-Memory Features
Faster Way to Process Data
Creating Memory-Optimized Data and Filegroups
Creating Memory-Optimized Tables and Indexes
Natively Compiled Stored Procedures to Access Memory-Optimized Data
Migrating to In-Memory OLTP
Analysis, Migrate and Report
Memory Usage by Memory-Optimized Object Report
Strategies to Gain Performance
Application-Level Partitioning
Terms of use
Estimated time to complete lab is 45 minutes
Connect to SQLONE computer
1. Click the SQLONE button on the right side of the screen to connect
to the SQLONE computer. If you see the following in the lower
right corner of the screen, you can jump to step 5 below to set
your screen resolution.
2. Click Send Ctrl-Alt-Del for SQLONE computer and then click
Switch user.
3. Click Send Ctrl-Alt-Del for SQLONE computer again and then
click Other user.
4. Log on to SQLONE computer as labuser with password
pass@word1
Note: if you have a monitor that supports a larger screen resolution than
1024 x 768, you can increase the screen resolution for the lab, up to
1920 x 1080. A higher screen resolution makes it easier to use SQL
Server Management Studio.
5. Right click on the desktop and click on Screen resolution.
6. Select 1366 x 768 (a good minimum screen size for using
SSMS) and click OK.
7. Click Keep Changes.
8. Resize the LaunchPad Online client window for the lab to
fit your screen resolution.
See the Impact of In-Memory Features
Load data into buffer
These steps should be run shortly before starting the experience to
warm up the applications and so get representative performance.
1. Run E1A_warmup from the SQLONE desktop. This will take 3-5
minutes to run.
2. Once completed, close the application.
3. Run E1B-HotList from the SQLONE desktop. This may take up
to a minute to open and two minutes to run if this report has
not been run recently.
4. To exit the application, click Close
First, you will want to see the current performance of the sales
processing system
1. From the Desktop, double-click on E1A-SalesProcessing which
opens the Performance Effects of OLTP In-Memory Features
program
2. Wait 5 seconds for the program to load.
3. Watch how the purchase rate unfolds over time. This is showing
current performance of the sales processing system without any
of the new features.
Next, you will implement in-memory tables. This feature stores the
database tables in memory, which is faster to read and write, rather
than on disk, the traditional but slower place to store tables.
4. In the Performance Effects of OLTP In-Memory Features
program, click Implement In-Memory Tables
This runs a SQL script that alters the purchase stored procedure to use
pre-existing in-memory tables
5. Wait while the in-memory table feature is implemented (5
seconds) then watch how the performance unfolds using in-
memory tables.
NOTE: performance will vary over time depending on the amount of
memory used by other processes, so you may have to wait some seconds
to see a performance increase.
Finally, you will implement native compilation of stored procedures. This
feature allows the computer to pre-compile a stored procedure (in
your case, the piece of SQL which is run every time a customer makes a
purchase) so it is already in its most efficient form before it actually
has to run. Native compilation can markedly improve performance,
particularly for procedures which are run often, with complicated logic
but few whole-table scans.
6. In the Performance Effects of OLTP In-Memory Features
program, click Implement Native Compilation
Again, this runs a SQL script that alters the purchase stored procedure
to be natively compiled.
7. Wait while the program loads the new feature (5 seconds) then
watch how performance unfolds over time with native
compilation and in-memory tables implemented.
NOTE: as with the previous feature implementation, performance will
vary depending on the amount of memory used by other processes, so
you may need to wait some time to see the average performance.
8. To exit the application, click Close
You have now seen the performance benefits of the In-Memory OLTP
features on a sales processing workload.
Establishing your In-Memory OLTP Solution by Creating
Filegroups, Tables and Native Procedures
Today, businesses are demanding smarter innovations, faster business
processes and lightning-fast query performance for their reports. With
memory becoming much cheaper, organizations can afford to add
large amounts of RAM to their servers (up to 4 TB for 64-bit processors).
One of the most impressive new features of SQL Server 2014 is the in-
memory online transaction processing (OLTP) engine, a high-
performance database technology that can revolutionize business
processes.
Using In-Memory OLTP you can create a table that is stored in main
memory (RAM) and available to your OLTP workload for faster data
access. This type of table is called a memory-optimized table. The
most significant difference between memory-optimized tables and
disk-based tables is that pages do not need to be read into cache from
disk when a memory-optimized table is accessed; the data lives in
memory, while the metadata about memory-optimized tables is stored
in the SQL Server catalog. Since pages are not involved for storage
purposes, there is no page latching, allowing lock- and latch-free
processing. Using main memory also results in performance
improvement, in some instances up to 20 times.
Faster Way to Process Data
The hands-on-lab system has SQL Server 2014 installed on the new
server with a suitable configuration. With this server, you need to
create a new Contoso database to hold the new in-memory tables. Any
database that will contain memory-optimized tables needs to have at
least one MEMORY_OPTIMIZED_DATA filegroup. These filegroups are
used for storing the data and delta file pairs needed by SQL Server to
recover the memory-optimized tables. You can create only one
memory-optimized filegroup per database, and you need to explicitly
mark the filegroup as containing MEMORY_OPTIMIZED_DATA. You will
now create the in-memory database.
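The essential DDL that the lab scripts perform can be sketched as follows. This is a minimal sketch, not the lab's own script: the file paths and logical file names here are illustrative, and the E2E-1 scripts remain authoritative.

```sql
-- Sketch: create a database with a MEMORY_OPTIMIZED_DATA filegroup.
-- Names and paths are illustrative.
CREATE DATABASE ContosoInMemory
ON PRIMARY (NAME = ContosoInMemory_data,
            FILENAME = 'C:\data\ContosoInMemory_data.mdf'),
FILEGROUP ContosoInMemory_mod CONTAINS MEMORY_OPTIMIZED_DATA
           (NAME = ContosoInMemory_mod,
            FILENAME = 'C:\data\ContosoInMemory_mod')  -- a directory, not a file
LOG ON (NAME = ContosoInMemory_log,
        FILENAME = 'C:\data\ContosoInMemory_log.ldf');
GO

-- Or add the filegroup and its container to an existing database:
ALTER DATABASE ContosoRetailDWInMemory
    ADD FILEGROUP ContosoRetailDWInMemory_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE ContosoRetailDWInMemory
    ADD FILE (NAME = 'ContosoRetailDWInMemory_mod',
              FILENAME = 'C:\data\ContosoRetailDWInMemory_mod')
    TO FILEGROUP ContosoRetailDWInMemory_mod;
GO
```

Note that the memory-optimized container's FILENAME points to a folder, not a file, because SQL Server stores the data and delta file pairs inside it.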
Creating a database with in-memory filegroups:
1. Open SQL Server 2014 Management Studio from the
Windows Start screen, enter Database Engine as the server
type and SQLONE as the Server name, and ensure Windows
Authentication is selected before clicking Connect.
2. From the File menu, select Open and File…
3. Browse to open C:\SQLSCRIPTS\E2\E2E-1#1 Create Database
with in-memory filegroup.sql file to create an in-memory
database with filegroup
4. Click Open
Creating Memory-Optimized Data and Filegroups
It is always important to put the log files on a drive that has low latency,
so that transactions do not need to wait too long, and to prevent
contention on log IO. You need to ensure that, when running In-Memory
OLTP, your log IO device does not become a bottleneck. It is
recommended to use a storage device with low latency, at least SSD. For
this experience we do not have SSD on SQLONE, but users can run this
scenario on a machine with SSD and SQL Server 2014 installed and
achieve even better performance by keeping the logs and filegroups on
the SSD drive.
5. Click on Execute
Note the “Command(s) completed successfully.” message appears in the
Messages tab
6. In the Object Explorer pane, right-click on Databases and
select Refresh
Notice the new database we created, called ContosoInMemory (you
may have to expand the Databases node to see it)
7. Right click on the ContosoInMemory database and select
Properties
8. Click on Filegroups section
Verify that the newly created filegroup “ContosoInMemory_mod”
appears in the MEMORY OPTIMIZED DATA section
9. Click on OK to close the properties dialog box
Altering an existing database to add an in-memory filegroup:
1. In the SQL Server 2014 Management Studio, in File menu,
select Open and File…
2. Browse to open C:\SQLSCRIPTS\E2\E2E-1#2-Create a
database to alter.sql file
3. Click on Execute
NOTE: This script may take approximately 3 minutes to execute
4. In the Object Explorer pane, right click on Databases and
select Refresh
A new database with the name ContosoRetailDWInMemory should
now appear in the list of databases in the Object Explorer pane
To alter this database (or any of your existing databases) to add an in-
memory filegroup, follow the steps below
5. In SQL Server 2014 Management Studio, open the File
menu, select Open and File…
6. Browse to open C:\SQLSCRIPTS\E2\E2E-1#3-Alter existing
database to add in-memory filegroup.sql file
7. Click Open
This script adds an in-memory filegroup to the
ContosoRetailDWInMemory database
8. Click on Execute
NOTE: The “Command(s) completed successfully.” message appears in
the Messages tab
9. Right click on Databases and select Refresh
10. In the Object Explorer pane expand the Databases node
11. Right click on ContosoRetailDWInMemory database and
select Properties
12. Click on the Filegroups tab
Notice the newly created filegroup ContosoRetailDWInMemory_mod
appears in the MEMORY OPTIMIZED DATA section
13. Click on OK to close the properties dialog box
Having successfully run the script, the environment is now
ready for creating memory-optimized tables.
You will now create the memory-optimized tables and indexes to store
the data in the optimal way. The syntax for creating memory-
optimized tables is almost identical to the syntax for creating disk-
based tables, with a few restrictions as well as a few required
extensions. Specifying that the table is a memory-optimized table is
done using the MEMORY_OPTIMIZED = ON clause.
A memory-optimized table can be defined with one of two
DURABILITY values: SCHEMA_AND_DATA or SCHEMA_ONLY, with the
former being the default. Defining a memory-optimized table with
DURABILITY = SCHEMA_ONLY means that changes to the table's data
are not logged and the data in the table is not persisted on disk.
However, the schema is persisted as part of the database metadata, so
an empty table will be available after the database is recovered during
a SQL Server restart
Creating Memory-Optimized Tables and Indexes
To create the indexes while creating the tables, you make the
PRIMARY KEY index a NONCLUSTERED HASH index, for which you
specify a bucket count, alongside any other NONCLUSTERED indexes.
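The clauses described above can be sketched in a single CREATE TABLE statement. This is an illustrative example, not the lab's E2E-2 script; the table and column names are assumptions.

```sql
-- Sketch: a memory-optimized table with a hash primary key and a range index.
-- Table and column names are illustrative.
CREATE TABLE dbo.Product_inmem
(
    ProductKey   INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    ProductName  NVARCHAR(100) NOT NULL,
    ListPrice    MONEY NOT NULL,
    INDEX ix_ListPrice NONCLUSTERED (ListPrice)   -- range (nonclustered) index
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
```

All indexes on a memory-optimized table must be declared inline in the CREATE TABLE statement; they cannot be added later with CREATE INDEX in SQL Server 2014.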
Creating and inserting data into memory optimized tables:
1. From the File menu, select Open and File…
2. Browse to open C:\SQLSCRIPTS\E2\E2E-2#1 Create memory-
optimized tables.sql file
3. Click on Execute
NOTE: “Command(s) completed successfully.” message appears in the
Messages tab
You have now created one disk-based table and a few memory-
optimized tables in the ContosoInMemory database, which you will use
to test their performance in the next scenario.
Inserting data into one of the memory-optimized tables:
1. From the File menu, select Open and File…
2. Browse to open C:\SQLSCRIPTS\E2\E2E-2#2-Inserting data
into memory-optimized table.sql file
3. Click on Execute
NOTE: This query may take approximately 2 minutes. Data inserted into
the table is shown in the Results tab.
Having successfully created the memory-optimized tables and loaded
data, you can now be confident of accessing the data efficiently for
reporting requirements and other Business Intelligence applications,
with better performance than the existing database.
Natively Compiled Stored Procedures to Access Memory-Optimized Data
With the database and tables now in place, you are ready to focus on
creating native compiled stored procedures to access data from
memory optimized tables.
Native compilation refers to the process of converting programming
constructs to native C code, consisting of processor instructions that
can be executed directly by the CPU, without the need for further
compilation or interpretation.
In-Memory OLTP introduces the concept of native compilation to SQL
Server 2014. SQL Server can natively compile stored procedures that
access memory-optimized tables and, in fact, also natively compiles
memory-optimized tables themselves. The In-Memory OLTP compiler
in SQL Server 2014 takes the table and stored procedure definitions as
input, generates C code, and leverages the Visual C compiler to
generate the native code
Native compilation allows faster data access and more efficient query
execution than traditional interpreted (disk-based) Transact-SQL
provides. The result of compiling tables and stored procedures is a set
of DLLs that are loaded into memory and linked into the SQL Server
process
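The options required for a natively compiled procedure can be sketched as below. The procedure and table names are illustrative assumptions, not the lab's E2E-3 script.

```sql
-- Sketch: the options every natively compiled procedure must declare.
-- Procedure and table names are illustrative.
CREATE PROCEDURE dbo.usp_InsertProduct_native
    @ProductKey INT, @ProductName NVARCHAR(100), @ListPrice MONEY
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
    -- The atomic block also fixes the isolation level and language.
    WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'english')
    INSERT INTO dbo.Product_inmem (ProductKey, ProductName, ListPrice)
    VALUES (@ProductKey, @ProductName, @ListPrice);
END;
GO
```

NATIVE_COMPILATION, SCHEMABINDING, an EXECUTE AS clause and a BEGIN ATOMIC block are all mandatory for natively compiled procedures in SQL Server 2014, which is why the lab adds exactly these clauses when converting a procedure later on.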
Creating natively compiled and interpreted stored procedures:
1. From the File menu, select Open and File…
2. Browse to C:\SQLSCRIPTS\E2\E2E-3#1-Creating native
compiled and interpreted stored procedures.sql file
3. Click Open
4. Click on Execute
NOTE: The “Command(s) completed successfully.” message appears in
the Messages tab
You have now created natively compiled and interpreted stored
procedures to run against your in-memory and disk-based tables, which
you will use to test performance
Testing performance of disk-based table using interpreted T-SQL
1. In the Object Explorer, expand the Databases node and select
ContosoInMemory
This will mean the next script is run against this database
2. From the File menu, select Open and File…
3. Browse to open C:\SQLSCRIPTS\E2\E2E-3#2-Performance
test 1.sql file
NOTE: Make sure the script is run against ContosoInMemory database
(the dropdown box next to the Execute button reads ContosoInMemory.
If it does not, click the drop-down arrow on this box and select
ContosoInMemory.)
4. Click on Execute
NOTE: Record the output of the execution to compare it against the
executions that follow (e.g., disk-based table and interpreted Transact-
SQL: approximately 50000 ms). This script inserts 100000 records into a
disk-based product table
Testing performance of disk-based table using interpreted stored
procedure
1. From the File menu, select Open and File…
2. Browse to open C:\SQLSCRIPTS\E2\E2E-3#2-Performance
test 2.sql file
NOTE: Make sure the script is run against ContosoInMemory database
(check the dropdown box next to the Execute button still reads
ContosoInMemory.)
3. Click on Execute
NOTE: This query might take approximately 9 minutes to finish
executing
Compare the output of this execution against the other executions (e.g.,
disk-based table and interpreted stored procedure: approximately
565400 ms). This script inserts 100000 records into a disk-based product
table.
Testing performance of memory-optimized table using
interpreted T-SQL
1. From the File menu, select Open and File…
2. Browse to open C:\SQLSCRIPTS\E2\E2E-3#3-Performance
test 3.sql file
NOTE: Make sure the script is run against ContosoInMemory database
(check the dropdown box next to the Execute button still reads
ContosoInMemory.)
3. Click on Execute
Note the output of the execution to compare it against the other
execution times recorded (e.g., memory-optimized table with hash index
and interpreted Transact-SQL: approximately 4500 ms). This script
inserts 100000 records into a memory-optimized table
Testing performance of memory-optimized table using
interpreted stored procedure
1. From the File menu, select Open and File…
2. Browse to open C:\SQLSCRIPTS\E2\E2E-3#4-Performance
test 4.sql file
NOTE: Make sure the script is run against ContosoInMemory database
(check the dropdown box next to the Execute button still reads
ContosoInMemory.)
3. Click on Execute
NOTE: This query might take a few minutes to finish executing
Note the output of the execution to compare it against the other
execution times recorded (e.g., memory-optimized table with hash index
and interpreted stored procedure: approximately 179000 ms). This script
inserts 100000 records into a memory-optimized table.
Test performance of memory-optimized table using native
compiled stored procedure
1. From the File menu, select Open and File…
2. Browse to open C:\SQLSCRIPTS\E2\E2E-3#5-Performance
test 5.sql file
NOTE: Make sure the script is run against ContosoInMemory database
(check the dropdown box next to the Execute button still reads
ContosoInMemory.)
3. Click on Execute
Note the output of the execution to compare it against the other
execution times recorded (e.g., memory-optimized table with hash index
and natively compiled stored procedure: approximately 3000 ms)
Compare the results obtained from the performance tests. Note that the
performance offered by memory-optimized tables is fully realized when
data in a memory-optimized table is accessed from a natively compiled
stored procedure.
Comparing the results obtained from the first and last tests, run over
different intervals, performance has improved by about 10 times.
You were able to create a SQL database with memory-optimized tables
and natively compiled stored procedures. You also looked at the
performance of the various options and now understand which options
give the best performance gains for your database and application
Migrating to In-Memory OLTP
Analysis, Migrate and Report
For the existing IT team, migrating their existing on-premises databases
to the new In-Memory OLTP solution in SQL Server 2014 has to be easy.
Without this ease of initial deployment, there is an increased inertia for
organizations to keep using the old databases. There should also be a
simple method to identify the tables in the existing disk-based database
solution that would gain performance from the memory optimization
feature
Your organization is looking forward to answering its problems quickly
and is relying on the IT team to migrate the existing disk-based
databases to an efficient In-Memory OLTP database using SQL Server
2014. The challenge in front of them is to identify which tables need
converting into memory-optimized tables for performance gains, and to
find the existing bottlenecks in the SQL Server 2012 database solution.
They do not want to redevelop everything from scratch and wish to
retain most of the existing code if possible.
You are happy with the new database solution you created in the
prior exercise; however, you are not looking to convert all the tables in
the existing database to in-memory. You need to identify which tables
should be converted into memory-optimized tables to add the most
value and performance gain to the existing database with minimal
effort. You decide to use the Analysis, Migrate and Report (AMR) tool
to help you achieve your goals.
The AMR tool can be used to provide recommendations as to what
tables and procedures you might want to consider migrating to In-
Memory OLTP. AMR uses the Management Data Warehouse (MDW)
and the data collector to produce reports, available by right-clicking
on the MDW database and choosing Reports, then Management
Data Warehouse. You will then have the option to choose Transaction
Performance Analysis Overview.
Based on recommendations from the Management Data Warehouse
reports, you can start converting tables to be memory-optimized
tables, one at a time, starting with ones that would benefit most from
the memory optimized structures. Once your tables have been
converted, you can then start planning a rewrite of the code into
natively compiled stored procedures, again starting with the ones that
the Management Data Warehouse reports indicate would benefit the
most.
Configure the Management Data Warehouse
1. Close all open editor windows in SQL Server Management
Studio.
2. Check if SQL Server Agent is running in the SQLONE
connection by right-clicking on it in the Object Explorer. If
Start is not greyed out, click it and confirm you want to start
the service.
3. In Object Explorer, expand the Management node
4. Right-click Data Collection, and then click on Tasks and then
Configure Management Data Warehouse
5. This starts the Configure Management Data Warehouse
Wizard
6. Click on Next in the welcome page
7. In the Configure Management Data Warehouse Wizard
Storage page, click on New button to create a new MDW
database.
8. Enter MDW as the database name and click OK
9. In the configuration wizard, select MDW database from the
Database name dropdown
10. In the configuration wizard, click on Next to continue
11. Click on Next
NOTE: the current user/login is automatically mapped to all three roles.
The Map Users and Logins page is for mapping any other users, which is
not necessary for this click-through.
12. Review the summary and click on Finish
NOTE: This will take approximately 1 minute to configure
13. Click on Close to exit the wizard
Configure the data collection
1. In Object Explorer, expand the Management node (if this
node is already expanded, right-click on it and select Refresh)
2. Right-click Data Collection, and then Tasks and Configure
Data Collection
This starts the Configure Data Collection Wizard
3. Click on the ellipses (…) next to server name and connect to
SQLONE using Windows Authentication
4. Click Connect
5. In Database name select MDW
Optionally, users can select the location where collected data is cached
by clicking the ellipsis (…) button next to the Cache directory textbox
6. Check both options under Data collector sets (System
Data Collection Sets and Transaction Performance Collection
Sets)
7. Click on Next
8. If presented with a summary screen, click Finish
NOTE: If you get an error, ensure the SQL Server Agent is running
(repeat step 3)
NOTE: This step will take approximately 2 minutes to complete
9. Click on Close to exit the wizard
Generate the AMR Reports to identify performance-critical tables
and stored procedures:
1. From the File menu, select Open and File…
2. Browse to C:\SQLSCRIPTS\E2\E2F-1#1-Queries to show DB
activity.sql and select Open
3. Click on Execute
This script generates sample database activity for the AMR reports. It
will take approximately 8 minutes to execute and log the activity.
4. In Object Explorer, expand Management (if this node is
already expanded, right-click on it and select Refresh)
5. Expand Data Collection
6. Right click on Stored Procedure Usage Analysis and select
Collect and Upload Now
7. Click on Close after the collection and upload succeed
8. Right click on Table Usage Analysis and select Collect and
Upload Now
9. Click on Close after the collection and upload succeed
10. In the Object Explorer expand the Databases node (if this
node is already open, right-click on it and select Refresh) right-
click on the MDW database and select Reports, Management
Data Warehouse and Transaction Performance Analysis
Overview
NOTE: If you get an error “A data source instance has not been supplied
for the data source ‘ScanOverview’”, this means that data has not yet
been uploaded to the MDW database. Try repeating steps 6 and 8 above
and then wait a few minutes for processing to complete.
When the Transaction Performance Analysis Overview report opens, it
displays two sections, Tables Analysis and Stored Procedure Analysis.
The Tables Analysis section provides two criteria, Usage Analysis and
Contention Analysis.
The Stored Procedure Analysis section provides Usage Analysis criteria.
Clicking any of the links in the report takes you to a chart showing the
top database objects (tables or stored procedures) for memory
optimization, based on the criteria selected from the overview page
In the table charts, the horizontal axis represents decreasing effort of
memory optimization, while the vertical axis represents increasing
benefit of memory optimization. The top-right corner of the chart
therefore represents the database objects which will provide the
maximum performance gain with minimal migration effort
NOTE: If you do not see any data in the chart, wait for 15 minutes for the
jobs to refresh, then update the logs and then refresh the report
11. Click the different options in the number of Tables/Stored
Procedures section to show more or fewer items on the chart
12. Click on the ContosoRetailDWInMemory database in the list
under the Select database section to filter the chart data to
operations performed on ContosoRetailDWInMemory tables
and stored procedures. If ContosoRetailDWInMemory is not
present, wait 15 minutes for the data collector to update, then
reload the analysis report. You can complete later scenarios and
stories while waiting.
The chart highlights the table(s) and stored procedure(s) which are
potential candidates for migrating to In-Memory OLTP. You will see how
to migrate them in the next section
You used the AMR tool and accomplished the following tasks:
You analyzed the workload in the existing database to
determine whether In-Memory OLTP will improve performance.
You then reviewed the recommended tables and stored
procedures that would benefit most from conversion to
In-Memory OLTP.
You are able to plan and execute the migration of an existing
database to In-Memory OLTP. The migration path from a disk-
based table to a memory-optimized table can be time
consuming. Using the Memory Optimization Advisor, you can
identify the incompatibilities in existing tables that you must
remove before moving a table to In-Memory OLTP. The
Memory Optimization Advisor also helps you to understand the
impact that the migration of a table to a memory-optimized table
will have on an application
Memory Usage by Memory-Optimized Object Report
When running In-Memory OLTP, SQL Server needs to be configured
with sufficient memory to hold all the memory-optimized tables.
Failure to allocate sufficient memory will cause transactions to fail at
run time during any operation that requires additional memory
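One way to keep memory-optimized tables from starving the rest of the instance is to bind the database to a Resource Governor pool. This is an optional, illustrative sketch (the pool name and percentage are assumptions, and the lab does not perform this step):

```sql
-- Sketch: cap the memory available to a database's memory-optimized
-- objects by binding it to a Resource Governor pool (names illustrative).
CREATE RESOURCE POOL InMemPool WITH (MAX_MEMORY_PERCENT = 50);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
EXEC sys.sp_xtp_bind_db_resource_pool
     @database_name = N'ContosoRetailDWInMemory',
     @pool_name     = N'InMemPool';
-- The binding takes effect the next time the database is brought online.
ALTER DATABASE ContosoRetailDWInMemory SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE ContosoRetailDWInMemory SET ONLINE;
GO
```

With the binding in place, allocations for memory-optimized tables in that database are charged against the pool's limit instead of the whole instance's memory.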
After migrating the database, you want an instant report of
the current memory used by memory-optimized tables and their
indexes so that you can manage the memory requirements for the
database accordingly. You will use the new report available in SQL
Server 2014, the “Memory Usage by Memory Optimized Object”
report, to answer your concerns
This report shows you the space used by the table rows and the
indexes, as well as the small amount of space used by the system. Hash
indexes have memory allocated for the declared number of buckets as
soon as they are created, so this report will show memory usage for
those indexes even before any rows are inserted. For range indexes,
memory is not allocated until rows are added, and the memory
requirement will depend on the size of the index keys and the number
of rows.
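The same figures the report surfaces can be queried directly from the In-Memory OLTP DMVs. A minimal sketch, run in the context of the migrated database:

```sql
-- Sketch: per-table memory allocated/used by memory-optimized tables
-- and their indexes, from the In-Memory OLTP DMV.
SELECT OBJECT_NAME(t.object_id)            AS table_name,
       t.memory_allocated_for_table_kb,
       t.memory_used_by_table_kb,
       t.memory_allocated_for_indexes_kb,
       t.memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats AS t
ORDER BY t.memory_allocated_for_table_kb DESC;
```

For a hash index you should see allocated index memory immediately after table creation (the bucket array), while used memory grows only as rows are inserted.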
Note: we are assuming the DimDate table and the usp_GetDate stored
procedure are the bottlenecks identified in the previous story and need
to be memory-optimized for performance gains. We chose this
table due to memory restrictions on the server; the scripts can be
adapted to work on other tables and procedures
Using Memory Optimization Advisor to migrate tables:
1. In Object Explorer, expand Databases node followed by the
ContosoRetailDWInMemory database and then expand
Tables node
2. Right click on DimDate table which needs to be memory
optimized and select Memory Optimization Advisor
3. Click Next on the welcome page
4. Memory Optimization Advisor goes through a checklist to
see if the table meets all the necessary requirements to migrate
it to in-memory
5. If all the checks pass, click on Next. Otherwise, click on the
failure message to identify the problem, and fix the issue
before proceeding with the migration.
6. Click Next through the Memory Optimization Warnings
page
7. Enter ContosoRetailDWInmemory_mod as the filegroup
name and ContosoRetailDWInmemory_mod as the logical
file name if not automatically populated
8. Enter the file path that is C:\data. Alternatively, click on the
ellipses (…) button to browse for the location to store the
logical file and browse to C:\data
9. In Rename the original table as box, provide the name to be
given to the existing table (use DimDate_old)
10. Check the Also copy data to the new memory optimized
table option and click Next
NOTE: Do not check the last check box on this page, as doing so would
cause the table data to be cleared after a server restart
11. Select Datekey from the table by checking it in the list of
columns (one or more columns can be checked)
12. Provide a name for the primary key in the text box below, i.e.
DimDate_primarykey
13. Select the Use NONCLUSTERED HASH index option and
provide a value in the Bucket Count text box (use 100000)
14. Click Next
15. Verify the summary and click on Migrate
NOTE: Click on Script button if you want to generate a script of the
migration operation. We will not be using such a script here.
16. Click on OK to exit the wizard when all Results are passed
(note, comments are not a problem)
17. Optionally, migrate more tables to in-memory by repeating
the steps above for the desired tables
The new in-memory table(s) appear at the end of the tables list in Object
Explorer pane
You have migrated the disk-based table(s) suggested by the AMR tool to
in-memory tables. You can now test your queries and reports against the
tables to track the performance gain
Using Native Compilation Advisor to identify unsupported T-SQL
statements in a stored procedure and migrate it to a natively compiled
stored procedure:
1. In Object Explorer, expand Databases node followed by
ContosoRetailDWInMemory database, Programmability
and Stored procedures nodes (if any of these nodes are
already expanded, right-click on the node and click Refresh)
2. Right click on usp_GetDate – the stored procedure which
needs to be converted to a natively compiled stored procedure –
and select Native Compilation Advisor
3. Click Next on the welcome page
If there are any validation issues, click on Next to read more about the
unsupported T-SQL statements in the stored procedure
4. Click on Cancel to exit the wizard
5. If you had any issues in the Native Compilation Advisor,
modify the highlighted T-SQL statements to enable converting
the procedure to a natively compiled stored procedure
6. To alter the interpreted stored procedure (i.e.
usp_GetDate) to be natively compiled, make the following changes:
Right click usp_GetDate, select Script Stored Procedure
as, and choose DROP And CREATE To to script the stored
procedure in a new query window
Replace the AS statement after the CREATE PROCEDURE
statement with the following script:
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE
AS OWNER
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
LANGUAGE = N'english')
Add an END statement at the end, before GO
The end result should be the scripted procedure with the options
above in place
Click on Execute to run the query and alter the
stored procedure (note: this will not produce any results)
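As a reference point, the final script might look like the sketch below. The options match the replacement text given in step 6; the DROP block follows the pattern SSMS generates, and the procedure body is an assumption (usp_GetDate's actual body in the lab may differ):

```sql
-- Sketch of the end result. The WITH options come from the lab's
-- replacement text; the SELECT body is an illustrative assumption.
IF OBJECT_ID(N'dbo.usp_GetDate', N'P') IS NOT NULL
    DROP PROCEDURE dbo.usp_GetDate;
GO
CREATE PROCEDURE dbo.usp_GetDate
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'english')
    SELECT SYSDATETIME() AS CurrentDate;
END
GO
```

Note that NATIVE_COMPILATION, SCHEMABINDING, and a BEGIN ATOMIC block are all required together for natively compiled procedures in SQL Server 2014.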
You have now successfully migrated your interpreted stored procedure
to a natively compiled stored procedure
After looking at all the relevant memory utilization reports, you are
now confident about memory allocation, have set the plans required for
future growth in server memory, and can optimize the infrastructure to
suit your reporting/application needs.
Now that you can configure and migrate a disk-based database to an
In-Memory database in SQL Server 2014 efficiently, there are further
strategies you can implement to gain lightning-fast performance for
reporting queries and applications. The business demands better
performance, from minutes to seconds to milliseconds, and the new
features in SQL Server 2014 cater to these requests not only with
out-of-the-box features but also with common practices and improved
indexing
Strategies to Gain Performance
Consider the application that processes sales orders in the Contoso
database. There is a lot of processing on recent orders and very little
on old orders. You will implement an application-level partitioning
technique to gain further query performance. You decide that old orders
will live in a disk-based table, while orders after the
“hot date” will live in a memory-optimized table
Application-Level Partitioning
Creating hot and cold tables:
1. From File menu, select Open and File…
2. Browse to open C:\SQLSCRIPTS\E2\E2G-1#1-Create
tables.sql file
3. Click on Execute
You have created two tables to hold the new and old data of a slow-
performing table. New data will be stored in the memory-optimized
table, and old (or infrequently used) data will be inserted into or moved
to the disk-based table. You also created a third table which holds a value
marking the split between the two tables. In this case, you use a date.
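The script's contents are not reproduced in this manual, but the pattern it follows can be sketched as below. All table and column names here (SalesOrders_hot, SalesOrders_cold, SalesOrderSplit, and their columns) are illustrative assumptions, not the lab's actual names:

```sql
-- Hot table: memory-optimized, holds recent orders (names are illustrative).
CREATE TABLE dbo.SalesOrders_hot (
    SalesOrderID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    OrderDate    DATETIME2 NOT NULL,
    Amount       MONEY NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Cold table: traditional disk-based table, holds older orders.
CREATE TABLE dbo.SalesOrders_cold (
    SalesOrderID INT NOT NULL PRIMARY KEY,
    OrderDate    DATETIME2 NOT NULL,
    Amount       MONEY NOT NULL
);

-- Single-row table holding the split ("hot") date.
CREATE TABLE dbo.SalesOrderSplit (
    SplitDate DATETIME2 NOT NULL PRIMARY KEY NONCLUSTERED
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Memory-optimized tables in SQL Server 2014 must have at least one index, which is why even the single-row split table declares a nonclustered primary key.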
Creating stored procedure to set the split value:
1. From File menu, select Open and File…
2. Browse to open C:\SQLSCRIPTS\E2\E2G-1#2-Create stored
procedure.sql file
3. Click on Execute
You have now created stored procedures to insert the split date into the
split-date table and to insert order data into the appropriate table based
on the split date
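The insert logic the script implements might look like the following sketch. The procedure and table names (usp_InsertSalesOrder, SalesOrders_hot, SalesOrders_cold, SalesOrderSplit) are illustrative assumptions:

```sql
-- Illustrative sketch: route an order to the hot (memory-optimized) or
-- cold (disk-based) table depending on the stored split date.
CREATE PROCEDURE dbo.usp_InsertSalesOrder
    @SalesOrderID INT, @OrderDate DATETIME2, @Amount MONEY
AS
BEGIN
    DECLARE @SplitDate DATETIME2 =
        (SELECT TOP (1) SplitDate FROM dbo.SalesOrderSplit);

    IF @OrderDate >= @SplitDate
        INSERT INTO dbo.SalesOrders_hot (SalesOrderID, OrderDate, Amount)
        VALUES (@SalesOrderID, @OrderDate, @Amount);
    ELSE
        INSERT INTO dbo.SalesOrders_cold (SalesOrderID, OrderDate, Amount)
        VALUES (@SalesOrderID, @OrderDate, @Amount);
END
```

Because this is an interpreted (interop) procedure, it can read the memory-optimized split table and write to both the memory-optimized and disk-based tables in the same call.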
Validating data insertion:
1. From File menu, select Open and File…
2. Browse to open C:\SQLSCRIPTS\E2\E2G-1#3-Testing data
insertion.sql file
3. Click on Execute
You have successfully validated data insertion into hot and cold tables
based on the split date
Validating movement of data when changing split value:
1. From File menu, select Open and File…
2. Browse to open C:\SQLSCRIPTS\E2\E2G-1#4-Change split
value.sql file
3. Click on Execute then close everything without saving
You have successfully changed the split value and validated the data
movement between the hot and cold tables based on the changed split
value. You could now apply the same logic to the table(s) used by your
slow-performing application
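The script that changes the split value likely advances the date and moves newly "cold" rows in one transaction, along the lines of this sketch. All names are illustrative assumptions, and the WITH (SNAPSHOT) hints are needed because memory-optimized tables are accessed inside an explicit transaction:

```sql
-- Illustrative sketch: advance the split date, then move rows that are
-- now older than it from the hot (memory-optimized) table to the cold
-- (disk-based) table, all in one transaction.
CREATE PROCEDURE dbo.usp_SetSplitDate @NewSplitDate DATETIME2
AS
BEGIN
    BEGIN TRANSACTION;
        UPDATE dbo.SalesOrderSplit WITH (SNAPSHOT)
        SET SplitDate = @NewSplitDate;

        INSERT INTO dbo.SalesOrders_cold (SalesOrderID, OrderDate, Amount)
        SELECT SalesOrderID, OrderDate, Amount
        FROM dbo.SalesOrders_hot WITH (SNAPSHOT)
        WHERE OrderDate < @NewSplitDate;

        DELETE FROM dbo.SalesOrders_hot WITH (SNAPSHOT)
        WHERE OrderDate < @NewSplitDate;
    COMMIT TRANSACTION;
END
```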
After implementing the application-level partitioning, you observe a
drastic performance improvement in day-to-day queries.
Terms of use
© 2014 Microsoft Corporation. All rights reserved.
By using this Hands-on Lab, you agree to the following terms:
The technology/functionality described in this Hands-on Lab is
provided by Microsoft Corporation in a “sandbox” testing environment
for purposes of obtaining your feedback and to provide you with a
learning experience. You may only use the Hands-on Lab to evaluate
such technology features and functionality and provide feedback to
Microsoft. You may not use it for any other purpose. You may not
modify, copy, distribute, transmit, display, perform, reproduce, publish,
license, create derivative works from, transfer, or sell this Hands-on Lab
or any portion thereof.
COPYING OR REPRODUCTION OF THE HANDS-ON LAB (OR ANY
PORTION OF IT) TO ANY OTHER SERVER OR LOCATION FOR FURTHER
REPRODUCTION OR REDISTRIBUTION IS EXPRESSLY PROHIBITED.
THIS HANDS-ON LAB PROVIDES CERTAIN SOFTWARE
TECHNOLOGY/PRODUCT FEATURES AND FUNCTIONALITY,
INCLUDING POTENTIAL NEW FEATURES AND CONCEPTS, IN A
SIMULATED ENVIRONMENT WITHOUT COMPLEX SET-UP OR
INSTALLATION FOR THE PURPOSE DESCRIBED ABOVE. THE
TECHNOLOGY/CONCEPTS REPRESENTED IN THIS HANDS-ON LAB
MAY NOT REPRESENT FULL FEATURE FUNCTIONALITY AND MAY NOT
WORK THE WAY A FINAL VERSION MAY WORK. WE ALSO MAY NOT
RELEASE A FINAL VERSION OF SUCH FEATURES OR CONCEPTS. YOUR
EXPERIENCE WITH USING SUCH FEATURES AND FUNCTIONALITY IN A
PHYSICAL ENVIRONMENT MAY ALSO BE DIFFERENT.
FEEDBACK. If you give feedback about the technology features,
functionality and/or concepts described in this Hands-on Lab to
Microsoft, you give to Microsoft, without charge, the right to use, share
and commercialize your feedback in any way and for any purpose. You
also give to third parties, without charge, any patent rights needed for
their products, technologies and services to use or interface with any
specific parts of a Microsoft software or service that includes the
feedback. You will not give feedback that is subject to a license that
requires Microsoft to license its software or documentation to third
parties because we include your feedback in them. These rights survive
this agreement.
MICROSOFT CORPORATION HEREBY DISCLAIMS ALL WARRANTIES
AND CONDITIONS WITH REGARD TO THE HANDS-ON LAB,
INCLUDING ALL WARRANTIES AND CONDITIONS OF
MERCHANTABILITY, WHETHER EXPRESS, IMPLIED OR STATUTORY,
FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-
INFRINGEMENT. MICROSOFT DOES NOT MAKE ANY ASSURANCES OR
REPRESENTATIONS WITH REGARD TO THE ACCURACY OF THE
RESULTS, OUTPUT THAT DERIVES FROM USE OF THE VIRTUAL LAB, OR
SUITABILITY OF THE INFORMATION CONTAINED IN THE VIRTUAL LAB
FOR ANY PURPOSE.
DISCLAIMER
This lab contains only a portion of new features and enhancements in
Microsoft SQL Server 2014. Some of the features might change in
future releases of the product. In this lab, you will learn about some,
but not all, new features.