In-Memory OLTP Overview and Usage Scenarios

Overview and Usage Scenarios

  • High-throughput and low-latency transaction processing
  • Data ingestion, including IoT (Internet-of-Things)
  • Caching and session state
  • Tempdb object replacement
  • ETL (Extract Transform Load)

High-throughput and low-latency transaction processing

This is the core scenario for which we built In-Memory OLTP: supporting large volumes of transactions with consistent low latency for individual transactions.

Common workload scenarios are: trading of financial instruments, sports betting, mobile gaming, and ad delivery. Another common pattern we’ve seen is a “catalog” that is frequently read and/or updated. One example is where you have large files, each distributed over a number of nodes in a cluster, and you catalog the location of each shard of each file in a memory-optimized table.

Implementation considerations

Use memory-optimized tables for your core transaction tables, i.e., the tables with the most performance-critical transactions. Use natively compiled stored procedures to optimize execution of the logic associated with the business transaction. The more of the logic you can push down into stored procedures in the database, the more benefit you will see from In-Memory OLTP.
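As a starting point, the following is a minimal sketch of both pieces, assuming the database already has a MEMORY_OPTIMIZED_DATA filegroup; the table, procedure, and column names are hypothetical, and the bucket count is a placeholder you would size for your workload.

  -- Hypothetical durable memory-optimized table for a performance-critical workload.
  CREATE TABLE dbo.TradeOrders
  (
      OrderId   BIGINT        NOT NULL
          PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
      Symbol    NVARCHAR(10)  NOT NULL,
      Quantity  INT           NOT NULL,
      Price     DECIMAL(18,4) NOT NULL,
      CreatedAt DATETIME2     NOT NULL
  )
  WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
  GO

  -- Hypothetical natively compiled procedure encapsulating the transaction logic.
  CREATE PROCEDURE dbo.usp_InsertTradeOrder
      @OrderId BIGINT, @Symbol NVARCHAR(10), @Quantity INT, @Price DECIMAL(18,4)
  WITH NATIVE_COMPILATION, SCHEMABINDING
  AS
  BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
      INSERT INTO dbo.TradeOrders (OrderId, Symbol, Quantity, Price, CreatedAt)
      VALUES (@OrderId, @Symbol, @Quantity, @Price, SYSUTCDATETIME());
  END;
  GO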

To get started in an existing application:

  1. use the transaction performance analysis report to identify the objects you want to migrate, and
  2. use the memory-optimization and native compilation advisors to help with migration.

Data ingestion, including IoT (Internet-of-Things)

In-Memory OLTP is really good at ingesting large volumes of data from many different sources at the same time. It is often beneficial to ingest data into a SQL Server database rather than into other destinations, because SQL Server makes running queries against the data fast and gives you real-time insights.

Common application patterns are:

  • Ingesting sensor readings and events, to allow notification as well as historical analysis.
  • Managing batch updates, even from multiple sources, while minimizing the impact on the concurrent read workload.

Implementation considerations

Use a memory-optimized table for the data ingestion. If the ingestion consists mostly of inserts (rather than updates) and the In-Memory OLTP storage footprint of the data is a concern, either:

  • Regularly batch-offload data to a disk-based table with a Clustered Columnstore index, using a job that runs INSERT INTO <disk-based table> SELECT FROM <memory-optimized table> (see the sketch after this list); or
  • Use a temporal memory-optimized table to manage historical data – in this mode, historical data lives on disk, and data movement is managed by the system.
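
The offload pattern can look roughly like the following sketch; the table names, columns, and the one-hour cutoff are hypothetical. The SNAPSHOT hints are needed because the memory-optimized table is accessed inside an explicit transaction.

  -- Hypothetical disk-based history table with a clustered columnstore index.
  CREATE TABLE dbo.SensorReadingsHistory
  (
      ReadingId BIGINT    NOT NULL,
      SensorId  INT       NOT NULL,
      Reading   FLOAT     NOT NULL,
      ReadingAt DATETIME2 NOT NULL,
      INDEX cci_SensorReadingsHistory CLUSTERED COLUMNSTORE
  );
  GO

  -- Run periodically (e.g., from a SQL Server Agent job) to move older rows
  -- out of the hypothetical memory-optimized table dbo.SensorReadings.
  DECLARE @cutoff DATETIME2 = DATEADD(HOUR, -1, SYSUTCDATETIME());

  BEGIN TRANSACTION;

  INSERT INTO dbo.SensorReadingsHistory (ReadingId, SensorId, Reading, ReadingAt)
  SELECT ReadingId, SensorId, Reading, ReadingAt
  FROM dbo.SensorReadings WITH (SNAPSHOT)
  WHERE ReadingAt < @cutoff;

  DELETE FROM dbo.SensorReadings WITH (SNAPSHOT)
  WHERE ReadingAt < @cutoff;

  COMMIT TRANSACTION;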

Caching and session state

The In-Memory OLTP technology makes SQL Server really attractive for maintaining session state (e.g., for an ASP.NET application) and for caching.

Implementation considerations

You can use non-durable memory-optimized tables as a simple key-value store by storing a BLOB in a varbinary(max) column. Alternatively, you can implement a semi-structured cache using the JSON support in SQL Server and Azure SQL Database. Finally, you can create a full relational cache through non-durable tables with a full relational schema, including various data types and constraints.
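
A minimal sketch of the key-value approach follows, assuming SQL Server 2016 or later (needed for LOB columns in memory-optimized tables); the table and column names are hypothetical. SCHEMA_ONLY durability means the contents are never logged or persisted, which is acceptable for a cache or session store.

  -- Hypothetical non-durable key-value cache table.
  CREATE TABLE dbo.SessionCache
  (
      SessionId NVARCHAR(100)  NOT NULL
          PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
      Payload   VARBINARY(MAX) NOT NULL,
      ExpiresAt DATETIME2      NOT NULL
  )
  WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);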

Tempdb object replacement

Leverage non-durable tables and memory-optimized table types to replace your traditional tempdb-based #temp tables, table variables, and table-valued parameters (TVPs).

Memory-optimized table variables and non-durable tables typically reduce CPU and completely remove log IO, when compared with traditional table variables and #temp tables.

Implementation considerations

To get started, see: Improving temp table and table variable performance using memory optimization.
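
As a quick illustration, a memory-optimized table type can replace a traditional table variable or TVP; a minimal sketch, with a hypothetical type name, is:

  -- Hypothetical memory-optimized table type.
  CREATE TYPE dbo.OrderIdList AS TABLE
  (
      OrderId BIGINT NOT NULL PRIMARY KEY NONCLUSTERED
  )
  WITH (MEMORY_OPTIMIZED = ON);
  GO

  -- Variables of this type live in the user database's memory space rather than
  -- in tempdb, and generate no log IO.
  DECLARE @ids dbo.OrderIdList;
  INSERT INTO @ids (OrderId) VALUES (1), (2), (3);
  SELECT OrderId FROM @ids;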

ETL (Extract Transform Load)

ETL workflows often include loading data into a staging table, transforming the data, and loading it into the final tables.

Implementation considerations

Use non-durable memory-optimized tables for data staging. They completely remove storage IO and make data access more efficient.

If you perform transformations on the staging table as part of the workflow, you can use natively compiled stored procedures to speed up these transformations. If you can do these transformations in parallel, you get additional scaling benefits from the memory optimization.
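
A minimal sketch of this pattern follows; the staging table, procedure, and the currency-normalization transform are hypothetical. Note that natively compiled modules can only reference memory-optimized tables, so the final load into disk-based tables would use regular (interpreted) T-SQL.

  -- Hypothetical non-durable staging table: loading into it generates no log IO.
  CREATE TABLE dbo.Staging_Sales
  (
      SaleId    BIGINT        NOT NULL PRIMARY KEY NONCLUSTERED,
      Amount    DECIMAL(18,2) NOT NULL,
      Currency  NCHAR(3)      NOT NULL,
      AmountUSD DECIMAL(18,2) NULL
  )
  WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
  GO

  -- Hypothetical natively compiled procedure performing an in-place transformation.
  CREATE PROCEDURE dbo.usp_NormalizeStagedSales
      @UsdPerEur DECIMAL(18,6)
  WITH NATIVE_COMPILATION, SCHEMABINDING
  AS
  BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
      UPDATE dbo.Staging_Sales
      SET AmountUSD = Amount * @UsdPerEur
      WHERE Currency = N'EUR' AND AmountUSD IS NULL;

      UPDATE dbo.Staging_Sales
      SET AmountUSD = Amount
      WHERE Currency = N'USD' AND AmountUSD IS NULL;
  END;
  GO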
