Storage basics: IOPS, Latency, Throughput, Queue Depth

https://www.brentozar.com/archive/2013/09/iops-are-a-scam/

https://blog.docbert.org/queue-depth-iops-and-latency/

Queue Depth is the number of I/O requests (SCSI commands) that can be queued at one time on a storage controller. Each I/O request from the host’s initiator HBA to the storage controller’s target adapter consumes a queue entry. Typically, a higher queue depth equates to better performance.
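
These numbers are linked by Little's Law: sustained IOPS is roughly the number of outstanding I/Os (the queue depth) divided by the average latency. For example, a device that completes requests with 1 ms average latency while 32 requests are outstanding can sustain roughly 32 / 0.001 = 32,000 IOPS. Once the queue depth exceeds what the device can actually service in parallel, pushing it higher mostly increases latency rather than throughput, which is why a higher queue depth only "typically" helps.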


How In-Memory OLTP features work so much faster

How memory-optimized tables perform faster

Dual nature: A memory-optimized table has a dual nature: one representation in active memory, and the other on the hard disk. Each transaction is committed to both representations of the table. Transactions operate against the much faster active memory representation. Memory-optimized tables benefit from the greater speed of active memory versus the disk. Further, the greater nimbleness of active memory makes practical a more advanced table structure that is optimized for speed. The advanced structure is also pageless, so it avoids the overhead and contention of latches and spinlocks.
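
As a minimal, hypothetical sketch (the table, column, and index names are illustrative, and the database is assumed to already contain a MEMORY_OPTIMIZED_DATA filegroup), a memory-optimized table is declared like this:

    -- DURABILITY = SCHEMA_AND_DATA gives the dual nature described above:
    -- transactions operate on the in-memory representation, and changes are also
    -- persisted to disk for recovery. DURABILITY = SCHEMA_ONLY would keep the
    -- data in memory only (it would be lost on restart).
    CREATE TABLE dbo.SalesOrder_Hot
    (
        OrderId    INT IDENTITY NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        CustomerId INT NOT NULL,
        OrderDate  DATETIME2 NOT NULL,
        Amount     MONEY NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);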

No locks: The memory-optimized table relies on an optimistic approach to the competing goals of data integrity versus concurrency and high throughput. During the transaction, the table does not place locks on any version of the updated rows of data. This can greatly reduce contention in some high volume systems.

Row versions: Instead of locks, the memory-optimized table adds a new version of an updated row in the table itself, not in tempdb. The original row is kept until after the transaction is committed. During the transaction, other processes can read the original version of the row.

  • When multiple versions of a row are created for a disk-based table, row versions are stored temporarily in tempdb.
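
A minimal two-session sketch of this behavior, reusing the hypothetical dbo.SalesOrder_Hot table from the earlier example (the two sessions are separate connections, and a row with OrderId = 1 is assumed to exist):

    -- Session 1: update a row in an explicit transaction, but do not commit yet.
    -- The SNAPSHOT table hint is one way to access a memory-optimized table
    -- inside an explicit transaction.
    BEGIN TRANSACTION;
    UPDATE dbo.SalesOrder_Hot WITH (SNAPSHOT)
    SET Amount = Amount + 10
    WHERE OrderId = 1;

    -- Session 2 (run while session 1 is still open): no lock is taken and the
    -- reader is not blocked; it sees the last committed version of the row.
    SELECT Amount
    FROM dbo.SalesOrder_Hot WITH (SNAPSHOT)
    WHERE OrderId = 1;

    -- Session 1: commit. The new row version now becomes the visible version.
    COMMIT TRANSACTION;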

Less logging: The before and after versions of the updated rows are held in the memory-optimized table. The pair of rows provides much of the information that is traditionally written to the log file. This enables the system to write less information, and less often, to the log. Yet transactional integrity is ensured.

How native procs perform faster

Converting a regular interpreted stored procedure into a natively compiled stored procedure greatly reduces the number of instructions to execute during run time.
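
A minimal sketch of a natively compiled procedure, again against the hypothetical dbo.SalesOrder_Hot table from above; native procs require SCHEMABINDING and an ATOMIC block:

    -- Compiled to machine code when the procedure is created, so far fewer
    -- instructions are executed per call at run time than for an interpreted proc.
    CREATE PROCEDURE dbo.usp_InsertSalesOrder
        @CustomerId INT,
        @Amount     MONEY
    WITH NATIVE_COMPILATION, SCHEMABINDING
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        INSERT INTO dbo.SalesOrder_Hot (CustomerId, OrderDate, Amount)
        VALUES (@CustomerId, SYSUTCDATETIME(), @Amount);
    END;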

Trade-offs of In-Memory features

Trade-offs of memory-optimized tables

Estimate memory: You must estimate the amount of active memory that your memory-optimized table will consume, and your computer system must have adequate memory capacity to host it. For details, see the documentation on estimating memory requirements for memory-optimized tables.

Partition your large table: One way to keep the active-memory requirement manageable is to partition your large table: store the hot, recent rows in memory-optimized parts and the cold, legacy rows (such as sales orders that have been fully shipped and completed) in disk-based parts. This partitioning is a manual process of design and implementation; a minimal sketch follows.
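
The sketch below builds on the hypothetical dbo.SalesOrder_Hot table from the earlier example: completed orders live in a disk-based table (here with a clustered columnstore index), a view presents both halves as one dataset, and moving rows from hot to cold is left to an application job.

    -- Hypothetical cold/history table on disk, compressed by a clustered
    -- columnstore index.
    CREATE TABLE dbo.SalesOrder_History
    (
        OrderId    INT NOT NULL,
        CustomerId INT NOT NULL,
        OrderDate  DATETIME2 NOT NULL,
        Amount     MONEY NOT NULL,
        INDEX cci_SalesOrder_History CLUSTERED COLUMNSTORE
    );
    GO

    -- One logical view over the hot (memory-optimized) and cold (disk-based) rows.
    CREATE VIEW dbo.SalesOrderAll AS
        SELECT OrderId, CustomerId, OrderDate, Amount FROM dbo.SalesOrder_Hot
        UNION ALL
        SELECT OrderId, CustomerId, OrderDate, Amount FROM dbo.SalesOrder_History;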

Trade-offs of native procs

  • A natively compiled stored procedure cannot access a disk-based table. A native proc can access only memory-optimized tables.
  • The first time a native proc runs after the server or database is brought back online, it must be recompiled once. This causes a delay before the native proc starts to run.

 

In-Memory OLTP Overview and Usage Scenarios

  • High-throughput and low-latency transaction processing
  • Data ingestion, including IoT (Internet-of-Things)
  • Caching and session state
  • Tempdb object replacement
  • ETL (Extract Transform Load)

High-throughput and low-latency transaction processing

This is really the core scenario for which we built In-Memory OLTP: supporting large volumes of transactions, with consistent low latency for individual transactions.

Common workload scenarios are: trading of financial instruments, sports betting, mobile gaming, and ad delivery. Another common pattern we’ve seen is a “catalog” that is frequently read and/or updated. One example is where you have large files, each distributed over a number of nodes in a cluster, and you catalog the location of each shard of each file in a memory-optimized table.

Implementation considerations

Use memory-optimized tables for your core transaction tables, i.e., the tables with the most performance-critical transactions. Use natively compiled stored procedures to optimize execution of the logic associated with the business transaction. The more of the logic you can push down into stored procedures in the database, the more benefit you will see from In-Memory OLTP.

To get started in an existing application:

  1. Use the transaction performance analysis report to identify the objects you want to migrate.
  2. Use the memory-optimization and native compilation advisors to help with migration.

Data ingestion, including IoT (Internet-of-Things)

In-Memory OLTP is very good at ingesting large volumes of data from many different sources at the same time. Ingesting into a SQL Server database is often preferable to other destinations, because SQL Server makes running queries against the data fast and allows you to get real-time insights.

Common application patterns are:

  • Ingesting sensor readings and events, to allow notification as well as historical analysis.
  • Managing batch updates, even from multiple sources, while minimizing the impact on the concurrent read workload.

Implementation considerations

Use a memory-optimized table for the data ingestion. If the ingestion consists mostly of inserts (rather than updates) and the In-Memory OLTP storage footprint of the data is a concern, either:

  • Regularly batch-offload data to a disk-based table with a clustered columnstore index, using a job that runs INSERT INTO <disk-based table> SELECT FROM <memory-optimized table> (a sketch follows this list); or
  • Use a temporal memory-optimized table to manage historical data – in this mode, historical data lives on disk, and data movement is managed by the system.
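
A minimal sketch of the first option, using hypothetical table and column names; the job body moves readings older than an hour from the memory-optimized table to a disk-based columnstore table (a production job would also add retry logic for write conflicts):

    -- Hypothetical hot (memory-optimized) and cold (disk-based columnstore) tables.
    CREATE TABLE dbo.SensorReadings_Hot
    (
        ReadingId   BIGINT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
        SensorId    INT NOT NULL,
        ReadingTime DATETIME2 NOT NULL,
        Value       FLOAT NOT NULL,
        INDEX ix_ReadingTime NONCLUSTERED (ReadingTime)
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    CREATE TABLE dbo.SensorReadings_Cold
    (
        SensorId    INT NOT NULL,
        ReadingTime DATETIME2 NOT NULL,
        Value       FLOAT NOT NULL,
        INDEX cci_SensorReadings_Cold CLUSTERED COLUMNSTORE
    );
    GO

    -- Job body: copy readings older than one hour to the cold table, then delete
    -- them from the hot table, in a single snapshot transaction.
    DECLARE @cutoff DATETIME2 = DATEADD(HOUR, -1, SYSUTCDATETIME());

    BEGIN TRANSACTION;

    INSERT INTO dbo.SensorReadings_Cold (SensorId, ReadingTime, Value)
    SELECT SensorId, ReadingTime, Value
    FROM dbo.SensorReadings_Hot WITH (SNAPSHOT)
    WHERE ReadingTime < @cutoff;

    DELETE FROM dbo.SensorReadings_Hot WITH (SNAPSHOT)
    WHERE ReadingTime < @cutoff;

    COMMIT TRANSACTION;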

Caching and session state

The In-Memory OLTP technology makes SQL really attractive for maintaining session state (e.g., for an ASP.NET application) and for caching.

Implementation considerations

You can use non-durable memory-optimized tables as a simple key-value store by storing a BLOB in a varbinary(max) column. Alternatively, you can implement a semi-structured cache with JSON support in SQL Server and Azure SQL Database. Finally, you can create a full relational cache through non-durable tables with a full relational schema, including various data types and constraints.
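
A minimal sketch of the key-value approach, with hypothetical names; SCHEMA_ONLY durability means the table definition survives a restart but the cached data does not, and the data generates no log or data-file IO:

    -- Hypothetical non-durable session-state cache.
    -- varbinary(max) columns in memory-optimized tables require SQL Server 2016+;
    -- the BIN2 collation on the key is required on SQL Server 2014 and optional later.
    CREATE TABLE dbo.SessionCache
    (
        SessionId   NVARCHAR(100) COLLATE Latin1_General_100_BIN2 NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        SessionData VARBINARY(MAX) NULL,
        LastTouched DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);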

Tempdb object replacement

Leverage non-durable tables and memory-optimized table types to replace your traditional tempdb-based #temp tables, table variables, and table-valued parameters (TVPs).

Memory-optimized table variables and non-durable tables typically reduce CPU and completely remove log IO, when compared with traditional table variables and #temp tables.

Implementation considerations

To get started see: Improving temp table and table variable performance using memory optimization.
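
A minimal sketch, with hypothetical names, of a memory-optimized table type used where a #temp table or table variable would otherwise be used (the same type can also serve as a table-valued parameter):

    -- Creating a memory-optimized table type requires a memory-optimized
    -- filegroup in the database.
    CREATE TYPE dbo.OrderIdList AS TABLE
    (
        OrderId INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000)
    )
    WITH (MEMORY_OPTIMIZED = ON);
    GO

    -- Used like a regular table variable, but with no tempdb usage and no log IO.
    DECLARE @Orders dbo.OrderIdList;
    INSERT INTO @Orders (OrderId) VALUES (1), (2), (3);
    SELECT OrderId FROM @Orders;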

ETL (Extract Transform Load)

ETL workflows often include load of data into a staging table, transformations of the data, and load into the final tables.

Implementation considerations

Use non-durable memory-optimized tables for the data staging. They completely remove all IO, and make data access more efficient.

If you perform transformations on the staging table as part of the workflow, you can use natively compiled stored procedures to speed up these transformations. If you can do these transformations in parallel you get additional scaling benefits from the memory-optimization.
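
A minimal sketch, with hypothetical names: a non-durable memory-optimized table holds the staged rows, and the final load into the disk-based destination is a regular set-based INSERT...SELECT (the UPPER call stands in for a real transformation):

    -- Hypothetical non-durable staging table: no log or data-file IO for staged rows.
    CREATE TABLE dbo.Staging_Sales
    (
        RowId      INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
        RawPayload NVARCHAR(4000) NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

    -- Hypothetical final, disk-based destination table.
    CREATE TABLE dbo.Sales_Final
    (
        RowId   INT IDENTITY NOT NULL PRIMARY KEY,
        Payload NVARCHAR(4000) NOT NULL
    );
    GO

    -- Transform and load in one set-based statement.
    INSERT INTO dbo.Sales_Final (Payload)
    SELECT UPPER(RawPayload)
    FROM dbo.Staging_Sales;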

Data Compression in SQL Server

Row Compression Implementation

Enabling compression only changes the physical storage format of the data that is associated with a data type but not its syntax or semantics. Application changes are not required when one or more tables are enabled for compression. The new record storage format has the following main changes:

  • It reduces the metadata overhead that is associated with the record. This metadata is information about columns, their lengths, and offsets. In some cases, the metadata overhead might be larger than in the old storage format.
  • It uses variable-length storage format for numeric types (for example integer, decimal, and float) and the types that are based on numeric (for example datetime and money).
  • It stores fixed-length character strings by using a variable-length format that does not store trailing blank characters.
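
A minimal sketch of estimating and then enabling row compression, using a hypothetical disk-based table:

    -- Hypothetical disk-based table.
    CREATE TABLE dbo.OrderHistory
    (
        OrderId   INT NOT NULL PRIMARY KEY,
        OrderDate DATETIME2 NOT NULL,
        Amount    MONEY NOT NULL
    );
    GO

    -- Estimate the space savings before committing to the change.
    EXEC sp_estimate_data_compression_savings
        @schema_name      = N'dbo',
        @object_name      = N'OrderHistory',
        @index_id         = NULL,
        @partition_number = NULL,
        @data_compression = N'ROW';

    -- Enable row compression by rebuilding the table (its clustered index).
    ALTER TABLE dbo.OrderHistory REBUILD WITH (DATA_COMPRESSION = ROW);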

Page Compression Implementation

Compressing the leaf level of tables and indexes with page compression consists of three operations in the following order:

  1. Row compression
  2. Prefix compression
  3. Dictionary compression
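
Page compression is enabled the same way as row compression, per table or per index; a minimal sketch, reusing the hypothetical dbo.OrderHistory table from the row-compression example:

    -- Apply page compression (row, then prefix, then dictionary compression on
    -- leaf-level pages) to the table's clustered index.
    ALTER TABLE dbo.OrderHistory REBUILD WITH (DATA_COMPRESSION = PAGE);

    -- The same option is available per index (the index name is hypothetical).
    CREATE INDEX ix_OrderHistory_OrderDate
        ON dbo.OrderHistory (OrderDate)
        WITH (DATA_COMPRESSION = PAGE);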