How to configure MSDTC as a clustered service (Windows Server 2008+)

https://blogs.technet.microsoft.com/askcore/2009/02/17/how-to-configure-multiple-instances-of-distributed-transaction-coordinator-dtc-on-a-windows-server-failover-cluster-2008/

http://sqlha.com/2013/03/12/how-to-properly-configure-dtc-for-clustered-instances-of-sql-server-with-windows-server-2008-r2/

http://www.michaelsteineke.com/post/2012/06/21/Clustered-DTC-and-Multiple-SQL-Instances.aspx

How many IP addresses are required for a 2-node active-passive SQL Server failover cluster instance?

Total: 6–7 (plus 2 more if an iSCSI SAN is used)

  • 1 for the WSFC cluster network name
  • 1 for the SQL Server instance network name
  • 2 (1 per node) for the private/internal/heartbeat network between cluster nodes
  • 2 (1 per node) for public network access
  • 1 for MSDTC (if MSDTC is configured as a clustered service)
  • 2 (1 per node) for iSCSI HBAs/network adapters that communicate with the SAN (if an iSCSI SAN is used)

An iSCSI HBA offloads iSCSI TCP/IP processing from the host CPU.

Storage basics: IOPS, Latency, Throughput, Queue Depth

https://www.brentozar.com/archive/2013/09/iops-are-a-scam/

https://blog.docbert.org/queue-depth-iops-and-latency/

Queue depth is the number of I/O requests (SCSI commands) that can be queued at one time on a storage controller. Each I/O request from the host’s initiator HBA to the storage controller’s target adapter consumes a queue entry. A higher queue depth typically lets more I/O stay in flight and so raises throughput, but per-request latency grows as the queue fills.
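The three numbers are tied together by Little's Law (as the queue-depth article above discusses): sustained IOPS is roughly the number of outstanding I/Os divided by the average latency per I/O. A quick worked example (the numbers are illustrative, not from the source):

    IOPS ≈ queue depth (outstanding I/Os) / average latency (seconds)
    e.g. 32 outstanding I/Os at 1 ms average latency ≈ 32 / 0.001 = 32,000 IOPS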

How do In-Memory OLTP features work so much faster?

How memory-optimized tables perform faster

Dual nature: A memory-optimized table has a dual nature: one representation in active memory, and the other on the hard disk. Each transaction is committed to both representations of the table. Transactions operate against the much faster active memory representation. Memory-optimized tables benefit from the greater speed of active memory versus the disk. Further, the greater nimbleness of active memory makes practical a more advanced table structure that is optimized for speed. The advanced structure is also pageless, so it avoids the overhead and contention of latches and spinlocks.
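As a minimal sketch of that dual nature (the table and column names are made up for illustration, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup), a durable memory-optimized table is declared with MEMORY_OPTIMIZED = ON and DURABILITY = SCHEMA_AND_DATA, which keeps both the in-memory and on-disk representations:

    -- Illustrative table; assumes a database with a MEMORY_OPTIMIZED_DATA filegroup
    CREATE TABLE dbo.ShoppingCart
    (
        CartId     INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
        UserId     INT NOT NULL INDEX ix_UserId HASH WITH (BUCKET_COUNT = 1000000),
        CreatedUtc DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);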

No locks: The memory-optimized table relies on an optimistic approach to the competing goals of data integrity versus concurrency and high throughput. During the transaction, the table does not place locks on any version of the updated rows of data. This can greatly reduce contention in some high volume systems.

Row versions: Instead of locks, the memory-optimized table adds a new version of an updated row in the table itself, not in tempdb. The original row is kept until after the transaction is committed. During the transaction, other processes can read the original version of the row.

  • When multiple versions of a row are created for a disk-based table, row versions are stored temporarily in tempdb.
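For example (a sketch reusing the illustrative table above), interpreted T-SQL reading a memory-optimized table can use the SNAPSHOT table hint to read a consistent row version without taking any locks, even while another transaction is updating the same rows:

    -- Reads the committed row version; no shared locks are taken
    SELECT CartId, UserId, CreatedUtc
    FROM dbo.ShoppingCart WITH (SNAPSHOT)
    WHERE UserId = 42;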

Less logging: The before and after versions of the updated rows are held in the memory-optimized table. The pair of rows provides much of the information that is traditionally written to the log file. This enables the system to write less information, and less often, to the log. Yet transactional integrity is ensured.

How native procs perform faster

Converting a regular interpreted stored procedure into a natively compiled stored procedure greatly reduces the number of instructions to execute during run time.
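A hedged sketch of what that looks like (object names are illustrative and reuse the table above): a natively compiled procedure is created with NATIVE_COMPILATION and SCHEMABINDING, and its body is a single ATOMIC block that is compiled to machine code when the procedure is created rather than interpreted statement by statement at run time:

    CREATE PROCEDURE dbo.usp_AddCartItem
        @UserId INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        -- Body is compiled to native code at CREATE time
        INSERT INTO dbo.ShoppingCart (UserId, CreatedUtc)
        VALUES (@UserId, SYSUTCDATETIME());
    END;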

Trade-offs of In-Memory features

Trade-offs of memory-optimized tables

Estimate memory: You must estimate the amount of active memory that your memory-optimized table will consume. Your computer system must have adequate memory capacity to host a memory-optimized table. For details see:
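Once a memory-optimized table exists, one way to check what it actually consumes (a sketch assuming the sys.dm_db_xtp_table_memory_stats DMV, available in SQL Server 2014 and later) is:

    -- Memory used by each memory-optimized table and its indexes in the current database
    SELECT OBJECT_NAME(ms.object_id) AS table_name,
           ms.memory_used_by_table_kb,
           ms.memory_used_by_indexes_kb
    FROM sys.dm_db_xtp_table_memory_stats AS ms
    ORDER BY ms.memory_used_by_table_kb DESC;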

Partition your large table: One way to meet the demand for lots of active memory is to partition your large table into parts in-memory that store hot recent data rows versus other parts on the disk that store cold legacy rows (such as sales orders that have been fully shipped and completed). This partitioning is a manual process of design and implementation. See:
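One common shape for this manual partitioning (a sketch with hypothetical object names, not a pattern prescribed by the source) is a view that unions a memory-optimized table holding the hot rows with a disk-based table holding the cold rows, so applications keep querying one logical name:

    -- Hot rows in memory, cold rows on disk, exposed as one logical table
    CREATE VIEW dbo.SalesOrders
    AS
    SELECT OrderId, OrderDate, OrderStatus FROM dbo.SalesOrders_Hot    -- memory-optimized
    UNION ALL
    SELECT OrderId, OrderDate, OrderStatus FROM dbo.SalesOrders_Cold;  -- disk-based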

Trade-offs of native procs

  • A natively compiled stored procedure cannot access a disk-based table. A native proc can access only memory-optimized tables.
  • The first time a native proc runs after the server or database is brought back online, the native proc must be recompiled once. This causes a one-time delay before it starts to run.