Displays transactions pending in the transaction log at the publication database
- Allows for skipping commands at the Log Reader Agent
- Useful for a large update you may not want propagated (e.g. an update touching millions of rows you don’t want replicated)
- Situations where re-initializing would be too time-consuming or costly
- This procedure can be used in emergency situations to allow truncation of the transaction log when transactions pending replication are present.
- Use as an exception, not as a rule
Flushes the ‘article cache’ for the transaction log and allows the Log Reader to resume work (only one log-reading process can run against a database at a time)
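The notes above match the documented sp_repltrans / sp_repldone pattern; a minimal sketch, assuming that is the procedure intended and that both are run in the publication database:

```sql
-- Run in the publication database.
-- List replicated transactions still pending in the transaction log:
EXEC sp_repltrans;

-- EMERGENCY ONLY: mark every pending transaction as distributed so the
-- log can truncate past them. This breaks consistency with subscribers,
-- which will typically need to be re-initialized afterwards.
EXEC sp_repldone
    @xactid = NULL,
    @xact_seqno = NULL,
    @numtrans = 0,
    @time = 0,
    @reset = 1;
```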
Sample taken from Joe Sack’s Pluralsight course on Transactional Replication Fundamentals
CREATE EVENT SESSION ExtendedEventsSessionName
ON SERVER
ADD EVENT sqlos.wait_info(
    WHERE ([package0].[equal_uint64]([sqlserver].[session_id], (61)))) -- 61 = spid being traced
ADD TARGET package0.asynchronous_file_target
    (SET FILENAME = N'C:\TEMP\xet.xel', -- CHECK that these are cleared
         METADATAFILE = N'C:\TEMP\xet.xem');

ALTER EVENT SESSION ExtendedEventsSessionName
ON SERVER STATE = START;

-- Let the workload run

ALTER EVENT SESSION ExtendedEventsSessionName
ON SERVER STATE = STOP;
-- Raw data into intermediate table
-- (Make sure you've cleared out previous target files!)
SELECT CAST(event_data AS XML) AS event_data
INTO #RawEventData -- intermediate table name assumed
FROM sys.fn_xe_file_target_read_file(
    N'C:\TEMP\xet*.xel', N'C:\TEMP\xet*.xem', NULL, NULL);

-- Aggregated data into intermediate table
SELECT
    event_data.value('(/event/action[@name="session_id"]/value)[1]', 'smallint') AS session_id,
    event_data.value('(/event/data[@name="wait_type"]/text)[1]', 'varchar(100)') AS wait_type,
    event_data.value('(/event/data[@name="duration"]/value)[1]', 'bigint') AS duration,
    event_data.value('(/event/data[@name="signal_duration"]/value)[1]', 'bigint') AS signal_duration,
    event_data.value('(/event/data[@name="completed_count"]/value)[1]', 'bigint') AS completed_count
INTO #ShreddedEventData -- intermediate table name assumed
FROM #RawEventData;

-- Final result set
SELECT
    session_id,
    wait_type,
    SUM(duration) AS total_duration,
    SUM(signal_duration) AS total_signal_duration,
    SUM(completed_count) AS total_completed_count
FROM #ShreddedEventData
GROUP BY session_id,
         wait_type
ORDER BY session_id,
         wait_type;
Disaster recovery and multi-site scenarios
A traditional availability group requires that all servers be part of the same WSFC cluster, which can make spanning multiple data centers challenging.
Distributed availability groups offer a more flexible deployment scenario for availability groups that span multiple data centers. You can even use distributed availability groups where features such as log shipping were used in the past. However, unlike traditional availability groups, distributed availability groups cannot have delayed application of transactions. This means that availability groups or distributed availability groups cannot help in the event of human error in which data is incorrectly updated or deleted.
Distributed availability groups are loosely coupled, which in this case means that they don’t require a single WSFC cluster and they’re maintained by SQL Server. Because the WSFC clusters are maintained individually and the synchronization is primarily asynchronous between the two availability groups, it’s easier to configure disaster recovery at another site. The primary replicas in each availability group synchronize their own secondary replicas.
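A minimal sketch of the documented CREATE AVAILABILITY GROUP ... WITH (DISTRIBUTED) syntax, run on the global primary. The names [distributedag], 'ag1', 'ag2', and the listener URLs are hypothetical placeholders, not taken from these notes:

```sql
-- Run on the primary replica of the first availability group (the global primary).
-- AG names and listener URLs below are placeholders.
CREATE AVAILABILITY GROUP [distributedag]
   WITH (DISTRIBUTED)
   AVAILABILITY GROUP ON
      'ag1' WITH
      (
         LISTENER_URL = 'tcp://ag1-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      ),
      'ag2' WITH
      (
         LISTENER_URL = 'tcp://ag2-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      );
```

The primary replica of the second availability group then joins with a matching ALTER AVAILABILITY GROUP ... JOIN AVAILABILITY GROUP ON statement. Note the asynchronous commit mode between the two groups, which is what makes cross-site configuration practical.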
Migrate by using a distributed availability group
Because distributed availability groups support two completely different availability group configurations, they enable not only easier disaster-recovery and multi-site scenarios, but also migration scenarios. Whether you are migrating to new hardware or virtual machines (on-premises or IaaS in the public cloud), configuring a distributed availability group allows a migration to occur where, in the past, you might have used backup, copy, and restore, or log shipping.
The ability to migrate is especially useful in scenarios where you’re changing or upgrading the underlying OS while you keep the same SQL Server version. Although Windows Server 2016 does allow a rolling upgrade from Windows Server 2012 R2 on the same hardware, most users choose to deploy new hardware or virtual machines.
To complete the migration to the new configuration, at the end of the process, stop all data traffic to the original availability group, and change the distributed availability group to synchronous data movement. This action ensures that the primary replica of the second availability group is fully synchronized, so there would be no data loss. After you’ve verified the synchronization, fail over the distributed availability group to the secondary availability group.
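The steps above can be sketched with the documented T-SQL, assuming a distributed availability group named [distributedag] spanning hypothetical groups 'ag1' and 'ag2':

```sql
-- On the global primary: switch the distributed AG to synchronous data movement
ALTER AVAILABILITY GROUP [distributedag]
   MODIFY AVAILABILITY GROUP ON
      'ag1' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT),
      'ag2' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);

-- Wait until the databases report SYNCHRONIZED in
-- sys.dm_hadr_database_replica_states, then demote the global primary:
ALTER AVAILABILITY GROUP [distributedag] SET (ROLE = SECONDARY);

-- On the primary replica of the second availability group, complete the failover:
ALTER AVAILABILITY GROUP [distributedag] FORCE_FAILOVER_ALLOW_DATA_LOSS;
```

FORCE_FAILOVER_ALLOW_DATA_LOSS is the only supported failover option for a distributed availability group; verifying synchronization first is what keeps the operation lossless.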
Scale out readable replicas with distributed availability groups
A single distributed availability group can have up to 16 secondary replicas, as needed, so it can have up to 18 copies for reading, including the two primary replicas of the two availability groups. This approach means that more than one site can have near-real-time access to the data for reporting by various applications.