Large DML in merge replication also results in locking, high I/O, and slow response times, because changes are applied at the subscribers row by row.
So, you might need to find out exactly which tables and which rows are pending replication.
SCRIPT!
-- run on Publisher/Subscriber
-- NOTE: if the table is published in multiple publications, the number of rows
-- returned by the queries below would be article changes * subscribers
-- Finding inserted/updated records
select distinct a.name table_name, b.generation, b.coldate transaction_time, c.rowguid changed_row, b.changecount tran_count
from sysmergearticles a
join sysobjects o on o.name = a.name
join msmerge_genhistory b on a.nickname = b.art_nick
join msmerge_contents c on c.generation = b.generation and c.tablenick = b.art_nick
where b.genstatus = 0
order by c.rowguid
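
If you only need a quick count of pending inserts/updates per table rather than the row-level detail, the same joins can be aggregated. This is a sketch derived from the query above, not part of the original script:

-- Summary sketch: pending insert/update rows per article
select a.name table_name, count(*) pending_rows
from sysmergearticles a
join msmerge_genhistory b on a.nickname = b.art_nick
join msmerge_contents c on c.generation = b.generation and c.tablenick = b.art_nick
where b.genstatus = 0
group by a.name
order by pending_rows desc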
-- Finding deleted records
select a.name table_name, b.generation, b.coldate transaction_time, t.rowguid changed_row, b.changecount tran_count
from sysmergearticles a
join MSmerge_tombstone t on a.nickname = t.tablenick
join msmerge_genhistory b on a.nickname = b.art_nick and b.generation = t.generation
where b.genstatus = 0
order by b.coldate desc
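
As an alternative to querying the merge system tables directly, SQL Server also ships the sp_showpendingchanges procedure, which reports pending changes per article. The call below is only a sketch; all parameters are optional:

-- run in the publication or subscription database
-- @show_rows = 1 also lists the rowguids of the pending rows
exec sp_showpendingchanges @show_rows = 1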
In the next post, we will discuss how to skip replicating the large DML rows for a particular table and apply the change manually on the subscriber, restoring the normal merge process and saving business time.