
Help: Huge transaction log (13 GB)


Christian Hamel

Jun 21, 2001, 4:38:42 PM
Hello.

We set up a Merge Replication process two months ago. Now I'm looking at
the database that is being replicated: the database itself is about 3 GB, but
the transaction log is 13 GB. Is that normal? I've tried to truncate it,
but the size doesn't change, so I assume there's no garbage to truncate.
Am I right, or is there a big problem?

Thank you for any help.

Christian


Rick Sawtell

Jun 21, 2001, 5:28:23 PM
Christian,

Are you trying to truncate the log or shrink the log? If the t-log is set
to autogrow, it could easily have expanded to that size. When you back up
the log, it truncates the data in the log but does not shrink the physical
file; to do that you need to run a DBCC SHRINKFILE command.

This is stolen wholesale from Neil Pike and Andrew Zanevsky. I forget where
I grabbed it from, but it should help.


Q. Why won't my log shrink in SQL 7?

A. This is because the log file is internally divided into smaller virtual
logs, and shrinking can only happen when the active part of the log is at
the beginning of the log file. You can use the DBCC LOGINFO(dbname) command
to see where the active part of the transaction log is - any virtual logs
marked with a Status of 2 are active.

The output below shows that the active virtual log file is the last in the
physical file, therefore no real shrinkage is possible.

FileId FileSize StartOffset FSeqNo Status Parity CreateTime
------ -------- ----------- ------ ------ ------ -----------------------
2 13041664 8192 0 0 0 1999-03-16 10:27:24.917
2 13041664 13049856 0 0 0 1999-03-16 10:25:56.730
2 13041664 26091520 0 0 0 1999-03-16 10:25:56.730
2 13041664 39133184 0 0 0 1999-03-16 10:25:56.730
2 13041664 52174848 0 0 0 1999-03-16 10:25:56.730
2 13041664 65216512 0 0 0 1999-03-16 10:25:56.730
2 13041664 78258176 0 0 0 1999-03-16 10:25:56.730
2 13557760 91299840 5 2 64 1999-04-06 12:32:27.833

In order to shrink the log:

1. Backup or truncate the log to make sure earlier Virtual Log Files are
reusable (check their status)
2. Execute dummy transactions (in a loop) against a test table until the
active virtual log moves back to the start of the list.
3. Execute DBCC SHRINKDATABASE or DBCC SHRINKFILE to mark a shrinkpoint.
(The actual shrink is normally performed asynchronously.)
4. Issue a backup or truncate log command to force the shrinkage to happen
immediately.
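Step 2 above might be sketched as follows (the table name and iteration
count here are arbitrary; run DBCC LOGINFO between passes to check whether
the active virtual log has wrapped back to the start of the file):

```sql
-- Pad the log with small transactions until the active virtual log
-- wraps around to the beginning of the physical file.
CREATE TABLE log_padding ( x CHAR(100) NOT NULL )

DECLARE @i INT
SELECT @i = 0
WHILE @i < 1000
BEGIN
    INSERT log_padding VALUES ( 'padding' )
    DELETE log_padding
    SELECT @i = @i + 1
END

DROP TABLE log_padding
```

The sp_force_shrink_log procedure below automates exactly this loop,
interleaved with DBCC SHRINKFILE and BACKUP LOG calls.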

If it is at the end of the log file you could write a small while loop that
does some inserts in a test table to move the active part to the beginning
of the real file. Then the SHRINKFILE command should work ok - note that
SHRINKFILE works asynchronously.

As a last resort you can always checkpoint the database, shut down SQL Server
and delete the physical log file. When SQL Server restarts it will create a
new 2 MB log device. Note that this is unsupported by Microsoft, and whilst
it has always worked for the author, others have reported problems doing this.

The following stored procedure was first published in the February 2000
issue of Microsoft SQL Server Professional, Pinnacle Publishing. Reproduced
here by kind permission of Andrew Zanevsky.

use master
go
if object_id( 'sp_force_shrink_log' ) is not null
    drop proc sp_force_shrink_log
go
create proc sp_force_shrink_log
----------------------------------------------------------------------------
-- Purpose: Shrink transaction log of the current database in SQL Server 7.0.
-- Author:  Andrew Zanevsky, AZ Databases, Inc., 12/25/1999, v5 - 08/18/2000
--          zane...@azdatabases.com
----------------------------------------------------------------------------
@target_percent tinyint = 0,
@target_size_MB int = 10,
@max_iterations int = 1000,
@backup_log_opt nvarchar(1000) = 'with truncate_only'
as
set nocount on

declare @db sysname,
@last_row int,
@log_size decimal(15,2),
@unused1 decimal(15,2),
@unused decimal(15,2),
@shrinkable decimal(15,2),
@iteration int,
@file_max int,
@file int,
@fileid varchar(5)

select @db = db_name(),
@iteration = 0

create table #loginfo (
id int identity,
FileId int,
FileSize numeric(22,0),
StartOffset numeric(22,0),
FSeqNo int,
Status int,
Parity smallint,
CreateTime datetime
)

create unique clustered index loginfo_FSeqNo on #loginfo ( FSeqNo,
StartOffset )

create table #logfiles ( id int identity(1,1), fileid varchar(5) not null )
insert #logfiles ( fileid ) select convert( varchar, fileid ) from sysfiles
where status & 0x40 = 0x40
select @file_max = @@rowcount

if object_id( 'table_to_force_shrink_log' ) is null
exec( 'create table table_to_force_shrink_log ( x nchar(3000) not null )' )

insert #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity,
CreateTime ) exec ( 'dbcc loginfo' )
select @last_row = @@rowcount

select @log_size = sum( FileSize ) / 1048576.00,
@unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
@shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize
else 0 end ) / 1048576.00
from #loginfo

select @unused1 = @unused -- save for later

select 'iteration' = @iteration,
'log size, MB' = @log_size,
'unused log, MB' = @unused,
'shrinkable log, MB' = @shrinkable,
'shrinkable %' = convert( decimal(6,2), @shrinkable * 100 / @log_size )

while @shrinkable * 100 / @log_size > @target_percent
and @shrinkable > @target_size_MB
and @iteration < @max_iterations begin
select @iteration = @iteration + 1 -- this is just a precaution

exec( 'insert table_to_force_shrink_log select name from sysobjects
delete table_to_force_shrink_log')

select @file = 0
while @file < @file_max begin
select @file = @file + 1
select @fileid = fileid from #logfiles where id = @file
exec( 'dbcc shrinkfile( ' + @fileid + ' )' )
end

exec( 'backup log [' + @db + '] ' + @backup_log_opt )

truncate table #loginfo
insert #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity,
CreateTime ) exec ( 'dbcc loginfo' )
select @last_row = @@rowcount

select @log_size = sum( FileSize ) / 1048576.00,
@unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
@shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize
else 0 end ) / 1048576.00
from #loginfo

select 'iteration' = @iteration,
'log size, MB' = @log_size,
'unused log, MB' = @unused,
'shrinkable log, MB' = @shrinkable,
'shrinkable %' = convert( decimal(6,2), @shrinkable * 100 / @log_size )
end

if @unused1 < @unused
select 'After ' + convert( varchar, @iteration ) +
' iterations the unused portion of the log has grown from ' +
convert( varchar, @unused1 ) + ' MB to ' +
convert( varchar, @unused ) + ' MB.'
union all
select 'Since the remaining unused portion is larger than 10 MB,' where
@unused > 10
union all
select 'you may try running this procedure again with a higher number of
iterations.' where @unused > 10
union all
select 'Sometimes the log would not shrink to a size smaller than several
Megabytes.' where @unused <= 10

else
select 'It took ' + convert( varchar, @iteration ) +
' iterations to shrink the unused portion of the log from ' +
convert( varchar, @unused1 ) + ' MB to ' +
convert( varchar, @unused ) + ' MB'

exec( 'drop table table_to_force_shrink_log' )
go

===

v1.05 2000.06.13
Applies to SQL Server versions : 7.0
FAQ Categories : Database Administration
Related FAQ articles : n/a
Related Microsoft Kb articles : n/a
Other related information : n/a
Authors : Neil Pike, Andrew Zanevsky


Christian Hamel

Jun 21, 2001, 5:40:14 PM
Is it safe to run this sp_force_shrink_log procedure on a replicated
database?
Will the users in the field who replicate against this database have any
problems the next time they try to synchronize?

"Rick Sawtell" <r_sa...@hotmail.com> wrote in message
news:Om53Ojp#AHA.1952@tkmsftngp04...

Tom Casey

Jun 22, 2001, 1:40:06 AM
Merge replication does not do any change tracking in the database log, so it
will not be impacted by actions on the log, such as truncation, nor will it
impact those actions.

You should be safe to truncate or shrink the log.


Christian Hamel

Jun 22, 2001, 7:14:43 AM
Ok then. Thanks a lot for this information, I feel a little bit better :)
Yes!! I will recover 13 GB.

Regards,
Christian


"Tom Casey" <tom...@microsoft.com> wrote in message
news:OSBFQ3t#AHA.1492@tkmsftngp05...

Christian Hamel

Jun 22, 2001, 8:48:10 AM
When I try to run the stored procedure, I get this error message:
Server: Msg 8114, Level 16, State 4, Procedure sp_force_shrink_log, Line 0
Error converting data type varchar to tinyint.

Any idea?

"Rick Sawtell" <r_sa...@hotmail.com> wrote in message
news:Om53Ojp#AHA.1952@tkmsftngp04...

Christian Hamel

Jun 22, 2001, 9:02:36 AM
In fact, this is the message I got:

Server: Msg 8115, Level 16, State 2, Line 1
Arithmetic overflow error converting expression to data type datetime.


"Christian Hamel" <cha...@technilab.NOSPAM.ca> wrote in message
news:OBTXukx#AHA.1300@tkmsftngp07...

Anders Klinting

Aug 6, 2001, 10:55:44 AM
I had the same problem (under SQL Server 2000) - I just changed the datetime
column to nvarchar(255) in the statement

create table #loginfo (
id int identity,
FileId int,
FileSize numeric(22,0),
StartOffset numeric(22,0),
FSeqNo int,
Status int,
Parity smallint,
CreateTime nvarchar(255)
)

The output from 'dbcc loginfo' that goes into the CreateTime field is
apparently something else in SQL 2000. It is not used by this stored
procedure, so I figured the change was safe.

And it works like a charm...

Anders


"Christian Hamel" <cha...@technilab.NOSPAM.ca> wrote in message

news:#Xqo0sx#AHA.1120@tkmsftngp07...
