Hi,
Rebuilding indexes on large tables is becoming impractical, since the log
file grows to roughly the same size as the table whose index is being
rebuilt. Changing the database recovery model is not an option in some cases
(for example, if you use log shipping), so if you have a 100GB table you'll
probably need ~80GB of log file space to rebuild a clustered index.
Is there a way to work around this? Or do we start creating transaction log
files on partitions equal to the size of the data file, doubling database
storage requirements?
Regards,
AD

There's really not much you can do about this. You could try DBCC INDEXDEFRAG and see if that results
in less logging; it may or may not. Another option is to take regular log backups while the rebuild
job is running, so you truncate the log "for each table". Or spread the rebuilding of indexes out over
the week, for instance. A major thing is to not rebuild if there is no fragmentation in the first place.
There's sample code for this in Books Online, under DBCC SHOWCONTIG.
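A minimal sketch of that check-before-you-rebuild approach (the table and index names are placeholders, and the ~10% threshold is just an illustration, not the Books Online sample):

```sql
-- Inspect fragmentation first; look at "Logical Scan Fragmentation"
-- in the output before deciding to touch the index at all.
DBCC SHOWCONTIG ('MyBigTable', 'PK_MyBigTable')

-- Only if fragmentation is significant (say, above ~10%), defrag.
-- INDEXDEFRAG works in many small transactions, so log backups taken
-- while it runs have a chance to keep the log file from growing.
DBCC INDEXDEFRAG (0, 'MyBigTable', 'PK_MyBigTable')
```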
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"AD" <AD@.discussions.microsoft.com> wrote in message
news:89F4B74A-29CA-4579-9A48-C0C791550C59@.microsoft.com...
Thanks for your reply. INDEXDEFRAG produces the same log growth and is less
effective. I've tried scheduling a log backup, but the job doesn't start while
the index rebuild job is running; it waits until the rebuild completes. I
don't think there is a way to break the rebuild of a single large index into
smaller pieces.
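The closest I can get to bounding the log is one rebuild per night, bracketed by log backups, along these lines (database, table, index, and backup path are placeholders; the rebuild is still fully logged, so the log has to hold one index's worth of activity, but only one):

```sql
-- Nightly job: truncate the log, rebuild a single large index,
-- then truncate again, so the log only ever holds one rebuild.
BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb_log.trn'
DBCC DBREINDEX ('MyBigTable', 'PK_MyBigTable', 90)
BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb_log.trn'
```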
Thanks,
AD
"Tibor Karaszi" wrote:
Thursday, March 8, 2012
DB optimisation and transaction Log size