Saturday, February 25, 2012

DB Maintenance Recommendations

I've read BOL on establishing a DB Maintenance Plan using the Wizard but it
really fails to make any real-world recommendations (that I can find) about
timing and options. I want to create a plan for a web application
production database of about 1.3GB in size with no scheduled downtime.
Are there any references/websites that list generally-accepted
recommendations for all of the options?
For example, how often should the system DBs be verified and backed up? How
often should indexes be rebuilt? Once a day? Once a week? Once a month? If
it runs an integrity check and tries to fix minor problems, won't that make
my DB (and my web application) unavailable to online users since it switches
to single-user mode? And so on.
Thanks for any pointers.|||Don Miller wrote:
> I've read BOL on establishing a DB Maintenance Plan using the Wizard but it
> really fails to make any real-world recommendations (that I can find) about
> timing and options. I want to create a plan for a web application
> production database of about 1.3GB in size with no scheduled downtime.
> Are there any references/websites that list generally-accepted
> recommendations for all of the options?
> For example, how often should the system DBs be verified and backed up? How
> often should indexes be rebuilt? Once a day? Once a week? Once a month? If
> it runs an integrity check and tries to fix minor problems, won't that make
> my DB (and my web application) unavailable to online users since it switches
> to single-user mode? And so on.
> Thanks for any pointers.
>
Finally, somebody sees through the wizards and realizes the complexity
they are hiding! Sorry, I hate those things... :-)
My suggestions:
1. Run weekly full backups, and frequent transaction log backups during
the week. I have a script that will do all of this for you:
http://realsqlguy.com/serendipity/a...-Send-In-Backup!.html
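In T-SQL, that schedule looks roughly like this (the database name and backup paths are placeholders, not taken from the linked script):

```sql
-- Weekly full backup, e.g. Sunday night (SQL Server 2000 syntax):
BACKUP DATABASE MyAppDB
    TO DISK = 'D:\Backups\MyAppDB_Full.bak'
    WITH INIT  -- overwrite last week's full backup

-- Transaction log backup, scheduled every 15-30 minutes during the week:
BACKUP LOG MyAppDB
    TO DISK = 'D:\Backups\MyAppDB_Log.trn'
    WITH NOINIT  -- append each log backup to the same file
```

Note the log backups only apply if the database is running under the Full recovery model.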
2. Rebuild indexes nightly or weekly, based on their level of
fragmentation. I have a script that I run nightly, rebuilding only
those that are fragmented beyond 30%:
http://realsqlguy.com/serendipity/archives/12-Humpty-Dumpty-Sat-On-A-Wall...html
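The core of that approach can be sketched like this ('Patients' is just an example table; the linked script automates this across all tables):

```sql
-- Report fragmentation for one table (SQL 2000 syntax);
-- LogicalFragmentation is the column to watch in the output:
DBCC SHOWCONTIG ('Patients') WITH TABLERESULTS

-- If logical fragmentation is beyond ~30%, rebuild all of the table's
-- indexes; '' means every index, 0 keeps the existing fill factor:
DBCC DBREINDEX ('Patients', '', 0)
```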
3. As far as the integrity checks, I maintain a "maintenance" copy of
my databases, which is created by restoring the weekly full backup. Run
your DBCC checks against this database. If errors exist in that
database, they also exist in the live database. You can deal with them
in a planned outage in the live database. Having this maintenance
database also gives you a "test" area that you can use to analyze data
problems, performance problems, etc..
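Restoring the weekly full into a side-by-side copy and checking it might look like this (database and file names here are illustrative):

```sql
-- Restore last weekend's full backup as a separate maintenance database:
RESTORE DATABASE MyAppDB_Maint
    FROM DISK = 'D:\Backups\MyAppDB_Full.bak'
    WITH MOVE 'MyAppDB_Data' TO 'E:\Maint\MyAppDB_Maint.mdf',
         MOVE 'MyAppDB_Log'  TO 'E:\Maint\MyAppDB_Maint.ldf',
         REPLACE

-- Run the integrity checks against the copy, not the live database:
DBCC CHECKDB ('MyAppDB_Maint')
```

The logical file names passed to MOVE must match the ones recorded in the backup; RESTORE FILELISTONLY against the backup file will list them.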
Hope that helps!
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:44EDCBB5.5040808@.realsqlguy.com...

> 1. Run weekly full backups, and frequent transaction log backups during
> the week. I have a script that will do all of this for you:
> http://realsqlguy.com/serendipity/a...-Send-In-Backup!.html
I really can't afford to lose any data, but could settle for a day's worth
of loss, and have little space for hundreds of transaction logs. I was thinking
about complete backups nightly and differentials throughout the day.

> 2. Rebuild indexes nightly or weekly, based on their level of
> fragmentation. I have a script that I run nightly, rebuilding only
> those that are fragmented beyond 30%:
>
http://realsqlguy.com/serendipity/archives/12-Humpty-Dumpty-Sat-On-A-Wall...html
I have not ever rebuilt indexes or even know how to check if they are in
trouble (there must be some way to check). My DB only deals with thousands
of rows and so there has been no noticeable degradation in performance.

> 3. As far as the integrity checks, I maintain a "maintenance" copy of
> my databases, which is created by restoring the weekly full backup. Run
> your DBCC checks against this database. If errors exist in that
> database, they also exist in the live database. You can deal with them
> in a planned outage in the live database. Having this maintenance
> database also gives you a "test" area that you can use to analyze data
> problems, performance problems, etc..
Great idea!|||Don Miller wrote:
> "Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
> news:44EDCBB5.5040808@.realsqlguy.com...
>
> I really can't afford to lose any data, but could settle for a day's worth
> loss, and have little space for hundreds of transaction logs. Was thinking
> about complete backups nightly and differential throughout the day.
>
You can modify my script to make it back up nightly. Find the first IF
statement, remove the entire line, and replace it with:
IF GETDATE() - @dtLastFullBackup > 1

> http://realsqlguy.com/serendipity/a.....html
> I have not ever rebuilt indexes or even know how to check if they are in
> trouble (there must be some way to check). My DB only deals with thousands
> of rows and so there has been no noticeable degradation in performance.
>
This script will do all of that for you, you just have to schedule it to
run.
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||If you can't afford to lose data then how do you justify not having the
disk space for log backups? You can get a 300GB drive for $100.00 these
days. How much will it cost you if you lose data? With that said, keep in
mind that a differential backup works by backing up the extents that have
had a change since the last full backup. The change can be just one bit out
of the entire 64K extent but it will back up the entire extent. If you
reindex all your tables your differential will most likely be the size of
your db. Something to think about.
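For reference, a differential is taken with the same BACKUP command (name and path are placeholders):

```sql
-- Backs up every extent modified since the last FULL backup, not since
-- the previous differential - so each one taken during the day grows,
-- and after a full reindex it can approach the size of the database:
BACKUP DATABASE MyAppDB
    TO DISK = 'D:\Backups\MyAppDB_Diff.bak'
    WITH DIFFERENTIAL, INIT
```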
Andrew J. Kelly SQL MVP
"Don Miller" <nospam@.nospam.com> wrote in message
news:u$ogvk5xGHA.4232@.TK2MSFTNGP05.phx.gbl...
> "Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
> news:44EDCBB5.5040808@.realsqlguy.com...
>
> I really can't afford to lose any data, but could settle for a day's worth
> loss, and have little space for hundreds of transaction logs. Was thinking
> about complete backups nightly and differential throughout the day.
>
> http://realsqlguy.com/serendipity/a.....html
> I have not ever rebuilt indexes or even know how to check if they are in
> trouble (there must be some way to check). My DB only deals with thousands
> of rows and so there has been no noticeable degradation in performance.
>
> Great idea!
>|||I've been playing with your script (on my production db moved to my dev
machine) since I have been very remiss in maintaining my database. I ran the
'defrag' option on the db (all indexes were about 50% fragmented), then I
did a SHRINKDB, and saw the size of my .mdf (and backup files) go from 1.3GB
to around 200MB!
Then I read about DBREINDEX and saw that your script handled that too. So, I
ran the rebuild script (on a pristine copy of my prod db without doing the
defrag first), shrunk the database again, and again got the same reduction
in file size. However, when I ran your script in the report mode AFTER the
rebuild and shrink, almost ALL of the files are reported as 75-100%
fragmented!!! Also, there are indexes showing up in the report as being 100%
fragmented that I can't even find. They are all named like
NameOfTable.tNameOfTable with a 't' prefix rather than 'IX_' there.
Is there a reason that after a DBREINDEX all the indexes are reported as
almost 100% fragmented?
What are those indexes with a 't' in their name?
Thanks for any explanation.
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:44EDD97B.5050604@.realsqlguy.com...
> Don Miller wrote:
> You can modify my script to make it backup nightly. Find the first IF
> statement, remove the entire line, and replace it with:
> IF GETDATE() - @.dtLastFullBackup > 1
>
http://realsqlguy.com/serendipity/a...-A-Wall...html
> This script will do all of that for you, you just have to schedule it to
> run.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||You're right of course. My server is at a dedicated hosting site with RAID 5
(4 drives in the array) with a total of 30gb for two partitions, so it is
not quite that easy, or inexpensive to add even 50 more gigs to my capacity.
They basically want folks to get a new server instead, or will charge $100 a
MONTH more for the extra capacity.
I appreciate your thoughts.
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:eqp%23bb%23xGHA.4912@.TK2MSFTNGP03.phx.gbl...
> If you can't afford to loose data then how do you justify not having the
> disk space for log backups? You can get a 300GB drive for $100.00 these
> days. How much will it cost you if you loose data? With that said keep in
> mind that a differential backup works by backing up the extents that have
> had a change since the last full backup. The change can be just one bit out
> of the entire 64K extent but it will back up the entire extent. If you
> reindex all your tables your differential will most likely be the size of
> your db. Something to think about.
> --
> Andrew J. Kelly SQL MVP
> "Don Miller" <nospam@.nospam.com> wrote in message
> news:u$ogvk5xGHA.4232@.TK2MSFTNGP05.phx.gbl...
>|||Can you post some of the actual results of the showcontig? You have to be
very careful about shrinking the data files. Shrink happens from the end of
the file inward and has to move any data near the end of the file to an
empty slot near the beginning. That shrinking process usually results in
fragmenting your indexes all over again in the process. A rebuild with DBCC
DBREINDEX requires about 1.2 times the size of your db in free space to do a
proper rebuild. This is due to the fact it makes a copy first and then drops
the original indexes and renames them. So you always need lots of free space
for rebuilds to operate properly. The key is to not have so little that
the rebuild has to grow the files each time. In general there is no penalty
for too much free space, except in a case like yours where you simply don't have the
room. Once you find that medium where you have enough space to do your daily
operations and can rebuild your indexes you will find that the fragmentation
will be better after a rebuild. With that said there are other factors in
play as well. If your db is that small I suspect you have a lot of tables or
indexes using mixed extents. Essentially anything under 8 pages will share
as much as 8 different extents for the 8 pages that normally would go all on
1 extent. If you have less than several hundred to a thousand pages in the
table I would not be too concerned with some fragmentation. Have a look at
these as well:
http://www.karaszi.com/SQLServer/info_dont_shrink.asp Shrinking
considerations
http://www.microsoft.com/technet/pr...n/ss2kidbp.mspx
Index Defrag Best Practices 2000
http://www.sql-server-performance.c..._showcontig.asp
Understanding DBCC SHOWCONTIG
http://www.sqlservercentral.com/col...illfactors.asp
Fill Factors
http://www.sql-server-performance.c...red_indexes.asp
Clustered Indexes
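One rough way (on SQL 2000) to see which tables are small enough to be sitting on mixed extents, where fragmentation figures can safely be ignored, is to query sysindexes directly. This is only a sketch; dpages counts the data pages of the heap or clustered index:

```sql
-- Tables whose data occupies fewer than 8 pages live on mixed extents:
SELECT OBJECT_NAME(id) AS TableName, dpages
FROM sysindexes
WHERE indid IN (0, 1)      -- heap or clustered index row
  AND dpages > 0
  AND dpages < 8
  AND OBJECTPROPERTY(id, 'IsUserTable') = 1
ORDER BY dpages
```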
Andrew J. Kelly SQL MVP
"Don Miller" <nospam@.nospam.com> wrote in message
news:uUbZgwDyGHA.3512@.TK2MSFTNGP04.phx.gbl...
> I've been playing with your script (on my production db moved to my dev
> machine) since I have been very remiss in maintaining my database. I ran
> the
> 'defrag' option on the db (all indexes were about 50% fragmented), then I
> did a SHRINKDB, and saw the size of my .mdf (and backup files) go from
> 1.3GB
> to around 200MB!
> Then I read about DBREINDEX and saw that your script handled that too. So,
> I
> ran the rebuild script (on a pristine copy of my prod db without doing the
> defrag first), shrunk the database again, and again got the same reduction
> in file size. However, when I ran your script in the report mode AFTER the
> rebuild and shrink, almost ALL of the files are reported as 75-100%
> fragmented!!! Also, there are indexes showing up in the report as being
> 100%
> fragmented that I can't even find. They are all named like
> NameOfTable.tNameOfTable with a 't' prefix rather than 'IX_' there.
> Is there a reason that after a DBREINDEX all the indexes are reported as
> almost 100% fragmented?
> What are those indexes with a 't' in their name?
> Thanks for any explanation.
>
> "Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
> news:44EDD97B.5050604@.realsqlguy.com...
> http://realsqlguy.com/serendipity/a.....html
>|||"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:OCSStLEyGHA.3844@.TK2MSFTNGP06.phx.gbl...
First of all, I REALLY appreciate your expert help. These remarkable changes
in size could be that I shunned maintenance over a period of YEARS. It could
easily be that my DB is in actuality so small (<300mb) that performance
issues do not appear and one of the problems was that fragmented indexes
were continually growing the size of the database unnecessarily.
Also, are the indexes with the small "t" in front of the table name maybe
have something to do with clustered indexes?

> Can you post some of the actual results of the showcontig?
This morning, I started over again after restoring to my dev machine, the
last full (simple) backup from production. The first thing I did was do the
DBREINDEX for all tables in the DB, then a SHRINKDB (all using the Simple
recovery model). The .mdf went 1.4gb to 280mb, and the .ldf went to 2mb.
Here's the showcontig of one table 'Patients' after all of that happened (is
there an easier to read way to post this?):
IndexName IndexId Level Pages Rows MinimumRecordSize MaximumRecordSize AverageRecordSize ForwardedRecords Extents ExtentSwitches AverageFreeBytes AveragePageDensity ScanDensity BestCount ActualCount LogicalFragmentation ExtentFragmentation
PK_Patients 1 0 176 NULL NULL NULL NULL NULL 0 23 NULL NULL 92 22 24 99 NULL
IX_Patients_ClinicalEDC 2 0 10 NULL NULL NULL NULL NULL 0 2 NULL NULL 67 2 3 90 NULL
IX_Patients_FirstName 3 0 11 NULL NULL NULL NULL NULL 0 4 NULL NULL 40 2 5 100 NULL
IX_Patients_GroupID 4 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 100 NULL
IX_Patients_LastName 5 0 11 NULL NULL NULL NULL NULL 0 4 NULL NULL 40 2 5 100 NULL
IX_Patients_ProviderID 6 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 100 NULL
IX_Patients_PersonID 7 0 8 NULL NULL NULL NULL NULL 0 1 NULL NULL 50 1 2 100 NULL
IX_Patients_LastNameSearch 8 0 11 NULL NULL NULL NULL NULL 0 3 NULL NULL 50 2 4 100 NULL
IX_Patients_FirstNameSearch 9 0 11 NULL NULL NULL NULL NULL 0 1 NULL NULL 100 2 2 82 NULL
IX_Patients_UserName 10 0 6 NULL NULL NULL NULL NULL 0 0 NULL NULL 100 1 1 83 NULL
IX_Patients_MaritalStatusID 12 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 88 NULL
IX_Patients_updateAuthorID 13 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 75 NULL
IX_Patients_ReligionID 14 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 75 NULL
IX_Patients_LanguageID 15 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 75 NULL
IX_Patients_PtRaceID 16 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 75 NULL
IX_Patients_AuthorID 17 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 75 NULL
IX_Patients_CtRaceID 39 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 75 NULL
IX_Patients_FacilityID 40 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 75 NULL
IX_Patients_OxFinalAuthorDate 49 0 10 NULL NULL NULL NULL NULL 0 4 NULL NULL 40 2 5 90 NULL
IX_Patients_HospitalID 50 0 8 NULL NULL NULL NULL NULL 0 3 NULL NULL 25 1 4 75 NULL
tPatients 255 0 194 15709 79 493 91 0 42 41 546 93 60 25 42 99 90

> shrinking process usually results in
> fragmenting your indexes all over again in the process. A rebuild with DBCC
> DBREINDEX requires about 1.2 times the size of your db in free space to do a
> proper rebuild.
> So you always need lots of free space
> for rebuilds to operate properly.
Do you mean free space on the hard drive (which on my dev machine I have
30gb to spare and on the production I have 4-5x the size of the database) or
'space available' within the database file (like you see when one brings up
properties for the database)?

> If your db is that small I suspect you have a lot of tables or
> indexes using mixed extents.
Yes, I have some 90 tables. I don't know what mixed extents means at this
point.

> Essentially anything under 8 pages will share
> as much as 8 different extents for the 8 pages that normally would go all on
> 1 extent. If you have less than several hundred to a thousand pages in the
> table I would not be too concerned with some fragmentation.
My table with the most rows has about 1500 pages.
Have a look at
> these as well:
>
> http://www.karaszi.com/SQLServer/info_dont_shrink.asp Shrinking
> considerations
>
http://www.microsoft.com/technet/pr...n/ss2kidbp.mspx
> Index Defrag Best Practices 2000
> http://www.sql-server-performance.c..._showcontig.asp
> Understanding DBCC SHOWCONTIG
>

> Fill Factors
> http://www.sql-server-performance.c...red_indexes.asp
> Clustered Indexes
>
> --
> Andrew J. Kelly SQL MVP
> "Don Miller" <nospam@.nospam.com> wrote in message
> news:uUbZgwDyGHA.3512@.TK2MSFTNGP04.phx.gbl...
>|||Maybe this is simpler (after DBREINDEX and SHRINKDB):
DBCC SHOWCONTIG scanning 'PtHxEvents' table...
Table: 'PtHxEvents' (1366295927); index ID: 1, database ID: 15
TABLE level scan performed.
- Pages Scanned........................: 1524
- Extents Scanned.......................: 194
- Extent Switches.......................: 194
- Avg. Pages per Extent..................: 7.9
- Scan Density [Best Count:Actual Count]......: 97.95% [191:195]
- Logical Scan Fragmentation ..............: 99.87%
- Extent Scan Fragmentation ...............: 12.89%
- Avg. Bytes Free per Page................: 33.2
- Avg. Page Density (full)................: 99.59%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
"Don Miller" <nospam@.nospam.com> wrote in message
news:uo0mBoEyGHA.356@.TK2MSFTNGP02.phx.gbl...
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:OCSStLEyGHA.3844@.TK2MSFTNGP06.phx.gbl...
> First of all, I REALLY appreciate your expert help. These remarkable
changes
> in size could be that I shunned maintenance over a period of YEARS. It
could
> easily be that my DB is in actuality so small (<300mb) that performance
> issues do not appear and one of the problems was that fragmented indexes
> were continually growing the size of the database unnecessarily.
> Also, are the indexes with the small "t" in front of the table name maybe
> have something to do with clustered indexes?
>
> This morning, I started over again after restoring to my dev machine, the
> last full (simple) backup from production. The first thing I did was do
the
> DBREINDEX for all tables in the DB, then a SHRINKDB (all using the Simple
> recovery model). The .mdf went 1.4gb to 280mb, and the .ldf went to 2mb.
> Here's the showcontig of one table 'Patients' after all of that happened
(is
> there an easier to read way to post this?):
> IndexName IndexId Level Pages Rows MinimumRecordSize
MaximumRecordSize
> AverageRecordSize ForwardedRecords Extents ExtentSwitches AverageFreeBytes
> AveragePageDenisty ScanDensity BestCount ActualCount LogicalFragmentation
> ExtentFragmentation
> PK_Patients 1 0 176 NULL NULL NULL NULL NULL 0 23 NULL NULL 92 22 24
> 99 NULL
> IX_Patients_ClinicalEDC 2 0 10 NULL NULL NULL NULL NULL 0 2 NULL
NULL
> 67 2 3 90 NULL
> IX_Patients_FirstName 3 0 11 NULL NULL NULL NULL NULL 0 4 NULL NULL
40
> 2 5 100 NULL
> IX_Patients_GroupID 4 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33
1
> 3 100 NULL
> IX_Patients_LastName 5 0 11 NULL NULL NULL NULL NULL 0 4 NULL NULL
40
> 2 5 100 NULL
> IX_Patients_ProviderID 6 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL
33
> 1 3 100 NULL
> IX_Patients_PersonID 7 0 8 NULL NULL NULL NULL NULL 0 1 NULL NULL 50
1
> 2 100 NULL
> IX_Patients_LastNameSearch 8 0 11 NULL NULL NULL NULL NULL 0 3 NULL
> NULL 50 2 4 100 NULL
> IX_Patients_FirstNameSearch 9 0 11 NULL NULL NULL NULL NULL 0 1 NULL
> NULL 100 2 2 82 NULL
> IX_Patients_UserName 10 0 6 NULL NULL NULL NULL NULL 0 0 NULL NULL
100
> 1 1 83 NULL
> IX_Patients_MaritalStatusID 12 0 8 NULL NULL NULL NULL NULL 0 2 NULL
> NULL 33 1 3 88 NULL
> IX_Patients_updateAuthorID 13 0 8 NULL NULL NULL NULL NULL 0 2 NULL
> NULL 33 1 3 75 NULL
> IX_Patients_ReligionID 14 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL
> 33 1 3 75 NULL
> IX_Patients_LanguageID 15 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL
> 33 1 3 75 NULL
> IX_Patients_PtRaceID 16 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL
33
> 1 3 75 NULL
> IX_Patients_AuthorID 17 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL
33
> 1 3 75 NULL
> IX_Patients_CtRaceID 39 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL
33
> 1 3 75 NULL
> IX_Patients_FacilityID 40 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL
> 33 1 3 75 NULL
> IX_Patients_OxFinalAuthorDate 49 0 10 NULL NULL NULL NULL NULL 0 4
> NULL NULL 40 2 5 90 NULL
> IX_Patients_HospitalID 50 0 8 NULL NULL NULL NULL NULL 0 3 NULL NULL
> 25 1 4 75 NULL
> tPatients 255 0 194 15709 79 493 91 0 42 41 546 93 60 25 42 99 90
>
> DBCC
do[vbcol=seagreen]
> a
> Do you mean free space on the hard drive (which on my dev machine I have
> 30gb to spare and on the production I have 4-5x the size of the database)
or
> 'space available' within the database file (like you see when one brings
up
> properties for the database)?
>
> Yes, I have some 90 tables. I don't know what mixed extents means at this
> point.
>
all[vbcol=seagreen]
> on
the[vbcol=seagreen]
> My table with the most rows has about 1500 pages
>
>
> Have a look at
>
http://www.microsoft.com/technet/pr...n/ss2kidbp.mspx
>

dev[vbcol=seagreen]
ran[vbcol=seagreen]
then[vbcol=seagreen]
> I
> So,
> the
> reduction
> the
being[vbcol=seagreen]
as[vbcol=seagreen]
day's[vbcol=seagreen]
IF[vbcol=seagreen]
only[vbcol=seagreen]
>
[url]http://realsqlguy.com/serendipity/archives/12-Humpty-Dumpty-Sat-On-A-Wall...html" target="_blank">http://www.sqlservercentral.com/col...-A-Wall...htmlreen">
are[vbcol=seagreen]
> performance.
> to
>
