Saturday, February 25, 2012

DB Mirroring in a mixed 64bit/32bit Environment

I have several satellite 32-bit installations of SQL 2005. I am in the process
of upgrading my main SQL server from SQL 2000 to SQL 2005. I want to make good
use of the 32 GB of RAM in this box, so I am considering the 64-bit version.
One of the principal features of SQL 2005 we want to leverage is DB mirroring.
Can I mirror 32-bit databases on my 32-bit servers to my 64-bit server and vice
versa? Does SQL 2005 even care about the CPU architecture?
Thanks,
--Nick.

From BOL "Prerequisites and Recommendations for Database Mirroring":
"The SQL Server on-disk storage format is the same in the 64-bit and 32-bit
environments. Therefore, a database mirroring session can combine server
instances that run in a 32-bit environment and server instances that run in a
64-bit environment."
Hope this helps,
Ben Nevarez
Senior Database Administrator
AIG SunAmerica
"NickN" wrote:

> I have several satellite 32bit installations of SQL2005. I am in the process
> of upgrading my main SQL server from SQL2000 to SQL2005. I want to make good
> use of 32GB of RAM in this box, so I am considering the 64bit version.
> One of the principal features of SQL2005 we want to leverage is DB mirroring.
> Can I mirror 32bit databases on my 32bit servers to my 64bit server and vice
> versa? Does SQL2005 even care about the CPU architecture?
> Thanks,
> --Nick.
|||
Thanks very much, Ben. I guess I just skimmed the pages too fast in the BOL.
That'll teach me.
--Nick.
"Ben Nevarez" wrote:
> From BOL "Prerequisites and Recommendations for Database Mirroring":
> "The SQL Server on-disk storage format is the same in the 64-bit and 32-bit
> environments. Therefore, a database mirroring session can combine server
> instances that run in a 32-bit environment and server instances that run in a
> 64-bit environment."
> Hope this helps,
> Ben Nevarez
> Senior Database Administrator
> AIG SunAmerica
>
> "NickN" wrote:
>


DB mirroring + load balancing

Is it compatible to set up SQL DB mirroring on 2 different physical servers
(high availability + FULL transaction safety + automatic failover with a third
server as witness) and simultaneously set up network load balancing between
these 2 servers (an option with Windows Server 2003 Enterprise)?

If yes, which type of licenses do I need, and how many (for SQL Server 2005
and Windows Server 2003), with such a configuration?

- 4 physical servers in total: A, B, C and D
- load balancing between A and B (same application)
- load balancing between C and D (same application)
- A hosts principal DB X, B hosts mirror of DB X
- C hosts principal DB Y, D hosts mirror of DB Y
- D is witness for A and B
- B is witness for C and D
- A and B: only internal clients (<25)
- C and D: both internal and external clients (internal clients <25, number of external clients is unknown (>25))

Hoping this description will be helpful enough for you...
(I am looking for a very high availability system.)

Thank you in advance for your support.

This licensing question is pretty urgent (for a bid): a quick answer would be
very appreciated...

Without going into the details of your design:

There is one thing I know for sure. MS counts a principal and its "mirror" as one license. This was confirmed by the MS "presales and licensing division". So, whatever license you buy (Enterprise or Standard, depending on the hardware you are going to use), you need two MS SQL licenses for the 4 servers listed in your design.

|||

The database on the mirror server is always in the "Restoring" state, and you can't connect to it. I don't see how you plan to achieve load balancing with this.

|||

The load balancing is set up for automatic transfer of the total load to one server when the other one is lost (high availability, with no time to activate the backup server).

When one server is carrying 100% of the load, I need that server to have read and write access to the backup SQL database, which must be a real mirror of the one located on the lost server.

I am not at all familiar with such systems, which is why I need validation of the concept I imagined.

According to your comment, the mirror database exists only so the lost database can be restored, but in case of failure it could be used as a replacement for the lost one. Am I right?

Could you recommend a redundant system architecture that maximizes availability?

Thanks in advance for your expertise.

Yann

|||

Database mirroring allows you to maintain a hot standby of your database.

The mirror database is always in the "Restoring" state and is not available for user connections. When your principal server (the primary server serving the application) fails, you use the mirror to fail over. Please note that you don't do a "restore" with a mirror as you mentioned; you fail over to the mirror so that the database becomes available to your application.
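As a sketch of what that failover looks like in SQL Server 2005 (the database name is a placeholder):

```sql
-- "MyDb" is a placeholder name.
-- Manual failover: run on the principal while the session is synchronized.
ALTER DATABASE MyDb SET PARTNER FAILOVER;

-- Forced service: run on the mirror when the principal is lost
-- (the mirror becomes principal; unsent transactions may be lost).
ALTER DATABASE MyDb SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;
```

With a witness configured and full safety on, the failover happens automatically, so these statements are mainly for planned maintenance or disaster recovery.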

I suggest reading the following:

http://www.microsoft.com/technet/prodtechnol/sql/2005/dbmirror.mspx

http://www.microsoft.com/technet/prodtechnol/sql/2005/technologies/dbm_best_pract.mspx

http://www.microsoft.com/technet/prodtechnol/sql/2005/dbmirfaq.mspx

SQL Server Books Online for Database Mirroring.

|||

Tom Rizzo has a chapter on database mirroring in his Apress book "Pro SQL Server 2005".

He discusses the ability to use high-availability mirroring (i.e., automated failover) while using the second server for queries. A quick synopsis:

A mirroring solution only runs as fast as the slowest database server. When you write to the principal database and the mirror is working, the time to execute that write is the slower of the write to the principal and the write to the mirror. So, if you are taxing the resources of your mirror server with other kinds of activity, it can slow down your overall application, because the principal server won't commit a transaction until the mirror has also committed the same transaction.

That being said, you can still use the mirror for read-only access to the mirrored data through the use of database snapshots. As I recall, snapshots are only available in the Enterprise edition of SQL Server.

Even though Tom has only one chapter on mirroring, I found it VERY helpful, and a lot easier to wade through than the MS documentation, at least for establishing a base understanding of the technology.

Cheers,

Ben
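A sketch of the snapshot approach mentioned above (database, logical file, and path names are all illustrative, not from the thread):

```sql
-- Create a point-in-time, read-only snapshot of the mirror database
-- for reporting (Enterprise edition only). Run on the mirror server.
CREATE DATABASE MyDb_Snap
ON ( NAME = MyDb_Data,                        -- logical data file name of MyDb
     FILENAME = 'D:\Snapshots\MyDb_Snap.ss' ) -- sparse file for the snapshot
AS SNAPSHOT OF MyDb;

-- Reporting queries then target the snapshot, not the mirror itself:
-- SELECT ... FROM MyDb_Snap.dbo.SomeTable;
```

Note the snapshot is frozen at creation time; to see newer data you drop it and create a fresh one on a schedule.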

|||

When the database mirror is used only as a standby in case of emergencies, it does NOT need to be separately licensed. If the mirror is used for other purposes, including reporting, then it must be licensed.

Regards,

Matt Hollingsworth

Sr. Program Manager

SQL Server High Availability

|||

Thanks for your reply.

I then have some other questions:
* What can I really do with the mirror database after failover of the main database?
* Can clients still enter new records or make modifications on the database?
* How does the procedure to switch back to the main database work?

Thanks in advance for your additional information.

Yann


DB Mirror Problem

Hi,

I've got a problem with the DB mirroring in SQL 2005.

Configuration:

Server 1 Name: ABC001. NIC#1: IP 10.1.1.1, no DNS server, no gateway. NIC#2: IP 192.1.1.11.

Server 2 Name: ABC002. NIC#1: IP 10.1.1.2, no DNS server, no gateway. NIC#2: IP 192.1.1.12.

NIC#1 is a crossover link between the 2 servers; NIC#2 is connected to the local network.

I cannot set up the mirror with the built-in tool, nor by hand.

With the tool I receive the message:

"

One or more of the server network addresses lacks a fully qualified domain name (FQDN). Specify the FQDN for each server, and click Start Mirroring again.

The syntax for a fully-qualified TCP address is:
TCP://<computer_name>.<domain_segment>[.<domain_segment>]:<port>

"

By hand (the endpoints are ready, added by the tool):

-- Specify the partner from the principal server

ALTER DATABASE [MyDB] SET PARTNER =
N'TCP://ABC002:5022';

and also

ALTER DATABASE [MyDB] SET PARTNER =
N'TCP://10.1.1.1:5022';

Both fail with an error like "the other server cannot be reached".

I can ping the other server and use Windows file sharing. I have created a hosts entry for ABC002 so that NIC#1 is used at all times.

Does anyone have an idea how to resolve this?

Thanks!

The error message is very clear.

Use an FQDN:

TCP://<computer_name>.<domain_segment>[.<domain_segment>]:<port>

For example:

TCP://mycomputer.mycompany.com:<port>
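For the servers in this thread, that would look roughly like the following sketch. The domain suffix "mycompany.com" is an assumption; substitute your own, or add hosts-file entries so these FQDNs resolve over the crossover link:

```sql
-- On the mirror (ABC002) first, pointing at the principal:
ALTER DATABASE [MyDB] SET PARTNER = N'TCP://ABC001.mycompany.com:5022';

-- Then on the principal (ABC001), pointing at the mirror:
ALTER DATABASE [MyDB] SET PARTNER = N'TCP://ABC002.mycompany.com:5022';
```

Also note that the second partner address in the original post (10.1.1.1) is the principal's own IP; from the principal, the partner address should point at the other server.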

DB Mirror

Hello,
SQL Server 2005 SP1 SE on Windows 2003 SP1, both principal and mirror.
SQL Server Express SP1 on Windows 2003 SP1 as witness.
In mirroring monitoring, the witness indicates it is disconnected from the mirror.
All else is green.
I can ping the witness from the mirror and connect via SSMS.
No errors in the witness's SQL Server errorlog.
What can I do to debug this? I have drilled into the status and I do not
see any errors.

Hi Joe
I think that this may be correct, see http://tinyurl.com/2zbgu8
John
"Joe" wrote:

> Hello,
> SQL Server 2005 SP1 SE on WINDOWS 2003 sp1 both principal and mirror
> SQL Server Express SP1 on WINDOWS 2003 sp1 Witness
> In mirror monitoring witness indicates disconnected from mirror.
> All else is green.
> Can ping witness from mirror and connect via MSSMS.
> No errors in Witness SQL server errorlog.
> What can I do to debug this? I have drilled into the status and I do not
> see any errors.
|||
Hi John.
Thanks for the reply.
I kinda assumed I had a connectivity problem. Any ideas on how to
debug/resolve it?
Joe
"John Bell" wrote:
> Hi Joe
> I think that this may be correct see http://tinyurl.com/2zbgu8
> John
> "Joe" wrote:
>
|||
Hi Joe
I am not a replication/mirroring expert. You may want to try the replication
newsgroup!
John
"Joe" wrote:
> Hi John.
> Thanks for the reply.
> I kinda assumed I had a connectivity problem. Any ideas on how to
> debug/resolve it?
> Joe
> "John Bell" wrote:
>
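One way to dig further than the monitor's status display is to query the mirroring catalog view directly on the principal or mirror instance; it exposes the witness connection state as seen from that server (a sketch, no names assumed):

```sql
-- One row per mirrored database on this instance, including whether
-- the witness is currently seen as CONNECTED or DISCONNECTED.
SELECT DB_NAME(database_id)         AS database_name,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_witness_name,
       mirroring_witness_state_desc -- CONNECTED / DISCONNECTED
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;
```

Comparing the result on the principal and the mirror can show whether only one side has lost its connection to the witness (often a firewall or endpoint permission issue on port 5022/the witness endpoint port).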


Db Migration with Identity constraints and their Fkey records

Hi,
We have to migrate data from three tables on 3 different servers into one
table on a fourth server. The scenario is as below:
ServerA/Database: One table named Access with a Pkey and identity field.
This identity field is an Fkey in another table.
ServerB/Database: One table named Access with a Pkey and identity field.
This identity field is an Fkey in another table.
ServerC/Database: One table named Access with a Pkey and identity field.
This identity field is an Fkey in another table.
We have to merge the data/tables of all three servers into one
database, that is, merge all records from all three servers into one table
on another server.
How should we take care of the identity issues and Fkey constraints, given that the table has
different identity values on each of the three servers and they are related to some other
tables via Fkey constraints? The table structure is the same on all servers.
Thanks in advance,
Sunny

Sunny
> We have to merge data/table of all the above three servers into one
> database. We have to merge all records from all three servers into one
> table
> which will be in another server.
Perhaps use a DTS package to move the data, but you will have to decide
how to store the data, because what if these tables have the same identity
values? It is up to you to decide. One option is to create a group of these
values (I mean the identities) from the three tables and generate a new PK for each
group.
"Sunny" <Sunny@.discussions.microsoft.com> wrote in message
news:760AF5E1-D44C-4888-BFB5-B4D61E4F7EB0@.microsoft.com...
> Hi,
> we have to migrate data from three tables in 3 different servers into one
> table on fourth servers.. and the scenariou is as below
> ServerA/Database : One table named as Access with Pkey and identity field.
> This identity field is Fkey in another table.
> ServerB/Database : One table named as Access with Pkey and identity field.
> This identity field is Fkey in another table.
> ServerC/Database : One table named as Access with Pkey and identity field.
> This identity field is Fkey in another table.
> We have to merge data/table of all the above three servers into one
> database. We have to merge all records from all three servers into one
> table
> which will be in another server.
> How should we take care of identity issues and fkey constraints as table
> has
> different identity in all three servers and they are related to some other
> tables as Fkey Constraints. table structure is same in all servers.
> Thanks in advance ,
> Sunny
>
|||
Thanks, but could you please provide more details on what exactly you mean here?
"Uri Dimant" wrote:

> Sunny
>
> Perhsps by using DTS Package to move the data but you will have to decide
> how to store the data because what if these table have the same identity
> value. It is up to you to decide. Ome option is to create a group of these
> values (i mean identities) from three table and generate a PK to each
> group.
>
> "Sunny" <Sunny@.discussions.microsoft.com> wrote in message
> news:760AF5E1-D44C-4888-BFB5-B4D61E4F7EB0@.microsoft.com...
>
>
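The "group the identities and generate a new PK" idea above can be sketched like this. All table, column, and linked-server names here are hypothetical, not from the thread; the point is that a mapping table issues fresh keys and the child rows are translated through it so the Fkey relationships survive the merge:

```sql
-- Mapping table: one fresh NewId per (source server, old identity) pair.
CREATE TABLE dbo.AccessMap (
    SourceServer sysname NOT NULL,
    OldId        int     NOT NULL,
    NewId        int     IDENTITY NOT NULL,
    PRIMARY KEY (SourceServer, OldId)
);

-- 1. Register every old key from one source (repeat per server).
INSERT INTO dbo.AccessMap (SourceServer, OldId)
SELECT 'ServerA', AccessId FROM ServerA.SrcDb.dbo.Access;

-- 2. Copy the parent rows under their newly issued keys.
SET IDENTITY_INSERT dbo.Access ON;
INSERT INTO dbo.Access (AccessId, SomeColumn)
SELECT m.NewId, a.SomeColumn
FROM ServerA.SrcDb.dbo.Access AS a
JOIN dbo.AccessMap AS m
  ON m.SourceServer = 'ServerA' AND m.OldId = a.AccessId;
SET IDENTITY_INSERT dbo.Access OFF;

-- 3. Copy the child rows, remapping the foreign key through the map.
INSERT INTO dbo.AccessDetail (AccessId, Detail)
SELECT m.NewId, d.Detail
FROM ServerA.SrcDb.dbo.AccessDetail AS d
JOIN dbo.AccessMap AS m
  ON m.SourceServer = 'ServerA' AND m.OldId = d.AccessId;
```

After all three sources are loaded, a DBCC CHECKIDENT reseed on the target keeps future inserts above the merged range.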

DB Migration Project Plan

Anyone has a DB Migration Project Plan template or an example? Pls email it to Nomthandazod@.gpg.gov.za
|||
Do you want to be a little more specific about what you need to do?
|||
Thanks for responding. I need to present a professional, detailed plan of how I'll be moving databases of about 20GB each to a new blade server. One will first have to be converted from a Focus db to SQL Server 2k, and then moved to the new machine. I have designed and done the testing for the conversion; I just need to present it and the plan to management!
|||
Just sent you a quick and easy way to do this; it has an attached zip file and is named "move logins". Hope this helps you. The good thing is it will maintain the SIDs of the users and db, so little as far as security will need to be changed/modified.

Ugh, I just read the reply you posted; I'm not sure if the one I sent will help you at all.

DB migration

I have to migrate several DBs between two SQL 2000 servers.
I'm planning to do it using the detach/attach method, which I've used
successfully several times.
This time, however, I also need to migrate the DB maintenance plans and jobs, as
well as DTS.
I've read several KB articles but couldn't find info regarding migration of
maintenance plans.
As for DTS, what's the best method to migrate the DTS package definitions?
Thanks

You can open up your packages and do a Save As.
stoney
"Stefano Colombo" wrote:

> I 've to migrate several DB between two SQL2000 servers .
> I'm planning to do it using the detach/attach method which I've been using
> successfully several times.
> This time , however , I need also to migrate DB maintanence plan, jobs , as
> well as DTS .
> I've read several KB but couldn't find info regarding migration of
> maintanence plan.
> As per DTS how's the best method to migrate the DTS definitions ?
> Thanks
>

DB Metric?

I would like to do a query that would list all of the user tables and
give me counts for columns and rows. Someone probably has already made
that query. Thanks!
-John

The undocumented sp_MSforeachtable stored procedure can be used to execute
most any command against each table in a database.
http://www.dbazine.com/sql/sql-articles/larsen5
For example:
EXEC sp_MSforeachtable 'SELECT ''?'', COUNT(*) FROM ?'
"John Baima" <john@.nospam.com> wrote in message
news:rajdq1hqdh7j39emuei3gnhqsa4pt1b352@.4ax.com...
>I would like to do a query that would list all of the user tables and
> give me counts for columns and rows. Someone probably has already made
> that query. Thanks!
> -John
|||
See if this helps:
http://groups.google.com/group/micr...2e?dmode=source
Anith
|||
"JT" <someone@.microsoft.com> wrote:

>The undocumented sp_MSforeachtable stored procedure can be used to execute
>most any command against each table in a database.
>http://www.dbazine.com/sql/sql-articles/larsen5
>For example:
>EXEC sp_MSforeachtable 'SELECT ''?'', COUNT(*) FROM ?'
>
That's interesting. How about the number of columns?
-John
|||
On Tue, 20 Dec 2005 11:10:46 GMT, John Baima wrote:

>"JT" <someone@.microsoft.com> wrote:
>
>That's interesting. How about the number of columns?
>-John
Hi John,
SELECT TABLE_NAME, COUNT(*)
FROM INFORMATION_SCHEMA.COLUMNS
GROUP BY TABLE_NAME
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)
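On SQL Server 2005, both counts can also come from the catalog views in a single query (a sketch; on SQL 2000 the older sysobjects/syscolumns/sysindexes tables play the same role, and note the row counts are cached values that can drift until DBCC UPDATEUSAGE runs):

```sql
-- One row per user table: its column count and row count.
SELECT t.name AS table_name,
       (SELECT COUNT(*)
        FROM sys.columns c
        WHERE c.object_id = t.object_id) AS column_count,
       SUM(p.rows)                       AS row_count
FROM sys.tables t
JOIN sys.partitions p
  ON p.object_id = t.object_id
 AND p.index_id IN (0, 1)   -- heap or clustered index only
GROUP BY t.name, t.object_id
ORDER BY t.name;
```

This avoids the per-table COUNT(*) scans that the sp_MSforeachtable approach performs.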

DB Master Key - Sql Server

This is not working for me on SQL Server 2005:

Code: ( text )

    USE XDataBase
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password'
    GO

However it was given on MS site:
http://msdn2.microsoft.com/en-us/library/ms174382.aspx
Please advise. Thanks.

Actually, the password must be more than 8 characters and contain upper/lower case letters plus numeric/non-alphanumeric characters (it is checked against the Windows password policy of the machine running SQL Server).

The password 'password' used here is exactly 8 lowercase characters;
I think this may be causing the problem.
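The same statement with a password that should pass a typical complexity policy (the database name is from the thread; the password itself is just an example):

```sql
USE XDataBase;
GO
-- 'Xk9!mPz#2rQ7' is only an illustrative policy-compliant password.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Xk9!mPz#2rQ7';
GO
```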

DB marked Suspect

I have SQL Express SP1 installed on several machines; each has one instance and each instance has more than one database.

Recently, I found out some DBs are marked Suspect. The MOM tool reported something like:

The database "your-DB" in the instance "Your-Instance" is in a critical state. The state is: Suspect.

I tried to run "DBCC CHECKDB", but it failed.

Can someone tell me what could cause the problem, and how to fix it?

Thanks,

Peter

This generally occurs when there is a bad restore from a backup. If this is your situation, then first attempt to do the restore again. The Suspect state is telling you that the database's MDF file has been corrupted.

If redoing the restore does not work, then run this:
DBCC CHECKDB with REPAIR_REBUILD

and if that fails, then this:

DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS
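A sketch of that repair sequence ("YourDb" is a placeholder). The REPAIR options require the database to be in single-user mode, and a Suspect database usually has to be put into EMERGENCY state before CHECKDB can touch it at all:

```sql
ALTER DATABASE YourDb SET EMERGENCY;
ALTER DATABASE YourDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

DBCC CHECKDB (YourDb, REPAIR_REBUILD);
-- Last resort only if REPAIR_REBUILD fails; this can discard damaged pages:
-- DBCC CHECKDB (YourDb, REPAIR_ALLOW_DATA_LOSS);

ALTER DATABASE YourDb SET MULTI_USER;
```

If you have a clean backup, restoring it is always preferable to REPAIR_ALLOW_DATA_LOSS.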

|||

Thanks for the reply.

In my case, after a few tries, I found out that simply restarting the instance's service fixed the problem. I did not really have time to debug the problem because a customer was waiting. If the problem appears again, I will look into it a little more deeply.

It seems that the problem occurred after additional databases were created in the same instance. BTW, we have SQL 2005 running on the same machine too, and the SQL Browser service runs under a different user than the SQL Express instance does.


DB Maintenance Plan, Optimizations

Hi All,
I have a database that has not received any type of maintenance apart from
daily backups. It has grown to over 30GB. I created some jobs to reindex
tables that I thought had high activity. Query performance increased
substantially, but I believe it could be better. The database has also shrunk by
about 4GB.
After running SHOWCONTIG on the tables, I could still see lots of
fragmentation on tables that were not included in the job above. So I
created a DB Maintenance Plan for optimizations only and chose to
reorganize data and index pages, keeping the original fill factor each index
was created with.
The job ran last night but did not finish, because it failed on an unknown
step after 27 mins. I ran the job manually and it ran for 50 mins. before I
manually stopped it. The actual database space decreased by another 7GB.
The transaction log backup could not be performed because there was not
enough disk space; the trans log was 23GB.
My questions are:
How can I ensure this optimization job will successfully run in the future?
Is there a way to make the trans log smaller after the optimization job is
run?
Should I be doing a full DB backup AFTER the optimization job?
Can I safely add "remove unused space from database files" to the
optimization job to shrink the allocated size of the database? I want
to ensure the database will still grow automatically if space is required.
Cheers!

First off, you should undo the shrink part of the maintenance plan. When you
reindex the tables you need lots of free space in the data and log files in
order for it to do it's job properly. If there isn't it has to grow the
files which is resource intensive. That process will leave free space which
the shrink part of the maintenance plan will not like and then it will
shrink the files. The shrinking process will fragment your tables and
indexes all over again. See here:
http://www.karaszi.com/SQLServer/info_dont_shrink.asp
You should leave the size at what ever it needs to be to do what it has to
do. The exception would be the log file. If it just keeps growing you
probably aren't backing up the log files on a regular basis. Full backups
are not enough if you are in the full recovery mode.
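As a rough illustration of regular log backups, a scheduled job step could look something like this. The database name and backup path are placeholders, not taken from the thread.

```sql
-- Sketch only: a log backup that can be scheduled as a SQL Agent job.
-- "YourDB" and the disk path are hypothetical; WITH NOINIT appends to
-- the backup file rather than overwriting it.
BACKUP LOG YourDB
    TO DISK = 'D:\SQLBackups\YourDB_log.trn'
    WITH NOINIT
```

Regular log backups in the full recovery model also keep the log file at a reasonable size instead of letting it grow indefinitely.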
Andrew J. Kelly SQL MVP
"AKing" <AKing@.discussions.microsoft.com> wrote in message
news:F9CBB7CC-A53C-498A-A678-E974558EF4C9@.microsoft.com...
> Hi All,
> I have a database that has not recieved any type of maintenance apart from
> daily backups. It has grown to over 30GB. I created some jobs to reindex
> tables that I thought were high activity. Query performance increased
> substantially but I believe it could be better. Database has also shruck
> about 4GB.
> After running a showcontig on the table, I could still see lots of
> fragmentation on tables that were not included in the job above. So I
> created a DB Maintenance Plan to for only optimizations and chose to
> reorganize data and index pages, and keep original fillfactor that the
> index
> was created with.
> The job ran last night but did not finish because it failed on an unknown
> step after 27 mins. I ran this job manually and it ran for 50 mins.
> before I
> manually stopped the job. The actual database space decreased another
> 7GB.
> The transaction log backup could not be performed becuase there was not
> enough disk space, the trans log was 23GB.
> My questions are:
> How can I enure this optimization job will successfully run in the future?
> Is there a way to make the trans log smaller after the optimization job
> is
> run?
> Should I be doing a Full DB Backup AFTER the optimization job?
> Can I shrink the allocated size of the database by adding to the
> optimization job to "remove unused space from database files" safely? I
> want
> to ensure the database will still grow automatically if space is required.
> Cheers!|||Thank you for your reply. You have provided some helpful insight.
I tried to run the optimization job (w/out shrinking the db) and I got the same
behaviour. I had to cancel the job after it had been running for 50 minutes.
Once again the translog was 21GB.
I understand that the reindex job can be intensive; this will only multiply
if the translog grows from its usual size (~1GB) to 21GB.
Is there a way to run this job w/out the side effect above?
Will the translogs always be this big when running the optimization?
Maybe the only way is to do a nightly reindex of portions of the database
until the entire database has been reindexed.
Cheers
"Andrew J. Kelly" wrote:

> First off, you should undo the shrink part of the maintenance plan. When you
> reindex the tables you need lots of free space in the data and log files in
> order for it to do its job properly. If there isn't, it has to grow the
> files, which is resource intensive. The reindex process will leave free space, which
> the shrink part of the maintenance plan will then remove, and the
> shrinking process will fragment your tables and
> indexes all over again. See here:
> http://www.karaszi.com/SQLServer/info_dont_shrink.asp
> You should leave the size at whatever it needs to be to do what it has to
> do. The exception would be the log file. If it just keeps growing, you
> probably aren't backing up the log files on a regular basis. Full backups
> are not enough if you are in the full recovery model.
> --
> Andrew J. Kelly SQL MVP
>
> "AKing" <AKing@.discussions.microsoft.com> wrote in message
> news:F9CBB7CC-A53C-498A-A678-E974558EF4C9@.microsoft.com...
>
>|||How large are the tables in the db? You might want to skip the actual
maintenance plan and just create a job with DBCC DBREINDEX instead, to
rebuild only the ones that are fragmented above a certain level. Or do a
portion of them each night. There is a script under DBCC SHOWCONTIG in
BooksOnLine that allows you to choose which tables will be reindexed or even
defragged.
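The BooksOnLine script referred to above follows this general pattern. The sketch below is illustrative only: the temp-table shape follows the WITH TABLERESULTS output, and the 30% threshold is an assumption, so verify both against your version of BOL before running it.

```sql
-- Simplified sketch: capture DBCC SHOWCONTIG output into a temp table,
-- then rebuild only the tables fragmented above a threshold (30% here).
CREATE TABLE #frag (
    ObjectName sysname, ObjectId int, IndexName sysname NULL, IndexId int,
    Lvl int, CountPages int, CountRows int, MinRecSize int, MaxRecSize int,
    AvgRecSize int, ForRecCount int, Extents int, ExtentSwitches int,
    AvgFreeBytes int, AvgPageDensity int, ScanDensity decimal(9,2),
    BestCount int, ActualCount int, LogicalFrag decimal(9,2),
    ExtentFrag decimal(9,2))

INSERT INTO #frag
EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES')

DECLARE @tbl sysname, @sql varchar(500)
DECLARE curFrag CURSOR FOR
    SELECT DISTINCT ObjectName FROM #frag WHERE LogicalFrag > 30.0
OPEN curFrag
FETCH NEXT FROM curFrag INTO @tbl
WHILE @@FETCH_STATUS = 0
BEGIN
    -- '' = all indexes on the table; fillfactor 0 keeps the original
    SET @sql = 'DBCC DBREINDEX (''' + @tbl + ''', '''', 0)'
    EXEC (@sql)
    FETCH NEXT FROM curFrag INTO @tbl
END
CLOSE curFrag
DEALLOCATE curFrag
DROP TABLE #frag
```

Scheduling this nightly against a subset of tables keeps each run (and its log growth) much smaller than reindexing everything at once.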
Andrew J. Kelly SQL MVP
"AKing" <AKing@.discussions.microsoft.com> wrote in message
news:F06D716A-0ECA-4EDE-8EBD-547715C543CC@.microsoft.com...
> Thank you for your reply. You have provided some helpful insight.
> I tried to run the optimization job (w/out shrinking db) and I got the
> same
> behaviour. I had to cancel the job after it had been running for 50
> minutes.
> Once again the translog was 21GB.
> I understand that the reindex job can be intensive; this will only multiply
> if the translog grows from its usual size (~1gb) to 21GB.
> Is there a way to run this job w/out the side effect above?
> Will the translogs always be this big when running the optimization?
> Maybe the only way is to have nightly reindex of portions of the database
> until it has reindexed the entire database.
> Cheers
>
> "Andrew J. Kelly" wrote:
>|||Hi Andrew,
The database allocation is ~36GB and at one time only had 50MB of free
space, although through reindexing some high-activity tables I have managed
to reduce the actual size of the database to ~25GB. I plan to shrink some of
the free space as I believe it is space wasted.
"Andrew J. Kelly" wrote:

> How large are the tables in the db? You might want to skip the actual
> maintenance plan and just create a job with DBCC DBREINDEX instead to just
> rebuild the ones that are fragmented above a certain level. Or just do a
> portion of them each night. There is a script under DBCC SHOWCONTIG in
> BooksOnLine to allow you to choose which tables will be reindexed or even
> defragged.
> --
> Andrew J. Kelly SQL MVP
>
> "AKing" <AKing@.discussions.microsoft.com> wrote in message
> news:F06D716A-0ECA-4EDE-8EBD-547715C543CC@.microsoft.com...
>
>

DB Maintenance/Optimizations

When setting up a DB Maintenance plan, what do users
normally do with the options in the Optimizations tab?
Does it matter what application is using the database?
(In this case, it is Microsoft Business Solutions-
Solomon.) Thanks.|||It matters if the application is 24 x 7. The optimization routines for the
MP use DBCC DBREINDEX and that will take your table off line for the
duration of the reindexing. If you have a maintenance window for this then
it may not be a problem. I usually suggest creating your own scheduled
jobs that do exactly what you need done and when instead of using the MP.
--
Andrew J. Kelly
SQL Server MVP
"Roger" <anonymous@.discussions.microsoft.com> wrote in message
news:05ca01c3cef5$e589f980$a001280a@.phx.gbl...
> When setting up a DB Maintenance plan, what do users
> normally do with the options in the Optimizations tab?
> Does it matter what application is using the database?
> (In this case, it is Microsoft Business Solutions-
> Solomon.) Thanks.|||Is that Maintenance Wizard an inside joke?
Seems to me it's a great way for less-knowledgeable 'dba's' to get bollixed
up.
Other than that, it's like your appendix - it's there but it serves no
useful purpose.
James Hokes
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:%23fn3IwxzDHA.3900@.tk2msftngp13.phx.gbl...
> It matters if the application is 24 x 7. The optimization routines for
> the
> MP use DBCC DBREINDEX and that will take your table off line for the
> duration of the reindexing. If you have a maintenance window for this
> then
> it may not be a problem. I usually suggest creating your own scheduled
> jobs that do exactly what you need done and when instead of using the MP.
> --
> Andrew J. Kelly
> SQL Server MVP
>
> "Roger" <anonymous@.discussions.microsoft.com> wrote in message
> news:05ca01c3cef5$e589f980$a001280a@.phx.gbl...
> > When setting up a DB Maintenance plan, what do users
> > normally do with the options in the Optimizations tab?
> > Does it matter what application is using the database?
> > (In this case, it is Microsoft Business Solutions-
> > Solomon.) Thanks.
That's what I think the biggest problem with using the Maintenance Wizard is.
When someone new to SQL Server uses the wizard to get going, they have no
knowledge of what is actually happening. So if something does go wrong, they
are not in a good position to fix it. If they took a few minutes to learn
how to create their own job to do the task, they would be much more likely to have
a better handle on things overall.
--
Andrew J. Kelly
SQL Server MVP
"James Hokes" <noemail@.noway.com> wrote in message
news:eUkRxb1zDHA.2872@.TK2MSFTNGP09.phx.gbl...
> Is that Maintenance Wizard an inside joke?
> Seems to me it's a great way for less-knowledgeable 'dba's' to get
> bollixed
> up.
> Other than that, it's like your appendix - it's there but it serves no
> useful purpose.
> James Hokes
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:%23fn3IwxzDHA.3900@.tk2msftngp13.phx.gbl...
> > It matters if the application is 24 x 7. The optimization routines for
> > the
> > MP use DBCC DBREINDEX and that will take your table off line for the
> > duration of the reindexing. If you have a maintenance window for this
> > then
> > it may not be a problem. I usually suggest creating your own scheduled
> > jobs that do exactly what you need done and when instead of using the
> > MP.
> >
> > --
> >
> > Andrew J. Kelly
> > SQL Server MVP
> >
> >
> > "Roger" <anonymous@.discussions.microsoft.com> wrote in message
> > news:05ca01c3cef5$e589f980$a001280a@.phx.gbl...
> > > When setting up a DB Maintenance plan, what do users
> > > normally do with the options in the Optimizations tab?
> > > Does it matter what application is using the database?
> > > (In this case, it is Microsoft Business Solutions-
> > > Solomon.) Thanks.
> >
> >
>

DB Maintenance Recommendations

I've read BOL on establishing a DB Maintenance Plan using the Wizard but it
really fails to make any real-world recommendations (that I can find) about
timing and options. I want to create a plan for a web application
production database of about 1.3GB in size with no scheduled downtime.
Are there any references/websites that list generally-accepted
recommendations for all of the options?
For example, how often should the system DBs be verified and backed up? How
often should indexes be rebuilt? Once a day? Once a week? Once a month? If
it runs an integrity check and tries to fix minor problems, won't that make
my DB (and my web application) unavailable to online users since it switches
to single-user mode? And so on.
Thanks for any pointers.|||Don Miller wrote:
> I've read BOL on establishing a DB Maintenance Plan using the Wizard but it
> really fails to make any real-world recommendations (that I can find) about
> timing and options. I want to create a plan for a web application
> production database of about 1.3GB in size with no scheduled downtime.
> Are there any references/websites that list generally-accepted
> recommendations for all of the options?
> For example, how often should the system DBs be verified and backed up? How
> often should indexes be rebuilt? Once a day? Once a week? Once a month? If
> it runs an integrity check and tries to fix minor problems, won't that make
> my DB (and my web application) unavailable to online users since it switches
> to single-user mode? And so on.
> Thanks for any pointers.
>
Finally, somebody sees through the wizards and realizes the complexity
they are hiding! Sorry, I hate those things... :-)
My suggestions:
1. Run weekly full backups, and frequent transaction log backups during
the week. I have a script that will do all of this for you:
http://realsqlguy.com/serendipity/archives/11-Send-In-Backup!.html
2. Rebuild indexes nightly or weekly, based on their level of
fragmentation. I have a script that I run nightly, rebuilding only
those that are fragmented beyond 30%:
http://realsqlguy.com/serendipity/archives/12-Humpty-Dumpty-Sat-On-A-Wall...html
3. As far as the integrity checks, I maintain a "maintenance" copy of
my databases, which is created by restoring the weekly full backup. Run
your DBCC checks against this database. If errors exist in that
database, they also exist in the live database. You can deal with them
in a planned outage in the live database. Having this maintenance
database also gives you a "test" area that you can use to analyze data
problems, performance problems, etc.
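A rough sketch of that restore-and-check cycle follows. The database name, backup path, and logical file names are placeholders; the MOVE targets must match the logical names of your own database.

```sql
-- Sketch: refresh a "maintenance" copy from the weekly full backup,
-- then run the integrity check against the copy instead of production.
RESTORE DATABASE YourDB_Maint
    FROM DISK = 'D:\SQLBackups\YourDB_full.bak'
    WITH REPLACE,
         MOVE 'YourDB_Data' TO 'D:\Maint\YourDB_Maint.mdf',
         MOVE 'YourDB_Log'  TO 'D:\Maint\YourDB_Maint.ldf'

DBCC CHECKDB ('YourDB_Maint') WITH NO_INFOMSGS
```

This keeps the CHECKDB load entirely off the live database while still detecting corruption that exists in the backup chain.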
Hope that helps!
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:44EDCBB5.5040808@.realsqlguy.com...
> 1. Run weekly full backups, and frequent transaction log backups during
> the week. I have a script that will do all of this for you:
> http://realsqlguy.com/serendipity/archives/11-Send-In-Backup!.html
I really can't afford to lose any data, but could settle for a day's worth
of loss, and have little space for hundreds of transaction logs. I was thinking
about complete backups nightly and differentials throughout the day.
> 2. Rebuild indexes nightly or weekly, based on their level of
> fragmentation. I have a script that I run nightly, rebuilding only
> those that are fragmented beyond 30%:
>
> http://realsqlguy.com/serendipity/archives/12-Humpty-Dumpty-Sat-On-A-Wall...html
I have never rebuilt indexes, and don't even know how to check whether they are
in trouble (there must be some way to check). My DB only deals with thousands
of rows, so there has been no noticeable degradation in performance.
> 3. As far as the integrity checks, I maintain a "maintenance" copy of
> my databases, which is created by restoring the weekly full backup. Run
> your DBCC checks against this database. If errors exist in that
> database, they also exist in the live database. You can deal with them
> in a planned outage in the live database. Having this maintenance
> database also gives you a "test" area that you can use to analyze data
> problems, performance problems, etc..
Great idea!|||Don Miller wrote:
> "Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
> news:44EDCBB5.5040808@.realsqlguy.com...
>> 1. Run weekly full backups, and frequent transaction log backups during
>> the week. I have a script that will do all of this for you:
>> http://realsqlguy.com/serendipity/archives/11-Send-In-Backup!.html
> I really can't afford to lose any data, but could settle for a day's worth
> loss, and have little space for hundreds of transaction logs. Was thinking
> about complete backups nightly and differential throughout the day.
>
You can modify my script to make it back up nightly. Find the first IF
statement, remove the entire line, and replace it with:
IF GETDATE() - @dtLastFullBackup > 1
>> 2. Rebuild indexes nightly or weekly, based on their level of
>> fragmentation. I have a script that I run nightly, rebuilding only
>> those that are fragmented beyond 30%:
> http://realsqlguy.com/serendipity/archives/12-Humpty-Dumpty-Sat-On-A-Wall...html
> I have not ever rebuilt indexes or even know how to check if they are in
> trouble (there must be some way to check). My DB only deals with thousands
> of rows and so there has been no noticeable degradation in performance.
>
This script will do all of that for you; you just have to schedule it to
run.
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||If you can't afford to loose data then how do you justify not having the
disk space for log backups? You can get a 300GB drive for $100.00 these
days. How much will it cost you if you loose data? With that said keep in
mind that a differential backup works by backing up the extents that have
had a change since the last full backup. The change can be just one bit out
of the entire 64K extent but it will back up the entire extent. If you
reindex all your tables your differential will most likely be the size of
your db. Something to think about.
--
Andrew J. Kelly SQL MVP
"Don Miller" <nospam@.nospam.com> wrote in message
news:u$ogvk5xGHA.4232@.TK2MSFTNGP05.phx.gbl...
> "Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
> news:44EDCBB5.5040808@.realsqlguy.com...
>> 1. Run weekly full backups, and frequent transaction log backups during
>> the week. I have a script that will do all of this for you:
>> http://realsqlguy.com/serendipity/archives/11-Send-In-Backup!.html
> I really can't afford to lose any data, but could settle for a day's worth
> loss, and have little space for hundreds of transaction logs. Was thinking
> about complete backups nightly and differential throughout the day.
>> 2. Rebuild indexes nightly or weekly, based on their level of
>> fragmentation. I have a script that I run nightly, rebuilding only
>> those that are fragmented beyond 30%:
> http://realsqlguy.com/serendipity/archives/12-Humpty-Dumpty-Sat-On-A-Wall...html
> I have not ever rebuilt indexes or even know how to check if they are in
> trouble (there must be some way to check). My DB only deals with thousands
> of rows and so there has been no noticeable degradation in performance.
>> 3. As far as the integrity checks, I maintain a "maintenance" copy of
>> my databases, which is created by restoring the weekly full backup. Run
>> your DBCC checks against this database. If errors exist in that
>> database, they also exist in the live database. You can deal with them
>> in a planned outage in the live database. Having this maintenance
>> database also gives you a "test" area that you can use to analyze data
>> problems, performance problems, etc..
> Great idea!
>|||I've been playing with your script (on my production db moved to my dev
machine) since I have been very remiss in maintaining my database. I ran the
'defrag' option on the db (all indexes were about 50% fragmented), then I
did a SHRINKDB, and saw the size of my .mdf (and backup files) go from 1.3GB
to around 200MB!
Then I read about DBREINDEX and saw that your script handled that too. So, I
ran the rebuild script (on a pristine copy of my prod db without doing the
defrag first), shrunk the database again, and again got the same reduction
in file size. However, when I ran your script in the report mode AFTER the
rebuild and shrink, almost ALL of the files are reported as 75-100%
fragmented!!! Also, there are indexes showing up in the report as being 100%
fragmented that I can't even find. They are all named like
NameOfTable.tNameOfTable with a 't' prefix rather than 'IX_' there.
Is there a reason that after a DBREINDEX all the indexes are reported as
almost 100% fragmented?
What are those indexes with a 't' in their name?
Thanks for any explanation.
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:44EDD97B.5050604@.realsqlguy.com...
> Don Miller wrote:
> > "Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
> > news:44EDCBB5.5040808@.realsqlguy.com...
> >
> >> 1. Run weekly full backups, and frequent transaction log backups
> >> during
> >> the week. I have a script that will do all of this for you:
> >> http://realsqlguy.com/serendipity/archives/11-Send-In-Backup!.html
> >
> > I really can't afford to lose any data, but could settle for a day's
> > worth
> > loss, and have little space for hundreds of transaction logs. Was
> > thinking
> > about complete backups nightly and differential throughout the day.
> >
> You can modify my script to make it backup nightly. Find the first IF
> statement, remove the entire line, and replace it with:
> IF GETDATE() - @.dtLastFullBackup > 1
> >> 2. Rebuild indexes nightly or weekly, based on their level of
> >> fragmentation. I have a script that I run nightly, rebuilding only
> >> those that are fragmented beyond 30%:
> >>
> >
> http://realsqlguy.com/serendipity/archives/12-Humpty-Dumpty-Sat-On-A-Wall...html
> >
> > I have not ever rebuilt indexes or even know how to check if they are in
> > trouble (there must be some way to check). My DB only deals with
> > thousands
> > of rows and so there has been no noticeable degradation in performance.
> >
> This script will do all of that for you, you just have to schedule it to
> run.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||You're right of course. My server is at a dedicated hosting site with RAID 5
(4 drives in the array) with a total of 30gb for two partitions, so it is
not quite that easy, or inexpensive to add even 50 more gigs to my capacity.
They basically want folks to get a new server instead, or will charge $100 a
MONTH more for the extra capacity.
I appreciate your thoughts.
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:eqp%23bb%23xGHA.4912@.TK2MSFTNGP03.phx.gbl...
> If you can't afford to loose data then how do you justify not having the
> disk space for log backups? You can get a 300GB drive for $100.00 these
> days. How much will it cost you if you loose data? With that said keep in
> mind that a differential backup works by backing up the extents that have
> had a change since the last full backup. The change can be just one bit
> out
> of the entire 64K extent but it will back up the entire extent. If you
> reindex all your tables your differential will most likely be the size of
> your db. Something to think about.
> --
> Andrew J. Kelly SQL MVP
> "Don Miller" <nospam@.nospam.com> wrote in message
> news:u$ogvk5xGHA.4232@.TK2MSFTNGP05.phx.gbl...
> > "Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
> > news:44EDCBB5.5040808@.realsqlguy.com...
> >
> >> 1. Run weekly full backups, and frequent transaction log backups
> >> during
> >> the week. I have a script that will do all of this for you:
> >> http://realsqlguy.com/serendipity/archives/11-Send-In-Backup!.html
> >
> > I really can't afford to lose any data, but could settle for a day's
> > worth
> > loss, and have little space for hundreds of transaction logs. Was
> > thinking
> > about complete backups nightly and differential throughout the day.
> >
> >> 2. Rebuild indexes nightly or weekly, based on their level of
> >> fragmentation. I have a script that I run nightly, rebuilding only
> >> those that are fragmented beyond 30%:
> >>
> >
> http://realsqlguy.com/serendipity/archives/12-Humpty-Dumpty-Sat-On-A-Wall...html
> >
> > I have not ever rebuilt indexes or even know how to check if they are in
> > trouble (there must be some way to check). My DB only deals with
> > thousands
> > of rows and so there has been no noticeable degradation in performance.
> >
> >> 3. As far as the integrity checks, I maintain a "maintenance" copy of
> >> my databases, which is created by restoring the weekly full backup.
> >> Run
> >> your DBCC checks against this database. If errors exist in that
> >> database, they also exist in the live database. You can deal with them
> >> in a planned outage in the live database. Having this maintenance
> >> database also gives you a "test" area that you can use to analyze data
> >> problems, performance problems, etc..
> >
> > Great idea!
> >
> >
>|||Can you post some of the actual results of the showcontig? You have to be
very careful about shrinking the data files. Shrink happens from the end of
the file inward and has to move any data near the end of the file to an
empty slot near the beginning. That shrinking process usually results in
fragmenting your indexes all over again in the process. A rebuild with DBCC
DBREINDEX requires about 1.2 times the size of your db in free space to do a
proper rebuild. This is due to the fact that it makes a copy first and then drops
the original indexes and renames them. So you always need lots of free space
for rebuilds to operate properly. The key is to not have too little, so that
the rebuild doesn't have to grow the files each time. In general there is no penalty for
too much free space, except in your case if you simply don't have the
room. Once you find the happy medium where you have enough space to do your daily
operations and can rebuild your indexes you will find that the fragmentation
will be better after a rebuild. With that said there are other factors in
play as well. If your db is that small I suspect you have a lot of tables or
indexes using mixed extents. Essentially, anything under 8 pages can have its
pages spread across as many as 8 different extents, where those 8 pages would
normally all go on 1 extent. If you have less than several hundred to a thousand pages in the
table I would not be too concerned with some fragmentation. Have a look at
these as well:
http://www.karaszi.com/SQLServer/info_dont_shrink.asp Shrinking
considerations
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
Index Defrag Best Practices 2000
http://www.sql-server-performance.com/dt_dbcc_showcontig.asp
Understanding DBCC SHOWCONTIG
http://www.sqlservercentral.com/columnists/jweisbecker/amethodologyfordeterminingfillfactors.asp
Fill Factors
http://www.sql-server-performance.com/gv_clustered_indexes.asp
Clustered Indexes
Andrew J. Kelly SQL MVP
"Don Miller" <nospam@.nospam.com> wrote in message
news:uUbZgwDyGHA.3512@.TK2MSFTNGP04.phx.gbl...
> I've been playing with your script (on my production db moved to my dev
> machine) since I have been very remiss in maintaining my database. I ran
> the
> 'defrag' option on the db (all indexes were about 50% fragmented), then I
> did a SHRINKDB, and saw the size of my .mdf (and backup files) go from
> 1.3GB
> to around 200MB!
> Then I read about DBREINDEX and saw that your script handled that too. So,
> I
> ran the rebuild script (on a pristine copy of my prod db without doing the
> defrag first), shrunk the database again, and again got the same reduction
> in file size. However, when I ran your script in the report mode AFTER the
> rebuild and shrink, almost ALL of the files are reported as 75-100%
> fragmented!!! Also, there are indexes showing up in the report as being
> 100%
> fragmented that I can't even find. They are all named like
> NameOfTable.tNameOfTable with a 't' prefix rather than 'IX_' there.
> Is there a reason that after a DBREINDEX all the indexes are reported as
> almost 100% fragmented?
> What are those indexes with a 't' in their name?
> Thanks for any explanation.
>
> "Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
> news:44EDD97B.5050604@.realsqlguy.com...
>> Don Miller wrote:
>> > "Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
>> > news:44EDCBB5.5040808@.realsqlguy.com...
>> >
>> >> 1. Run weekly full backups, and frequent transaction log backups
> during
>> >> the week. I have a script that will do all of this for you:
>> >> http://realsqlguy.com/serendipity/archives/11-Send-In-Backup!.html
>> >
>> > I really can't afford to lose any data, but could settle for a day's
> worth
>> > loss, and have little space for hundreds of transaction logs. Was
> thinking
>> > about complete backups nightly and differential throughout the day.
>> >
>> You can modify my script to make it backup nightly. Find the first IF
>> statement, remove the entire line, and replace it with:
>> IF GETDATE() - @.dtLastFullBackup > 1
>> >> 2. Rebuild indexes nightly or weekly, based on their level of
>> >> fragmentation. I have a script that I run nightly, rebuilding only
>> >> those that are fragmented beyond 30%:
>> >>
>> >
> http://realsqlguy.com/serendipity/archives/12-Humpty-Dumpty-Sat-On-A-Wall...html
>> >
>> > I have not ever rebuilt indexes or even know how to check if they are
>> > in
>> > trouble (there must be some way to check). My DB only deals with
> thousands
>> > of rows and so there has been no noticeable degradation in performance.
>> >
>> This script will do all of that for you, you just have to schedule it to
>> run.
>>
>> --
>> Tracy McKibben
>> MCDBA
>> http://www.realsqlguy.com
>|||"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:OCSStLEyGHA.3844@.TK2MSFTNGP06.phx.gbl...
First of all, I REALLY appreciate your expert help. These remarkable changes
in size could be because I shunned maintenance over a period of YEARS. It could
easily be that my DB is in actuality so small (<300MB) that performance
issues do not appear, and one of the problems was that fragmented indexes
were continually growing the size of the database unnecessarily.
Also, do the indexes with the small "t" in front of the table name maybe
have something to do with clustered indexes?
> Can you post some of the actual results of the showcontig?
This morning, I started over again after restoring to my dev machine, the
last full (simple) backup from production. The first thing I did was do the
DBREINDEX for all tables in the DB, then a SHRINKDB (all using the Simple
recovery model). The .mdf went 1.4gb to 280mb, and the .ldf went to 2mb.
Here's the showcontig of one table 'Patients' after all of that happened (is
there an easier to read way to post this?):
IndexName IndexId Level Pages Rows MinimumRecordSize MaximumRecordSize AverageRecordSize ForwardedRecords Extents ExtentSwitches AverageFreeBytes AveragePageDensity ScanDensity BestCount ActualCount LogicalFragmentation ExtentFragmentation
PK_Patients 1 0 176 NULL NULL NULL NULL NULL 0 23 NULL NULL 92 22 24 99 NULL
IX_Patients_ClinicalEDC 2 0 10 NULL NULL NULL NULL NULL 0 2 NULL NULL 67 2 3 90 NULL
IX_Patients_FirstName 3 0 11 NULL NULL NULL NULL NULL 0 4 NULL NULL 40 2 5 100 NULL
IX_Patients_GroupID 4 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 100 NULL
IX_Patients_LastName 5 0 11 NULL NULL NULL NULL NULL 0 4 NULL NULL 40 2 5 100 NULL
IX_Patients_ProviderID 6 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 100 NULL
IX_Patients_PersonID 7 0 8 NULL NULL NULL NULL NULL 0 1 NULL NULL 50 1 2 100 NULL
IX_Patients_LastNameSearch 8 0 11 NULL NULL NULL NULL NULL 0 3 NULL NULL 50 2 4 100 NULL
IX_Patients_FirstNameSearch 9 0 11 NULL NULL NULL NULL NULL 0 1 NULL NULL 100 2 2 82 NULL
IX_Patients_UserName 10 0 6 NULL NULL NULL NULL NULL 0 0 NULL NULL 100 1 1 83 NULL
IX_Patients_MaritalStatusID 12 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 88 NULL
IX_Patients_updateAuthorID 13 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 75 NULL
IX_Patients_ReligionID 14 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 75 NULL
IX_Patients_LanguageID 15 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 75 NULL
IX_Patients_PtRaceID 16 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 75 NULL
IX_Patients_AuthorID 17 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 75 NULL
IX_Patients_CtRaceID 39 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 75 NULL
IX_Patients_FacilityID 40 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL 33 1 3 75 NULL
IX_Patients_OxFinalAuthorDate 49 0 10 NULL NULL NULL NULL NULL 0 4 NULL NULL 40 2 5 90 NULL
IX_Patients_HospitalID 50 0 8 NULL NULL NULL NULL NULL 0 3 NULL NULL 25 1 4 75 NULL
tPatients 255 0 194 15709 79 493 91 0 42 41 546 93 60 25 42 99 90
> shrinking process usually results in
> fragmenting your indexes all over again in the process. A rebuild with DBCC
> DBREINDEX requires about 1.2 times the size of your db in free space to do a
> proper rebuild.
> So you always need lots of free space
> for rebuilds to operate properly.
Do you mean free space on the hard drive (which on my dev machine I have
30GB to spare and on the production I have 4-5x the size of the database) or
'space available' within the database file (like you see when one brings up
properties for the database)?
> If your db is that small I suspect you have a lot of tables or
> indexes using mixed extents.
Yes, I have some 90 tables. I don't know what mixed extents means at this
point.
> Essentially anything under 8 pages will share
> as much as 8 different extents for the 8 pages that normally would go all on
> 1 extent. If you have less than several hundred to a thousand pages in the
> table I would not be too concerned with some fragmentation.
My table with the most rows has about 1500 pages.
> Have a look at
> these as well:
>
> http://www.karaszi.com/SQLServer/info_dont_shrink.asp Shrinking
> considerations
> http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
> Index Defrag Best Practices 2000
> http://www.sql-server-performance.com/dt_dbcc_showcontig.asp
> Understanding DBCC SHOWCONTIG
> http://www.sqlservercentral.com/columnists/jweisbecker/amethodologyfordeterminingfillfactors.asp
> Fill Factors
> http://www.sql-server-performance.com/gv_clustered_indexes.asp
> Clustered Indexes
>
> --
> Andrew J. Kelly SQL MVP
> "Don Miller" <nospam@.nospam.com> wrote in message
> news:uUbZgwDyGHA.3512@.TK2MSFTNGP04.phx.gbl...
> > I've been playing with your script (on my production db moved to my dev
> > machine) since I have been very remiss in maintaining my database. I ran the
> > 'defrag' option on the db (all indexes were about 50% fragmented), then I
> > did a SHRINKDB, and saw the size of my .mdf (and backup files) go from 1.3GB
> > to around 200MB!
> >
> > Then I read about DBREINDEX and saw that your script handled that too. So, I
> > ran the rebuild script (on a pristine copy of my prod db without doing the
> > defrag first), shrunk the database again, and again got the same reduction
> > in file size. However, when I ran your script in the report mode AFTER the
> > rebuild and shrink, almost ALL of the files are reported as 75-100%
> > fragmented!!! Also, there are indexes showing up in the report as being 100%
> > fragmented that I can't even find. They are all named like
> > NameOfTable.tNameOfTable with a 't' prefix rather than 'IX_' there.
> >
> > Is there a reason that after a DBREINDEX all the indexes are reported as
> > almost 100% fragmented?
> >
> > What are those indexes with a 't' in their name?
> >
> > Thanks for any explanation.
> >
> >
> > "Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
> > news:44EDD97B.5050604@.realsqlguy.com...
> >> Don Miller wrote:
> >> > "Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
> >> > news:44EDCBB5.5040808@.realsqlguy.com...
> >> >
> >> >> 1. Run weekly full backups, and frequent transaction log backups
> > during
> >> >> the week. I have a script that will do all of this for you:
> >> >> http://realsqlguy.com/serendipity/archives/11-Send-In-Backup!.html
> >> >
> >> > I really can't afford to lose any data, but could settle for a day's worth
> >> > of loss, and have little space for hundreds of transaction logs. Was
> >> > thinking about complete backups nightly and differential throughout the day.
> >> >
> >>
> >> You can modify my script to make it backup nightly. Find the first IF
> >> statement, remove the entire line, and replace it with:
> >>
> >> IF GETDATE() - @.dtLastFullBackup > 1
> >>
> >> >> 2. Rebuild indexes nightly or weekly, based on their level of
> >> >> fragmentation. I have a script that I run nightly, rebuilding only
> >> >> those that are fragmented beyond 30%:
> >> >>
> >> >
> >> > http://realsqlguy.com/serendipity/archives/12-Humpty-Dumpty-Sat-On-A-Wall...html
> >> >
> >> > I have not ever rebuilt indexes or even know how to check if they are in
> >> > trouble (there must be some way to check). My DB only deals with thousands
> >> > of rows and so there has been no noticeable degradation in performance.
> >> >
> >>
> >> This script will do all of that for you, you just have to schedule it to
> >> run.
> >>
> >>
> >>
> >> --
> >> Tracy McKibben
> >> MCDBA
> >> http://www.realsqlguy.com
> >
> >
>|||Maybe this is simpler (after DBREINDEX and SHRINKDB):
DBCC SHOWCONTIG scanning 'PtHxEvents' table...
Table: 'PtHxEvents' (1366295927); index ID: 1, database ID: 15
TABLE level scan performed.
- Pages Scanned........................: 1524
- Extents Scanned.......................: 194
- Extent Switches.......................: 194
- Avg. Pages per Extent..................: 7.9
- Scan Density [Best Count:Actual Count]......: 97.95% [191:195]
- Logical Scan Fragmentation ..............: 99.87%
- Extent Scan Fragmentation ...............: 12.89%
- Avg. Bytes Free per Page................: 33.2
- Avg. Page Density (full)................: 99.59%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
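One hedged explanation for a number like 99.87% right after a rebuild: DBCC SHRINKDATABASE relocates pages from the end of the file toward the front one at a time, which tends to leave a freshly rebuilt index in close to reverse page order. If the goal of the shrink is only to give back unused space, a sketch that avoids moving any pages (the database name here is an example, not from the thread):

```sql
-- Release free space at the end of the files to the OS without relocating
-- pages, so the ordering DBCC DBREINDEX just created is left intact.
DBCC SHRINKDATABASE (MyDb, TRUNCATEONLY)
```

The trade-off is that TRUNCATEONLY reclaims less space than a full shrink, since it cannot consolidate free space scattered through the middle of the file.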
"Don Miller" <nospam@.nospam.com> wrote in message
news:uo0mBoEyGHA.356@.TK2MSFTNGP02.phx.gbl...
>|||Don,
I really can't see where the 99% Logical fragmentation is coming from. I
would try issuing a DBCC DBREINDEX on the table again and see what it says.
How many files do you have in the Filegroup where this table lives?
--
Andrew J. Kelly SQL MVP
"Don Miller" <nospam@.nospam.com> wrote in message
news:%23Jo$O1EyGHA.2220@.TK2MSFTNGP02.phx.gbl...
>|||Don,
I can't tell what those indexes with the t are, but most likely they are
statistics and not indexes. But for this Patients table I would get rid of
the unused indexes because boy you sure have a lot of them<g>. If the index
has a very low selectivity (meaning there are a lot of the same values) the
nonclustered index is most likely not even being used. I don't have the
table DDL so this is a guess but columns such as Race, Religion, marital
status and even HospitalID probably have a lot of rows that match the same
value. For instance, let's take Marital Status: I bet most values match 30% or more
of the rows. A nonclustered index for a simple lookup is usually useless
after just a few %, sometimes as low as 2 or 3% even. What about the first &
last name indexes? There is a first & last name search index as well.
What is the difference between the two? Getting rid of unused indexes may
free up some room as well.
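A quick way to test the selectivity point, sketched against the column and index names visible in the SHOWCONTIG output earlier in the thread (adjust to the real schema before running anything):

```sql
-- How much of the table does the most common MaritalStatusID cover?
-- If it is more than a few percent, the nonclustered index is suspect.
SELECT TOP 1
       MaritalStatusID,
       COUNT(*) AS matching_rows,
       COUNT(*) * 100.0 / (SELECT COUNT(*) FROM Patients) AS pct_of_table
FROM Patients
GROUP BY MaritalStatusID
ORDER BY COUNT(*) DESC

-- If the index is not earning its keep, reclaim the space:
DROP INDEX Patients.IX_Patients_MaritalStatusID
```

Checking actual usage before dropping is still wise; this only flags candidates.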
And by free space I mean in the data file, not on the hard drive. The first link
I sent should clear that up for you.
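Free space inside the database files (as opposed to on the disk) can be checked like this on SQL Server 2000/2005:

```sql
-- Whole-database summary: reserved, data, index, and unallocated space.
EXEC sp_spaceused

-- Per-file breakdown: sizes are in 8 KB pages, so divide by 128 for MB.
SELECT name,
       size / 128                              AS file_size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128   AS used_mb
FROM sysfiles
```

The gap between file_size_mb and used_mb is the in-file free space a rebuild can draw on.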
--
Andrew J. Kelly SQL MVP
"Don Miller" <nospam@.nospam.com> wrote in message
news:uo0mBoEyGHA.356@.TK2MSFTNGP02.phx.gbl...
>|||Again, what I did was a DBREINDEX on a database that had never had this done
before (using Tracy McKibben's script to rebuild all indexes - recovered a
LOT of pages or potential disk space), then SHRINKDB (recovered the file
space on the hard drive), and then I found the very high Logical Scan
Fragmentation (after the shrink). I then did a defrag of all tables and
indices using INDEXDEFRAG (using the script from BOL). Then for the same
example table that I did a SHOWCONTIG before (below) I got this new report
(and it looks pretty good to me as far as scan density, fragmentation,
etc.). So, I *think* I'm OK with a final db size one-sixth the size of what
I had before, which already makes backups/restores about six times faster and
speeds up my transfer of backup files to a remote online service as well.
DBCC SHOWCONTIG scanning 'PtHxEvents' table...
Table: 'PtHxEvents' (1366295927); index ID: 1, database ID: 15
TABLE level scan performed.
- Pages Scanned........................: 1524
- Extents Scanned.......................: 194
- Extent Switches.......................: 193
- Avg. Pages per Extent..................: 7.9
- Scan Density [Best Count:Actual Count]......: 98.45% [191:194]
- Logical Scan Fragmentation ..............: 0.52%
- Extent Scan Fragmentation ...............: 12.89%
- Avg. Bytes Free per Page................: 33.2
- Avg. Page Density (full)................: 99.59%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
> How many files do you have in the Filegroup where this table lives?
I wouldn't even know where to look.
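The file and filegroup layout is easy to inspect with standard commands; a quick sketch (sysfiles and sysfilegroups are the SQL 2000-era per-database system tables, still present in 2005):

```sql
-- All files behind the current database, with sizes and paths.
EXEC sp_helpfile

-- Files per filegroup (the join skips the log file, which has no filegroup).
SELECT g.groupname, COUNT(*) AS FileCount
FROM sysfilegroups g
JOIN sysfiles f ON f.groupid = g.groupid
GROUP BY g.groupname
```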
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:ec85Z3FyGHA.3492@.TK2MSFTNGP02.phx.gbl...
> Don,
> I really can't see where the 99% Logical fragmentation is coming from. I
> would try issuing a DBCC DBREINDEX on the table again and see what it says.
> How many files do you have in the Filegroup where this table lives?
> --
> Andrew J. Kelly SQL MVP
> "Don Miller" <nospam@.nospam.com> wrote in message
> news:%23Jo$O1EyGHA.2220@.TK2MSFTNGP02.phx.gbl...
> > Maybe this is simpler (after DBREINDEX and SHRINKDB):
> >
> > DBCC SHOWCONTIG scanning 'PtHxEvents' table...
> > Table: 'PtHxEvents' (1366295927); index ID: 1, database ID: 15
> > TABLE level scan performed.
> > - Pages Scanned........................: 1524
> > - Extents Scanned.......................: 194
> > - Extent Switches.......................: 194
> > - Avg. Pages per Extent..................: 7.9
> > - Scan Density [Best Count:Actual Count]......: 97.95% [191:195]
> > - Logical Scan Fragmentation ..............: 99.87%
> > - Extent Scan Fragmentation ...............: 12.89%
> > - Avg. Bytes Free per Page................: 33.2
> > - Avg. Page Density (full)................: 99.59%
> > DBCC execution completed. If DBCC printed error messages, contact your
> > system administrator.
>|||Again, I appreciate your comments.
> But for this Patients table I would get rid of the unused indexes because
> boy you sure have a lot of them <g>. If the index has a very low selectivity
> (meaning there are a lot of the same values) the nonclustered index is most
> likely not even being used.
I *think* the reason I have so many indices is that I read somewhere, at some
time, by someone I trusted, that if a column is a foreign key that is EVER
used in a join to the primary key table (or in a WHERE clause), it should be
indexed. If I misread or misunderstood that premise, then I do have a LOT of
possibly unnecessary indices.
> I don't have the
> table DDL so this is a guess but columns such as Race, Religion, marital
> status and even HospitalID probably have a lot of rows that match the same
> value.
Yes.
> For instance let's take Marital Status. I bet most match 30 or more %
> of the rows. A nonclustered index for a simple lookup is usually useless
> after just a few %, sometimes as low as 2 or 3% even.
If I were looking for all patients who are married (linked to a primary key
table with an ID) and maybe are Caucasian, why wouldn't indexing all
patients by marital status (and race) in the Patient table be faster for
that query?
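One way to settle that question is to measure the column's selectivity directly: once a predicate matches more than a few percent of the rows, the optimizer will generally scan the clustered index rather than use the nonclustered one. A sketch, using the table and column names from this thread as hypothetical examples:

```sql
-- If the most common value covers, say, 30% of the table, a nonclustered
-- index on this column alone is unlikely ever to be chosen for lookups.
SELECT MaritalStatusID,
       COUNT(*) AS RowsWithValue,
       100.0 * COUNT(*) / (SELECT COUNT(*) FROM Patients) AS PctOfTable
FROM Patients
GROUP BY MaritalStatusID
ORDER BY RowsWithValue DESC
```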
> What about the first &
> last name indexes? There is a first & last name search index as well.
> What is the difference between the two?
We actually encrypt all patient-identifying information like first and last
name, and since searching by patient name is the most frequently used
function in our application, we essentially denormalized the search strings
rather than do that on the fly with encrypting and decrypting (using
asynchronous calls to the server with Ajax).
> And by free space I mean in the data file, not the hard drive. The first
> link I sent should clear that up for you.
When I look at db properties after all of this reindexing and shrinking, it
says the available space is close to 0. I have used the defaults, with the
database set to grow automatically by 10%.
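The 'space available' figure can also be read directly from the database with standard commands (the FILEPROPERTY call reports 8 KB pages, hence the division by 128 to convert to MB):

```sql
-- Database-level totals: reserved size and unallocated (free) space.
EXEC sp_spaceused

-- Per-file view: allocated size vs. space actually used.
SELECT name,
       size / 128 AS SizeMB,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS UsedMB
FROM sysfiles
```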
>|||That is good that you don't have much fragmentation but be aware there are
differences between INDEXDEFRAG and DBREINDEX. See this for more details:
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
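To make the distinction concrete, the two commands look like this in SQL 2000 syntax (the table name is borrowed from earlier in the thread; the fill factor is illustrative):

```sql
-- Offline rebuild: recreates the index pages from scratch. Needs free
-- space to work in, but removes logical fragmentation completely.
DBCC DBREINDEX ('PtHxEvents', '', 90)   -- '' = all indexes, 90 = fill factor

-- Online defrag: compacts and reorders existing leaf pages in place.
-- Interruptible and low-impact, but less thorough than a rebuild.
DBCC INDEXDEFRAG (0, 'PtHxEvents', 1)   -- 0 = current db, index ID 1
```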
I wouldn't worry about the files. If you don't know, you probably never
created anything other than the defaults.
--
Andrew J. Kelly SQL MVP
>> >> > > thousands
>> >> > >> > of rows and so there has been no noticeable degradation in
>> >> performance.
>> >> > >> >
>> >> > >>
>> >> > >> This script will do all of that for you, you just have to
>> >> > >> schedule
>> >> > >> it
>> >> to
>> >> > >> run.
>> >> > >>
>> >> > >>
>> >> > >>
>> >> > >> --
>> >> > >> Tracy McKibben
>> >> > >> MCDBA
>> >> > >> http://www.realsqlguy.com
>> >> > >
>> >> > >
>> >> >
>> >> >
>> >>
>> >>
>> >
>> >
>>
>|||The index mainly needs to be on the other table being joined, more than on the
main one. If you are joining Patient with marital status, the WHERE clause
should be against the Patient table (i.e., Patient ID), which needs an index.
But for marital status the index will need to be on the Marital Status table
for the PK. If you have RI (referential integrity) between the two you may need both.
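To make the placement concrete, here is a minimal sketch of the arrangement Andrew describes. All table, column, and constraint names are assumed for illustration; they are not taken from the actual schema.

```sql
-- Sketch only: index the lookup table's PK (the join target) and the
-- column the WHERE clause actually filters on.
CREATE TABLE MaritalStatus (
    MaritalStatusID int NOT NULL
        CONSTRAINT PK_MaritalStatus PRIMARY KEY,  -- clustered index for the join
    Description varchar(50) NOT NULL
);

-- The query below filters Patients by PatientID, so that column needs an index
-- (here, assumed to be the table's primary key already):
SELECT p.LastName, ms.Description
FROM Patients p
JOIN MaritalStatus ms ON ms.MaritalStatusID = p.MaritalStatusID
WHERE p.PatientID = 12345;
```

With referential integrity declared between the two tables, an index on `Patients.MaritalStatusID` can also help the engine verify the FK cheaply, which is the "you may need both" case.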
> If I were looking for all patients who are married (linked to a primary
> key
> table with an ID) and maybe are Caucasian, why wouldn't indexing all
> patients by marital status (and race) in the Patient table be faster for
> that query?
That would be faster IF the number of rows matching your query were in the few-percent
range I mentioned. But if 50% are married and Caucasian, SQL Server will scan
the table instead of using the index. You can see what it uses with the
graphical showplan in Query Analyzer. There is no need to guess.
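Besides the graphical showplan, the text showplan works too and is easy to capture. A sketch, with the predicate columns assumed from the Patients table discussed in this thread:

```sql
-- Show the estimated plan without executing the query (SQL 2000/2005).
SET SHOWPLAN_TEXT ON;
GO
SELECT PatientID
FROM Patients
WHERE MaritalStatusID = 2;   -- low-selectivity predicate
GO
SET SHOWPLAN_TEXT OFF;
GO
-- A low-selectivity predicate like this will typically show a
-- Clustered Index Scan on Patients; a highly selective predicate
-- (e.g. on PatientID) shows an Index Seek instead.
```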
Occasionally you need to run DBCC UPDATEUSAGE on the db to keep the free
space stats up to date. Try that and then look at the properties. You may
also want to right click on the database node in EM and choose View -
Taskpad to see how much data / free space there is for each file and the log
file.
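The two steps Andrew mentions can also be done entirely in a query window. The database name is a placeholder:

```sql
-- Correct the space-usage metadata, then report data vs. free space.
USE myDB;
GO
DBCC UPDATEUSAGE (0);   -- 0 = the current database
GO
EXEC sp_spaceused;      -- database_size, unallocated space, data/index sizes
GO
```

This gives the same data/free-space breakdown as the EM Taskpad view, without the GUI.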
--
Andrew J. Kelly SQL MVP
"Don Miller" <nospam@.nospam.com> wrote in message
news:ewOY2DHyGHA.3844@.TK2MSFTNGP06.phx.gbl...
> Again, I appreciate your comments.
>> But for this Patients table I would get rid of
>> the unused indexes because boy you sure have a lot of them<g>. If the
> index
>> has a very low selectivity (meaning there are a lot of the same values)
> the
>> nonclustered index is most likely not even being used.
> I *think* the reason I have so many indices is that I read somewhere at
> some
> time by someone I trusted, that if a column is a foreign key EVER used in
> a
> join to the primary key table (or in a WHERE clause), it should be
> indexed.
> If I misread or misunderstood that premise than I do have a LOT of
> possibly
> unnecessary indices.
>> I don't have the
>> table DDL so this is a guess but columns such as Race, Religion, marital
>> status and even HospitalID probably have a lot of rows that match the
>> same
>> value.
> Yes.
>> For instance let take Marital Status. I bet most match 30 or more %
>> of the rows. A nonclustered index for a simple lookup is usually useless
>> after just a few %, sometimes as low as 2 or 3% even.
> If I were looking for all patients who are married (linked to a primary
> key
> table with an ID) and maybe are Caucasian, why wouldn't indexing all
> patients by marital status (and race) in the Patient table be faster for
> that query?
>> What about the first &
>> last name indexes? There is a first & last name search index as well.
>> What is the difference between the two?
> We actually encrypt all patient-identifying information like first and
> last
> name, and since searching by patient name is the most frequently used
> function in our application, we essentially denormalized the search
> strings
> rather than do that on the fly with encrypting and decrypting (using
> asynchronous calls to the server with Ajax).
>> And by free space I mean in the data file not the hard drive. The first
> link
>> I sent should clear that up for you.
> When I look at db properties after all of this reindexing and shrinking it
> says available space is like close to 0. I have used the defaults for the
> database to grow automatically by 10%.
>> --
>> Andrew J. Kelly SQL MVP
>> "Don Miller" <nospam@.nospam.com> wrote in message
>> news:uo0mBoEyGHA.356@.TK2MSFTNGP02.phx.gbl...
>> > "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>> > news:OCSStLEyGHA.3844@.TK2MSFTNGP06.phx.gbl...
>> >
>> > First of all, I REALLY appreciate your expert help. These remarkable
>> > changes
>> > in size could be that I shunned maintenance over a period of YEARS. It
>> > could
>> > easily be that my DB is in actuality so small (<300mb) that performance
>> > issues do not appear and one of the problems was that fragmented
>> > indexes
>> > were continually growing the size of the database unnecessarily.
>> >
>> > Also, are the indexes with the small "t" in front of the table name
> maybe
>> > have something to do with clustered indexes?
>> >
>> >> Can you post some of the actual results of the showcontig?
>> >
>> > This morning, I started over again after restoring to my dev machine,
> the
>> > last full (simple) backup from production. The first thing I did was do
>> > the
>> > DBREINDEX for all tables in the DB, then a SHRINKDB (all using the
> Simple
>> > recovery model). The .mdf went 1.4gb to 280mb, and the .ldf went to
>> > 2mb.
>> > Here's the showcontig of one table 'Patients' after all of that
>> > happened
>> > (is
>> > there an easier to read way to post this?):
>> >
>> > IndexName IndexId Level Pages Rows MinimumRecordSize
>> > MaximumRecordSize
>> > AverageRecordSize ForwardedRecords Extents ExtentSwitches
> AverageFreeBytes
>> > AveragePageDenisty ScanDensity BestCount ActualCount
> LogicalFragmentation
>> > ExtentFragmentation
>> > PK_Patients 1 0 176 NULL NULL NULL NULL NULL 0 23 NULL NULL 92 22
> 24
>> > 99 NULL
>> > IX_Patients_ClinicalEDC 2 0 10 NULL NULL NULL NULL NULL 0 2 NULL
> NULL
>> > 67 2 3 90 NULL
>> > IX_Patients_FirstName 3 0 11 NULL NULL NULL NULL NULL 0 4 NULL
>> > NULL
>> > 40
>> > 2 5 100 NULL
>> > IX_Patients_GroupID 4 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL
>> > 33
> 1
>> > 3 100 NULL
>> > IX_Patients_LastName 5 0 11 NULL NULL NULL NULL NULL 0 4 NULL NULL
> 40
>> > 2 5 100 NULL
>> > IX_Patients_ProviderID 6 0 8 NULL NULL NULL NULL NULL 0 2 NULL
>> > NULL
>> > 33
>> > 1 3 100 NULL
>> > IX_Patients_PersonID 7 0 8 NULL NULL NULL NULL NULL 0 1 NULL NULL
> 50
>> > 1
>> > 2 100 NULL
>> > IX_Patients_LastNameSearch 8 0 11 NULL NULL NULL NULL NULL 0 3
>> > NULL
>> > NULL 50 2 4 100 NULL
>> > IX_Patients_FirstNameSearch 9 0 11 NULL NULL NULL NULL NULL 0 1
> NULL
>> > NULL 100 2 2 82 NULL
>> > IX_Patients_UserName 10 0 6 NULL NULL NULL NULL NULL 0 0 NULL NULL
>> > 100
>> > 1 1 83 NULL
>> > IX_Patients_MaritalStatusID 12 0 8 NULL NULL NULL NULL NULL 0 2
> NULL
>> > NULL 33 1 3 88 NULL
>> > IX_Patients_updateAuthorID 13 0 8 NULL NULL NULL NULL NULL 0 2
>> > NULL
>> > NULL 33 1 3 75 NULL
>> > IX_Patients_ReligionID 14 0 8 NULL NULL NULL NULL NULL 0 2 NULL
> NULL
>> > 33 1 3 75 NULL
>> > IX_Patients_LanguageID 15 0 8 NULL NULL NULL NULL NULL 0 2 NULL
> NULL
>> > 33 1 3 75 NULL
>> > IX_Patients_PtRaceID 16 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL
> 33
>> > 1 3 75 NULL
>> > IX_Patients_AuthorID 17 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL
> 33
>> > 1 3 75 NULL
>> > IX_Patients_CtRaceID 39 0 8 NULL NULL NULL NULL NULL 0 2 NULL NULL
> 33
>> > 1 3 75 NULL
>> > IX_Patients_FacilityID 40 0 8 NULL NULL NULL NULL NULL 0 2 NULL
> NULL
>> > 33 1 3 75 NULL
>> > IX_Patients_OxFinalAuthorDate 49 0 10 NULL NULL NULL NULL NULL 0 4
>> > NULL NULL 40 2 5 90 NULL
>> > IX_Patients_HospitalID 50 0 8 NULL NULL NULL NULL NULL 0 3 NULL
> NULL
>> > 25 1 4 75 NULL
>> > tPatients 255 0 194 15709 79 493 91 0 42 41 546 93 60 25 42 99 90
>> >
>> >
>> >> shrinking process usually results in
>> >> fragmenting your indexes all over again in the process. A rebuild with
>> > DBCC
>> >> DBREINDEX requires about 1.2 times the size of your db in free space
>> >> to
>> >> do
>> > a
>> >> proper rebuild.
>> >>
>> >> So you always need lots of free space
>> >> for rebuilds to operate properly.
>> >
>> > Do you mean free space on the hard drive (which on my dev machine I
>> > have
>> > 30gb to spare and on the production I have 4-5x the size of the
> database)
>> > or
>> > 'space available' within the database file (like you see when one
>> > brings
>> > up
>> > properties for the database)?
>> >
>> >> If your db is that small I suspect you have a lot of tables or
>> >> indexes using mixed extents.
>> >
>> > Yes, I have some 90 tables. I don't know what mixed extents means at
> this
>> > point.
>> >
>> >> Essentially anything under 8 pages will share
>> >> as much as 8 different extents for the 8 pages that normally would go
> all
>> > on
>> >> 1 extent. If you have less than several hundred to a thousand pages in
>> >> the
>> >> table I would not be too concerned with some fragmentation.
>> >
>> > My table with the most rows has about 1500 pages
>> >
>> >
>> >
>> >
>> >
>> > Have a look at
>> >> these as well:
>> >>
>> >>
>> >> http://www.karaszi.com/SQLServer/info_dont_shrink.asp Shrinking
>> >> considerations
>> >>
>> >>
>> >
> http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
>> >> Index Defrag Best Practices 2000
>> >> http://www.sql-server-performance.com/dt_dbcc_showcontig.asp
>> >> Understanding DBCC SHOWCONTIG
>> >>
>> >
> http://www.sqlservercentral.com/columnists/jweisbecker/amethodologyfordeterminingfillfactors.asp
>> >> Fill Factors
>> >> http://www.sql-server-performance.com/gv_clustered_indexes.asp
>> >> Clustered Indexes
>> >>
>> >>
>> >>
>> >> --
>> >> Andrew J. Kelly SQL MVP
>> >>
>> >> "Don Miller" <nospam@.nospam.com> wrote in message
>> >> news:uUbZgwDyGHA.3512@.TK2MSFTNGP04.phx.gbl...
>> >> > I've been playing with your script (on my production db moved to my
> dev
>> >> > machine) since I have been very remiss in maintaining my database. I
>> >> > ran
>> >> > the
>> >> > 'defrag' option on the db (all indexes were about 50% fragmented),
> then
>> > I
>> >> > did a SHRINKDB, and saw the size of my .mdf (and backup files) go
> from
>> >> > 1.3GB
>> >> > to around 200MB!
>> >> >
>> >> > Then I read about DBREINDEX and saw that your script handled that
> too.
>> > So,
>> >> > I
>> >> > ran the rebuild script (on a pristine copy of my prod db without
> doing
>> > the
>> >> > defrag first), shrunk the database again, and again got the same
>> > reduction
>> >> > in file size. However, when I ran your script in the report mode
> AFTER
>> > the
>> >> > rebuild and shrink, almost ALL of the files are reported as 75-100%
>> >> > fragmented!!! Also, there are indexes showing up in the report as
> being
>> >> > 100%
>> >> > fragmented that I can't even find. They are all named like
>> >> > NameOfTable.tNameOfTable with a 't' prefix rather than 'IX_' there.
>> >> >
>> >> > Is there a reason that after a DBREINDEX all the indexes are
>> >> > reported
>> >> > as
>> >> > almost 100% fragmented?
>> >> >
>> >> > What are those indexes with a 't' in their name?
>> >> >
>> >> > Thanks for any explanation.
>> >> >
>> >> >
>> >> > "Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
>> >> > news:44EDD97B.5050604@.realsqlguy.com...
>> >> >> Don Miller wrote:
>> >> >> > "Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
>> >> >> > news:44EDCBB5.5040808@.realsqlguy.com...
>> >> >> >
>> >> >> >> 1. Run weekly full backups, and frequent transaction log
>> >> >> >> backups
>> >> > during
>> >> >> >> the week. I have a script that will do all of this for you:
>> >> >> >>
> http://realsqlguy.com/serendipity/archives/11-Send-In-Backup!.html
>> >> >> >
>> >> >> > I really can't afford to lose any data, but could settle for a
> day's
>> >> > worth
>> >> >> > loss, and have little space for hundreds of transaction logs. Was
>> >> > thinking
>> >> >> > about complete backups nightly and differential throughout the
> day.
>> >> >> >
>> >> >>
>> >> >> You can modify my script to make it backup nightly. Find the first
> IF
>> >> >> statement, remove the entire line, and replace it with:
>> >> >>
>> >> >> IF GETDATE() - @.dtLastFullBackup > 1
>> >> >>
>> >> >> >> 2. Rebuild indexes nightly or weekly, based on their level of
>> >> >> >> fragmentation. I have a script that I run nightly, rebuilding
> only
>> >> >> >> those that are fragmented beyond 30%:
>> >> >> >>
>> >> >> >
>> >> >
>> >
> http://realsqlguy.com/serendipity/archives/12-Humpty-Dumpty-Sat-On-A-Wall...html
>> >> >> >
>> >> >> > I have not ever rebuilt indexes or even know how to check if they
>> >> >> > are
>> >> >> > in
>> >> >> > trouble (there must be some way to check). My DB only deals with
>> >> > thousands
>> >> >> > of rows and so there has been no noticeable degradation in
>> > performance.
>> >> >> >
>> >> >>
>> >> >> This script will do all of that for you, you just have to schedule
> it
>> > to
>> >> >> run.
>> >> >>
>> >> >>
>> >> >>
>> >> >> --
>> >> >> Tracy McKibben
>> >> >> MCDBA
>> >> >> http://www.realsqlguy.com
>> >> >
>> >> >
>> >>
>> >>
>> >
>> >
>>
>|||Don Miller wrote:
> I've been playing with your script (on my production db moved to my dev
> machine) since I have been very remiss in maintaining my database. I ran the
> 'defrag' option on the db (all indexes were about 50% fragmented), then I
> did a SHRINKDB, and saw the size of my .mdf (and backup files) go from 1.3GB
> to around 200MB!
> Then I read about DBREINDEX and saw that your script handled that too. So, I
> ran the rebuild script (on a pristine copy of my prod db without doing the
> defrag first), shrunk the database again, and again got the same reduction
> in file size. However, when I ran your script in the report mode AFTER the
> rebuild and shrink, almost ALL of the files are reported as 75-100%
> fragmented!!! Also, there are indexes showing up in the report as being 100%
> fragmented that I can't even find. They are all named like
> NameOfTable.tNameOfTable with a 't' prefix rather than 'IX_' there.
> Is there a reason that after a DBREINDEX all the indexes are reported as
> almost 100% fragmented?
> What are those indexes with a 't' in their name?
> Thanks for any explanation.
>
Sorry for the delayed response, I took a long weekend... Seems like
Andrew has explained everything pretty well, but I'll toss in a couple
of comments. The massive change in your database size is due to the
fact that your indexes, after being defragged, are now tightly compacted
into clean, contiguous units, instead of being "full of holes", so to
speak. Unfortunately, when you shrank the database, that was all
undone. Shrinking the database will reorganize data pages to make the
files as small as possible. This often results in pages being separated
from their peers, and being scattered throughout the database.
My suggestion would be to add another 100MB to your database, and then
run the defrag script again. You should see a much different result.
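One way to add that headroom is to grow the data file explicitly rather than relying on 10% autogrow. The database and logical file names below are placeholders; check yours first with sp_helpfile:

```sql
-- List the logical file names for the current database.
EXEC sp_helpfile;
GO
-- Grow the data file so the rebuild has contiguous free space to work in.
-- Note: SIZE is the new TOTAL size of the file, not an increment.
ALTER DATABASE myDB
MODIFY FILE (NAME = myDB_Data, SIZE = 400MB);
GO
```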
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||As a workaround (because it worked), I wrote a script that first rebuilt the
indexes for all tables in my database using DBREINDEX (I think I grabbed it
off BOL), then did a DBCC SHRINKDATABASE (myDB, 10), and then used major
portions of your script to fix all of the now 99% fragmented indexes using
DBCC INDEXDEFRAG. Then, in the end, my DB was a tenth of its size, and the
indexes had high scan densities and low fragmentation.
> My suggestion would be to add another 100MB to your database, and then
> run the defrag script again. You should see a much different result.
I don't know how to add 100mb to my database.
During my experimentation, when I did SHRINKDATABASE without touching the
indexes, nothing happened to physical file sizes. I'm also pretty sure when
I ran the defrag script alone on the bloated DB, there was no reduction in
the physical file sizes either. It does seem that the indexes have to be
"fixed" before the physical files can shrink. So, I somehow came up
with the script above and it worked. If I could eliminate either the first
DBREINDEX or the second INDEXDEFRAG, that would be wonderful.
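For reference, the rebuild/shrink/defrag sequence described above looks roughly like this per table. Table, index, and database names are placeholders, and the fillfactor is just an example value:

```sql
-- 1. Rebuild all indexes on the table ('' = every index), fillfactor 90.
DBCC DBREINDEX ('Patients', '', 90);
GO
-- 2. Release the reclaimed space, leaving 10% free in the files.
DBCC SHRINKDATABASE (myDB, 10);
GO
-- 3. The shrink re-fragments the indexes, so defragment them afterwards
--    (INDEXDEFRAG works index by index).
DBCC INDEXDEFRAG (myDB, 'Patients', 'PK_Patients');
GO
```

As the thread notes, the cleaner alternative is to skip the shrink entirely (step 2), since it is what undoes the rebuild's work in the first place.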
Thanks for your help (and next weekend should be even longer).
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:44F2EF97.907@.realsqlguy.com...
> Don Miller wrote:
> > I've been playing with your script (on my production db moved to my dev
> > machine) since I have been very remiss in maintaining my database. I ran
the
> > 'defrag' option on the db (all indexes were about 50% fragmented), then
I
> > did a SHRINKDB, and saw the size of my .mdf (and backup files) go from
1.3GB
> > to around 200MB!
> >
> > Then I read about DBREINDEX and saw that your script handled that too.
So, I
> > ran the rebuild script (on a pristine copy of my prod db without doing
the
> > defrag first), shrunk the database again, and again got the same
reduction
> > in file size. However, when I ran your script in the report mode AFTER
the
> > rebuild and shrink, almost ALL of the files are reported as 75-100%
> > fragmented!!! Also, there are indexes showing up in the report as being
100%
> > fragmented that I can't even find. They are all named like
> > NameOfTable.tNameOfTable with a 't' prefix rather than 'IX_' there.
> >
> > Is there a reason that after a DBREINDEX all the indexes are reported as
> > almost 100% fragmented?
> >
> > What are those indexes with a 't' in their name?
> >
> > Thanks for any explanation.
> >
> Sorry for the delayed response, I took a long weekend... Seems like
> Andrew has explained everything pretty well, but I'll toss in a couple
> of comments. The massive change in your database size is due to the
> fact that your indexes, after being defragged, are now tightly compacted
> into clean, contiguous units, instead of being "full of holes", so to
> speak. Unfortunately, when you shrank the database, that was all
> undone. Shrinking the database will reorganize data pages to make the
> files as small as possible. This often results in pages being seperated
> from their peers, and being scattered throughout the database.
> My suggestion would be to add another 100MB to your database, and then
> run the defrag script again. You should see a much different result.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com