Thursday, March 29, 2012
DBA for multiple servers
soalvajavab1@.yahoo.com wrote:
> Does anybody have experience DBA'ing more than 100 SQL Servers at a time?
> If so, what were your main strategies and challenges? What are your
> best practices in terms of day-to-day admin and backup and
> recovery?
>
I don't think the tasks are that much different whether you have 5 or 100
SQL Servers, so could you be a bit more specific about what you'd like to
know? From my point of view it's quite important that the backup jobs
work. I've written my own scripts that automatically back up all
databases on a SQL Server, so once these have been scheduled they just
notify me when they fail. You'd then also want to monitor the free
space on the server's disks, but the company most likely already
has a monitoring tool for that.
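A minimal sketch of such a backup-all-databases script (the backup path, file naming and the exclusion list are illustrative assumptions, not the poster's actual script):

```sql
-- Sketch: back up every database on the instance (SQL Server 2000-era T-SQL).
-- D:\Backups\ and the date-stamped file name are invented for illustration.
DECLARE @name sysname, @file varchar(300)
DECLARE dbs CURSOR FOR
    SELECT name FROM master.dbo.sysdatabases
    WHERE name <> 'tempdb'                 -- tempdb cannot be backed up
OPEN dbs
FETCH NEXT FROM dbs INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @file = 'D:\Backups\' + @name + '_'
        + CONVERT(varchar(8), GETDATE(), 112) + '.bak'
    BACKUP DATABASE @name TO DISK = @file
    FETCH NEXT FROM dbs INTO @name
END
CLOSE dbs
DEALLOCATE dbs
```

Scheduled as a SQL Agent job, the "notify me when they fail" part then comes from the job's own failure-notification settings.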
What more you'd like to do beyond that is a matter of personal
preference and company policy. E.g. I know that many people don't like
to have their databases set to autogrow, but I normally do that for most
databases. Currently I'm looking after approx. 1000 SQL Server databases,
and for me it works fine to have them set to autogrow. I've also created
a stored proc that runs every night on most of the servers and collects a
number of data points for each database. This tells me when new databases
have been created, when a database has been deleted, file growth,
backup file growth, recovery mode changes etc. I then get an email every
morning with the changes.
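One way the collection side of such a nightly proc could look (the history table and its columns are invented for illustration; growth and create/delete detection would then be a diff against the previous night's rows):

```sql
-- Sketch: snapshot per-database sizes each night so that growth, new
-- databases and dropped databases can be diffed the next morning.
-- dbo.DbSizeHistory is a hypothetical table, not the poster's schema.
CREATE TABLE dbo.DbSizeHistory (
    capture_date datetime NOT NULL DEFAULT GETDATE(),
    db_name      sysname  NOT NULL,
    size_mb      int      NOT NULL
)
GO
INSERT dbo.DbSizeHistory (db_name, size_mb)
SELECT d.name, CAST(SUM(f.size) / 128.0 AS int)   -- size is in 8 KB pages
FROM master.dbo.sysaltfiles f
JOIN master.dbo.sysdatabases d ON d.dbid = f.dbid
GROUP BY d.name
```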
Apart from the above, there are a number of database monitoring tools
available on the market. I haven't actually used any of them yet, but I
think they can do some of what I've done myself (..and of course a lot
more...). Maybe somebody else has some experience with this?
The bottom line is that what you need to look for isn't a standard answer;
it pretty much depends on the setup and your company/company policy. I
also think that the DBA role can be quite different from one company to
another.
--
Regards
Steen Schlüter Persson
Database Administrator / System Administrator
|||I think that with that number you'll just need to follow these guidelines:
a) use Profiler on all machines
b) run the Index Tuning Wizard on your busiest databases
c) build poor-man's enterprise-level performance aggregation tools
d) encourage people to use Reporting Services and Analysis Services in order
to leverage this mountain of data
<soalvajavab1@.yahoo.com> wrote in message
news:1177946795.389483.51580@.y80g2000hsf.googlegroups.com...
> Does anybody have experience DBA'ing more than 100 SQL Servers at a time?
> If so, what were your main strategies and challenges? What are your
> best practices in terms of day-to-day admin and backup and
> recovery?
>
DB2 Viper 2 beta - Compatibility features, Need your help
My fellow teammates had some extra time on their hands, so we decided to
spice up DB2 with a grab-bag of compatibility features.
We wouldn't mind help validating that the semantics match...
So for those of you blessed or cursed with a competitive DBMS here is a
list of what we've added:
1. ** DUAL
Always works without prefixing a schema
2. ** ROWNUM
3. ** (+) outer join syntax
4. LEAST/GREATEST/NVL/DECODE
5. TO_DATE/TO_CHAR improvement
DB2 supports most common patterns except those requiring language
awareness
6. ** CONNECT BY
This is a function drop; a performance drop will follow
7. A slew of syntactic sugar like:
Seq.NEXTVAL and seq.CURRVAL notation
UNIQUE instead of DISTINCT
MINUS instead of EXCEPT
Unnamed nested subqueries (aka inline views)
"SELECT * FROM (SELECT * FROM T)"
CROSSJOIN
8. BITAND/BITOR/....
The features marked with ** require a registry setting:
db2set DB2_COMPATIBILITY_VECTOR=3F
should switch everything on.
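Assuming the vector is set as above, Oracle-style statements along these lines should then be accepted (a sketch restricted to the features listed; emp, dept and myseq are hypothetical objects, not from the docs):

```sql
-- Oracle-isms that compatibility mode should accept:
SELECT 1 FROM DUAL;                         -- DUAL, no schema prefix needed
SELECT * FROM emp WHERE ROWNUM <= 10;       -- ROWNUM pseudo-column
SELECT e.empno, d.deptno
FROM   emp e, dept d
WHERE  e.deptno = d.deptno(+);              -- (+) outer join syntax
SELECT NVL(comm, 0), DECODE(job, 'CLERK', 1, 0) FROM emp;
SELECT myseq.NEXTVAL FROM DUAL;             -- seq.NEXTVAL notation
SELECT deptno FROM emp MINUS SELECT deptno FROM dept;   -- MINUS for EXCEPT
```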
There are other features that those porting apps will value:
GLOBAL VARIABLES
A new ARRAY data type
A new RID() function can be used to map ROWID
Docs: https://publib.boulder.ibm.com/info.../v9r5/index.jsp
Enjoy
Serge
--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
|||Serge Rielau wrote:
Quote:
Originally Posted by
Hi folks,
>
My fellow teammates had some extra time on their hands, so we decided to
spice up DB2 with a grab-bag of compatibility features.
We wouldn't mind help validating that the semantics match...
So for those of you blessed or cursed with a competitive DBMS here is a
list of what we've added:
>
1. ** DUAL
Always works without prefixing a schema
2. ** ROWNUM
3. ** (+) outer join syntax
4. LEAST/GREATEST/NVL/DECODE
5. TO_DATE/TO_CHAR improvement
DB2 supports most common patterns except those requiring language
awareness
6. ** CONNECT BY
This is a function drop, performance drop will follow
7. A slew of syntactic sugar like:
Seq.NEXTVAL and seq.CURRVAL notation
UNIQUE instead of DISTINCT
MINUS instead of EXCEPT
Unnamed nested subqueries (aka inline views)
"SELECT * FROM (SELECT * FROM T)"
CROSSJOIN
8. BITAND/BITOR/....
>
The features marked with ** require a registry setting:
db2set DB2_COMPATIBILITY_VECTOR=3F
should switch everything on.
>
There are other features those porting apps will value:
GLOBAL VARIABLES
A new ARRAY data type
A new RID() function can be used to map ROWID
>
Docs: https://publib.boulder.ibm.com/info.../v9r5/index.jsp
>
Enjoy
Serge
With a little more work you'll be compatible with Oracle. <g>
Well done.
--
Daniel A. Morgan
University of Washington
damorgan@.x.washington.edu (replace x with u to respond)|||DA Morgan wrote:
Quote:
Originally Posted by
Well done.
Not well enough. I hit cancel within 10 seconds of posting to this
group, apparently too late. If you want to flame, please do so in c.d.ibm-db2.
Let's leave the SQL Server crowd in peace...
Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
Tuesday, March 27, 2012
DB2 date & Time datatype migration to SQL Server datetime
We are migrating our database from DB2 8 to SQL Server 2005. We have separate date and time columns in DB2. For example, Date_of_birth, Store_sun_open_time, Store_sun_close_time etc. For date we are using datetime. For time, what datatype should we use in SQL Server?
Thanks
Prashant
|||The SQL Server datetime data type includes both date and time. For instance, here is the result of the SQL statement selecting the current date and time:
select getdate()
returns the value
2006-02-15 06:47:45.270
You can create your datetime value as a string and populate your datetime variables from that string. The single space between the day and hour components in the string is critical.
You can use several date formats (see Books Online). I prefer the 'yyyy-mm-dd hh:mm:ss.ms' format with a 24-hour clock. The datetime datatype's precision is 3 milliseconds (ms). Missing time components default to zero, so for example if you only have the hour and minutes (e.g. 14:45), the time portion of the column would be 14:45:00.000.
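For example, a minimal sketch of building one datetime value from separate date and time strings (the variable name and values are illustrative):

```sql
-- Combine a date string and a time string into one datetime value.
-- The single space between the date and time parts is required.
DECLARE @dob datetime
SET @dob = CONVERT(datetime, '1970-06-15' + ' ' + '14:45:00')
SELECT @dob   -- 1970-06-15 14:45:00.000
```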
Hope this helps
|||I guess I was not very clear with my question, I will rephrase.
In the DB2 table I have columns of the "Time" datatype. For example, store_sunday_close_time. No date is attached to it. Now, when recreating this column in SQL Server, what datatype should I use, as there is no "Time" datatype? If we use the "datetime" datatype, what date should we enter?
|||If you do not have a date to associate with the time, you are out of luck. The datetime data type requires both; otherwise your date part will be set to 1900-01-01.
So it looks like you will have to use a char or varchar to store your time component, unless you can live with 1900-01-01 as the date (for example, if you are going to subtract the store_close_time from the store_open_time).
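A sketch of both workarounds (the table and column names are invented for illustration):

```sql
-- Option 1: store the time as a fixed-width string.
CREATE TABLE Store (
    store_id              int     NOT NULL,
    store_sun_open_time   char(8) NULL,   -- 'hh:mm:ss'
    store_sun_close_time  char(8) NULL
)
GO
-- Option 2: store it in a datetime; the date part defaults to
-- 1900-01-01, which still allows time arithmetic:
DECLARE @open datetime, @close datetime
SET @open  = '09:30'     -- becomes 1900-01-01 09:30:00.000
SET @close = '17:00'
SELECT DATEDIFF(minute, @open, @close) AS minutes_open   -- 450
```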
Thursday, March 22, 2012
DB size is more than 2 GB, how to optimize
Are there any other settings for managing a DB that is more than 2 GB? It seems
it takes a long time to retrieve the data. Please give me ideas.
Thanks,
Dinesh Bhandare
Are you using MSDE by any chance? What makes you feel it takes a long time ...
I have databases that are more than 8-10 GB in size and they still perform well
under load ...
HTH,
Vinod Kumar
MCSE, DBA, MCAD, MCSD
http://www.extremeexperts.com
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
"Dinesh Bhandare" <DineshBhandare@.discussions.microsoft.com> wrote in
message news:B42CBC39-EA49-40FD-B6F2-97DAF84CD4FB@.microsoft.com...
> Dears,
> Are there any other settings for managing a DB that is more than 2 GB? It
> seems it takes a long time to retrieve the data. Please give me ideas.
> Thanks,
> Dinesh Bhandare
|||Dinesh
2 GB is not a very large database. You should be able to get very good
performance from it. There are many things it could be.
A few things to try. Run Profiler to capture your longest-running queries.
Put those queries into Query Analyzer and check the execution plans. If there
are problems with the code or a lack of correct indexes, it should be easy to
spot. Here are some good tips on using Profiler.
http://www.sql-server-performance.co...filer_tips.asp
Run DBCC SHOWCONTIG on all your tables to check them for
fragmentation.
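One quick way to do the fragmentation check across every table (sp_MSforeachtable is an undocumented but real SQL Server 2000 helper; WITH TABLERESULTS makes the output queryable):

```sql
-- Run DBCC SHOWCONTIG against every user table in the current database.
EXEC sp_MSforeachtable 'DBCC SHOWCONTIG (''?'') WITH TABLERESULTS, ALL_INDEXES'
```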
You do keep your database files, transaction logs and backups on separate
disks, don't you? If you don't, try to move them so that you do.
Use Perfmon to check that your memory and disk I/O are not having problems. Here
is a link to a good article on using Perfmon.
http://www.sql-server-performance.co...nitor_tips.asp
There are lots of good articles on performance at
http://www.sql-server-performance.com/; have a good look through them and see
if you can identify which areas may be affecting you.
If you are using a LAN or WAN, it might be worth asking your network guys to do
a check; it might not be a SQL Server problem at all.
These tips should at least get you started. There are so many things it can
be.
Hope this helps.
John
"Dinesh Bhandare" wrote:
> Dears,
> Are there any other settings for managing a DB that is more than 2 GB? It seems
> it takes a long time to retrieve the data. Please give me ideas.
> Thanks,
> Dinesh Bhandare
|||To add to the other responses, appropriate indexing is an important
performance consideration. Large tables are less forgiving when you don't
have indexes in place to efficiently process queries. Review your query
plans. Columns referenced in joins and predicates may be good candidates
for indexing.
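As a sketch of that advice, for a query that joins and filters like the hypothetical one below, indexing the join and predicate columns is a typical first step (table and column names are invented):

```sql
-- Hypothetical slow query:
--   SELECT o.OrderID, c.Name
--   FROM Orders o JOIN Customers c ON c.CustomerID = o.CustomerID
--   WHERE o.OrderDate >= '2006-01-01'

-- Candidate index on the join column plus the predicate column:
CREATE INDEX IX_Orders_CustomerID_OrderDate
    ON Orders (CustomerID, OrderDate)
```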
Hope this helps.
Dan Guzman
SQL Server MVP
"Dinesh Bhandare" <DineshBhandare@.discussions.microsoft.com> wrote in
message news:B42CBC39-EA49-40FD-B6F2-97DAF84CD4FB@.microsoft.com...
> Dears,
> Are there any other settings for managing a DB that is more than 2 GB? It
> seems it takes a long time to retrieve the data. Please give me ideas.
> Thanks,
> Dinesh Bhandare
Monday, March 19, 2012
DB Restore with Transaction File Only
I have a SQL Server where the data files and transaction files are
stored separately... and one time a problem occurred where the
data files became corrupted and the only thing left was the
transaction file... and the databases went offline... my
question is, can I restore from a full DB backup and then
attach the existing transaction files?
No. It's neither possible nor logical to just attach the trx log file to an
existing database. If you have made trx log backups consistently since the
last full backup, you might have a chance of restoring your database to its
previous state prior to the corruption.
Peter Yeoh
http://www.yohz.com
Need smaller SQL2K backup files? Use MiniSQLBackup Lite, free!
"perry" <pramos@.codex-systems.com> wrote in message
news:7c8b01c4956b$985122f0$a601280a@.phx.gbl...
> hi,
> I have a SQL Server where the data files and transaction files are
> stored separately... and one time a problem occurred where the
> data files became corrupted and the only thing left was the
> transaction file... and the databases went offline... my
> question is, can I restore from a full DB backup and then
> attach the existing transaction files?
|||perry,
Yes you can! As long as the transaction log chain (all transaction log
backups since your last full backup) is uninterrupted since your last
full backup it will work great. This scenario is exactly what the
transaction log backups are for. Good luck, if you need further help,
post back.
Look up transaction log in Books online, there's some great information
in there.
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602m.html
perry wrote:
> hi,
> I have a SQL Server where the data files and transaction files are
> stored separately... and one time a problem occurred where the
> data files became corrupted and the only thing left was the
> transaction file... and the databases went offline... my
> question is, can I restore from a full DB backup and then
> attach the existing transaction files?
|||perry,
After reading Peter's post I noticed you asked if you can *attach* the
transaction log file. No, you can't, you can however restore the
transaction log chain as mentioned in my post. Sorry for the error.
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602m.html
Mark Allison wrote:
> perry,
> Yes you can! As long as the transaction log chain (all transaction log
> backups since your last full backup) is uninterrupted since your last
> full backup it will work great. This scenario is exactly what the
> transaction log backups are for. Good luck, if you need further help,
> post back.
> Look up transaction log in Books online, there's some great information
> in there.
> --
> Mark Allison, SQL Server MVP
> http://www.markallison.co.uk
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602m.html
>
> perry wrote:
|||But what if I have a single transaction log backup which I
have set up to no-truncate? What is the possible data
recovery I can have... will all transactions from
the last full database backup up to the transaction backup
(which is run daily and overwrites a single file)
be recovered? ...thanks for the post... you're a great
help...
>--Original Message--
>No. It's not possible nor logical to just attach the trx log file to an
>existing database. If you have made trx log backups consistently since the
>last full backup, you might have a chance of restoring your database to its
>previous state prior to the corruption.
>--
>Peter Yeoh
>http://www.yohz.com
>Need smaller SQL2K backup files? Use MiniSQLBackup Lite, free!
>
>"perry" <pramos@.codex-systems.com> wrote in message
>news:7c8b01c4956b$985122f0$a601280a@.phx.gbl...
|||Perry,
Are you saying that you do BACKUP LOG ... WITH NO_TRUNCATE?
If so, then the transaction log (the ldf file) will grow and grow, and the space within it will never be
reused.
Anyhow, assuming that you do, you might be able to use such a backup. But if you read Books Online,
it only documents that option for doing a transaction log backup when the database is damaged...
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"perry" <anonymous@.discussions.microsoft.com> wrote in message
news:7e8801c4957d$bc828ee0$a501280a@.phx.gbl...
>
> But what if I have a single transaction log backup which I
> have set up to no-truncate? What is the possible data
> recovery I can have... will all transactions from
> the last full database backup up to the transaction backup
> (which is run daily and overwrites a single file)
> be recovered? ...thanks for the post... you're a great
> help...
|||Like Tibor, I'm assuming you have consistently backed up the log with the
NO_TRUNCATE option (btw, a strange thing to do). However, each of the trx log
backups will only contain the log data since the last log backup. It will
not store whatever log data is in the log file (I'm guessing you use the
NO_TRUNCATE option because you wanted your trx log backups to always contain
all log data since the last full backup). I did a quick test to confirm this.
I guess you now have a very large ldf file and want to be able to recover
the data into the mdf file. No idea how this can be done, or if it's even possible.
Peter Yeoh
http://www.yohz.com
Need smaller SQL2K backup files? Use MiniSQLBackup Lite, free!
"perry" <anonymous@.discussions.microsoft.com> wrote in message
news:7e8801c4957d$bc828ee0$a501280a@.phx.gbl...
>
> But what if I have a single transaction log backup which I
> have set up to no-truncate? What is the possible data
> recovery I can have... will all transactions from
> the last full database backup up to the transaction backup
> (which is run daily and overwrites a single file)
> be recovered? ...thanks for the post... you're a great
> help...
|||Peter,
Interesting. Just so I understand you correctly, a scenario:
0. BACKUP LOG
1. BACKUP DB
2. BACKUP LOG NO_TRUNCATE
3. BACKUP LOG NO_TRUNCATE
4. BACKUP LOG NO_TRUNCATE
Status now is:
The ldf file contains all log records since 0.
The backup file produced in step 4 contains the log records since 3.
Is the above what your test showed? If so, then my gut feeling about NO_TRUNCATE is correct and the name
of the parameter is misleading (something I've been trying to communicate to MS with little
success). The name of the option implies that you can keep doing log backups and only apply the
latest. I've always explained this option as what it is documented as in Books Online: to back up the
log of a broken database...
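Spelled out as T-SQL, the numbered sequence above would look like this (the database name and backup paths are invented):

```sql
-- Steps 0-4 of the scenario, against a hypothetical database MyDb.
BACKUP LOG MyDb      TO DISK = 'D:\bak\MyDb_log0.bak'                   -- 0
BACKUP DATABASE MyDb TO DISK = 'D:\bak\MyDb_full.bak'                   -- 1
BACKUP LOG MyDb      TO DISK = 'D:\bak\MyDb_log2.bak' WITH NO_TRUNCATE  -- 2
BACKUP LOG MyDb      TO DISK = 'D:\bak\MyDb_log3.bak' WITH NO_TRUNCATE  -- 3
BACKUP LOG MyDb      TO DISK = 'D:\bak\MyDb_log4.bak' WITH NO_TRUNCATE  -- 4
-- The point in question: each NO_TRUNCATE backup contains only the log
-- records written since the previous log backup, even though the ldf file
-- itself keeps accumulating everything since step 0.
```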
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Peter Yeoh" <nospam@.nospam.com> wrote in message news:ePpBp9YlEHA.592@.TK2MSFTNGP11.phx.gbl...
> Like Tibor, I'm assuming you have consistently backed up the log with the
> NO_TRUNCATE option (btw, a strange thing to do). However, each of the trx log
> backups will only contain the log data since the last log backup. It will
> not store whatever log data is in the log file (I'm guessing you use the
> NO_TRUNCATE option because you wanted your trx log backups to always contain
> all log data since the last full backup). I did a quick test to confirm this.
> I guess you now have a very large ldf file and want to be able to recover
> the data into the mdf file. No idea how this can be done, or if it's even possible.
> --
> Peter Yeoh
> http://www.yohz.com
> Need smaller SQL2K backup files? Use MiniSQLBackup Lite, free!
>
> "perry" <anonymous@.discussions.microsoft.com> wrote in message
> news:7e8801c4957d$bc828ee0$a501280a@.phx.gbl...
>
|||> Status now is:
> The ldf file contains all log records since 0.
> The backup file produced in step 4 contains the log records since 3.
Right on. The backup file size in step 4 is not the cumulative size of the
files produced in steps 2 and 3.
Peter Yeoh
http://www.yohz.com
Need smaller SQL2K backup files? Use MiniSQLBackup Lite, free!
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:%23GGEXMZlEHA.2948@.TK2MSFTNGP11.phx.gbl...
> Peter,
> Interesting. Just so I understand you correctly, a scenario:
> 0. BACKUP LOG
> 1. BACKUP DB
> 2. BACKUP LOG NO_TRUNCATE
> 3. BACKUP LOG NO_TRUNCATE
> 4. BACKUP LOG NO_TRUNCATE
> Status now is:
> The ldf file contains all log records since 0.
> The backup file produced in step 4 contains the log records since 3.
> Is the above what your test showed? If so, then my gut feeling about
> NO_TRUNCATE is correct and the name of the parameter is misleading
> (something I've been trying to communicate to MS with little success).
> The name of the option implies that you can keep doing log backups and
> only apply the latest. I've always explained this option as what it is
> documented as in Books Online: to back up the log of a broken database...
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Peter Yeoh" <nospam@.nospam.com> wrote in message
news:ePpBp9YlEHA.592@.TK2MSFTNGP11.phx.gbl...
>
|||> Right on. The backup file size in step 4 is not the cumulative size of the
> files produced in step 2 and 3.
Indeed! I just wrote an article on the topic, with some SQL code proving the point...
http://www.karaszi.com/SQLServer/inf...o_truncate.asp
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Peter Yeoh" <nospam@.nospam.com> wrote in message news:e$ni%23PZlEHA.2224@.tk2msftngp13.phx.gbl...
> Right on. The backup file size in step 4 is not the cumulative size of the
> files produced in step 2 and 3.
> Peter Yeoh
> http://www.yohz.com
> Need smaller SQL2K backup files? Use MiniSQLBackup Lite, free!
>
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
> message news:%23GGEXMZlEHA.2948@.TK2MSFTNGP11.phx.gbl...
> news:ePpBp9YlEHA.592@.TK2MSFTNGP11.phx.gbl...
>
DB Restore with Transaction File Only
I have a sql where data files and transaction files are
stored seperatedly... and one time a problem occured when
data files become corrupted and the only thing left is the
transaction file.... and the databases went offline... my
question is can i restore from a full db backup and then
attach the existing transaction files...?No. It's not possible nor logical to just attach the trx log file to an
existing database. If you have made trx log backups consistently since the
last full backup, you might have a chance of restoring your database to its
previous state prior to the corruption.
--
Peter Yeoh
http://www.yohz.com
Need smaller SQL2K backup files? Use MiniSQLBackup Lite, free!
"perry" <pramos@.codex-systems.com> wrote in message
news:7c8b01c4956b$985122f0$a601280a@.phx.gbl...
> hi,
> I have a sql where data files and transaction files are
> stored seperatedly... and one time a problem occured when
> data files become corrupted and the only thing left is the
> transaction file.... and the databases went offline... my
> question is can i restore from a full db backup and then
> attach the existing transaction files...?|||perry,
Yes you can! As long as the transaction log chain (all transaction log
backups since your last full backup) is uninterrupted since your last
full backup it will work great. This scenario is exactly what the
transaction log backups are for. Good luck, if you need further help,
post back.
Look up transaction log in Books online, there's some great information
in there.
--
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602m.html
perry wrote:
> hi,
> I have a sql where data files and transaction files are
> stored seperatedly... and one time a problem occured when
> data files become corrupted and the only thing left is the
> transaction file.... and the databases went offline... my
> question is can i restore from a full db backup and then
> attach the existing transaction files...?|||perry,
After reading Peter's post I noticed you asked if you can *attach* the
transaction log file. No, you can't, you can however restore the
transaction log chain as mentioned in my post. Sorry for the error.
--
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602m.html
Mark Allison wrote:
> perry,
> Yes you can! As long as the transaction log chain (all transaction log
> backups since your last full backup) is uninterrupted since your last
> full backup it will work great. This scenario is exactly what the
> transaction log backups are for. Good luck, if you need further help,
> post back.
> Look up transaction log in Books online, there's some great information
> in there.
> --
> Mark Allison, SQL Server MVP
> http://www.markallison.co.uk
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602m.html
>
> perry wrote:
>> hi,
>> I have a sql where data files and transaction files are stored
>> seperatedly... and one time a problem occured when data files become
>> corrupted and the only thing left is the transaction file.... and the
>> databases went offline... my question is can i restore from a full db
>> backup and then attach the existing transaction files...?|||But what if I have a singel transaction log backup which I
have set up to notrucate, waht is the possible data
recovery i can have......will all trasactions prior to
the last full database backup up to the trasaction backup
(which is run daily and overwrites it in a single file)
will be recovered...thanks for the post..your a great
help...
>--Original Message--
>No. It's not possible nor logical to just attach the trx
log file to an
>existing database. If you have made trx log backups
consistently since the
>last full backup, you might have a chance of restoring
your database to its
>previous state prior to the corruption.
>--
>Peter Yeoh
>http://www.yohz.com
>Need smaller SQL2K backup files? Use MiniSQLBackup Lite,
free!
>
>"perry" <pramos@.codex-systems.com> wrote in message
>news:7c8b01c4956b$985122f0$a601280a@.phx.gbl...
>> hi,
>> I have a sql where data files and transaction files are
>> stored seperatedly... and one time a problem occured
when
>> data files become corrupted and the only thing left is
the
>> transaction file.... and the databases went offline...
my
>> question is can i restore from a full db backup and then
>> attach the existing transaction files...?
>
>.
>|||Perry,
Are you saying that you do BACKUP LOG ... WITH NO_TRUNCATE?
If so, then the transaction log (the ldf file) will grow and grow, and the space within will never be
reused.
Anyhow, assuming that you do, you might be able to use such a backup. But if you read Books Online,
it only documents that option for doing a transaction log backup when the database is damaged...
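For reference, the documented use looks something like this (database name and path here are made up):

```sql
-- Tail-of-the-log backup of a damaged database: WITH NO_TRUNCATE lets
-- BACKUP LOG read the intact ldf even though the data files are gone.
BACKUP LOG MyDb
TO DISK = 'E:\Backup\MyDb_tail.trn'
WITH NO_TRUNCATE
```

You'd then restore the last full backup WITH NORECOVERY, apply the earlier log backups in sequence, and apply this tail backup last.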
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"perry" <anonymous@.discussions.microsoft.com> wrote in message
news:7e8801c4957d$bc828ee0$a501280a@.phx.gbl...
>
> But what if I have a singel transaction log backup which I
> have set up to notrucate, waht is the possible data
> recovery i can have......will all trasactions prior to
> the last full database backup up to the trasaction backup
> (which is run daily and overwrites it in a single file)
> will be recovered...thanks for the post..your a great
> help...
>
>>--Original Message--
>>No. It's not possible nor logical to just attach the trx
> log file to an
>>existing database. If you have made trx log backups
> consistently since the
>>last full backup, you might have a chance of restoring
> your database to its
>>previous state prior to the corruption.
>>--
>>Peter Yeoh
>>http://www.yohz.com
>>Need smaller SQL2K backup files? Use MiniSQLBackup Lite,
> free!
>>
>>"perry" <pramos@.codex-systems.com> wrote in message
>>news:7c8b01c4956b$985122f0$a601280a@.phx.gbl...
>> hi,
>> I have a sql where data files and transaction files are
>> stored seperatedly... and one time a problem occured
> when
>> data files become corrupted and the only thing left is
> the
>> transaction file.... and the databases went offline...
> my
>> question is can i restore from a full db backup and then
>> attach the existing transaction files...?
>>
>>.|||Like Tibor, I'm assuming you have consistently backed up the log with the
NO_TRUNCATE option (btw, a strange thing to do). However, each of the trx log
backups will only contain the log data since the last log backup. It will
not store whatever log data is in the log file (I'm guessing you use the
NO_TRUNCATE option because you wanted your trx log backups to always contain
all log data since the last full backup). I did a quick test to confirm this.
I guess you now have a very large ldf file, and want to be able to recover
the data into the mdf file. No idea how this can be done, or even if it's possible.
--
Peter Yeoh
http://www.yohz.com
Need smaller SQL2K backup files? Use MiniSQLBackup Lite, free!
"perry" <anonymous@.discussions.microsoft.com> wrote in message
news:7e8801c4957d$bc828ee0$a501280a@.phx.gbl...
>
> But what if I have a singel transaction log backup which I
> have set up to notrucate, waht is the possible data
> recovery i can have......will all trasactions prior to
> the last full database backup up to the trasaction backup
> (which is run daily and overwrites it in a single file)
> will be recovered...thanks for the post..your a great
> help...
>
> >--Original Message--
> >No. It's not possible nor logical to just attach the trx
> log file to an
> >existing database. If you have made trx log backups
> consistently since the
> >last full backup, you might have a chance of restoring
> your database to its
> >previous state prior to the corruption.
> >
> >--
> >Peter Yeoh
> >http://www.yohz.com
> >Need smaller SQL2K backup files? Use MiniSQLBackup Lite,
> free!
> >
> >
> >"perry" <pramos@.codex-systems.com> wrote in message
> >news:7c8b01c4956b$985122f0$a601280a@.phx.gbl...
> >>
> >> hi,
> >> I have a sql where data files and transaction files are
> >> stored seperatedly... and one time a problem occured
> when
> >> data files become corrupted and the only thing left is
> the
> >> transaction file.... and the databases went offline...
> my
> >> question is can i restore from a full db backup and then
> >> attach the existing transaction files...?
> >
> >
> >.
> >|||Peter,
Interesting. Just so I understand you correctly, a scenario:
0. BACKUP LOG
1. BACKUP DB
2. BACKUP LOG NO_TRUNCATE
3. BACKUP LOG NO_TRUNCATE
4. BACKUP LOG NO_TRUNCATE
Status now is:
The ldf file contains all log records since 0.
The backup file produced in step 4 contains the log records since 3.
Is the above what your test showed? If so, then my gut feeling about NO_TRUNCATE is correct and the name
of the parameter is misleading (something I've been trying to communicate to MS with little
success). The name of the option implies that you can keep doing log backups and only apply the
latest. I've always explained this option from what it is documented as in Books Online: to back up the
log of a broken database...
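If anyone wants to reproduce Peter's test, the scenario above translates to roughly this (database name and paths invented):

```sql
-- Steps 0-4 from the scenario; each log backup goes to its own file
-- so the backup sizes can be compared afterwards.
BACKUP LOG TestDb TO DISK = 'E:\Backup\log0.trn'                    -- 0
BACKUP DATABASE TestDb TO DISK = 'E:\Backup\full1.bak'              -- 1
BACKUP LOG TestDb TO DISK = 'E:\Backup\log2.trn' WITH NO_TRUNCATE   -- 2
BACKUP LOG TestDb TO DISK = 'E:\Backup\log3.trn' WITH NO_TRUNCATE   -- 3
BACKUP LOG TestDb TO DISK = 'E:\Backup\log4.trn' WITH NO_TRUNCATE   -- 4
```

If log4.trn only covers the work done between steps 3 and 4, then NO_TRUNCATE does not make each log backup cumulative.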
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Peter Yeoh" <nospam@.nospam.com> wrote in message news:ePpBp9YlEHA.592@.TK2MSFTNGP11.phx.gbl...
> Like Tibor, I'm assuming you have consistently backed up the log with the
> NO_TRUNCATE option (btw, strange thing to do). However, each of the trx log
> backup will only contain the log data since the last log backup. It will
> not store whatever log data is in the log file (I'm guessing you use the
> NO_TRUNCATE option because you wanted your trx log backups to always contain
> all log data since the last full backup). Did a quick test to confirm this.
> I guess you now have a very large ldf file, and want to be able to recover
> the data into the mdf file. No idea how this can be done, or even possible.
> --
> Peter Yeoh
> http://www.yohz.com
> Need smaller SQL2K backup files? Use MiniSQLBackup Lite, free!
>
> "perry" <anonymous@.discussions.microsoft.com> wrote in message
> news:7e8801c4957d$bc828ee0$a501280a@.phx.gbl...
>>
>> But what if I have a singel transaction log backup which I
>> have set up to notrucate, waht is the possible data
>> recovery i can have......will all trasactions prior to
>> the last full database backup up to the trasaction backup
>> (which is run daily and overwrites it in a single file)
>> will be recovered...thanks for the post..your a great
>> help...
>>
>> >--Original Message--
>> >No. It's not possible nor logical to just attach the trx
>> log file to an
>> >existing database. If you have made trx log backups
>> consistently since the
>> >last full backup, you might have a chance of restoring
>> your database to its
>> >previous state prior to the corruption.
>> >
>> >--
>> >Peter Yeoh
>> >http://www.yohz.com
>> >Need smaller SQL2K backup files? Use MiniSQLBackup Lite,
>> free!
>> >
>> >
>> >"perry" <pramos@.codex-systems.com> wrote in message
>> >news:7c8b01c4956b$985122f0$a601280a@.phx.gbl...
>> >>
>> >> hi,
>> >> I have a sql where data files and transaction files are
>> >> stored seperatedly... and one time a problem occured
>> when
>> >> data files become corrupted and the only thing left is
>> the
>> >> transaction file.... and the databases went offline...
>> my
>> >> question is can i restore from a full db backup and then
>> >> attach the existing transaction files...?
>> >
>> >
>> >.
>> >
>|||> Status now is:
> The ldf file contains all log records since 0.
> The backup file produced in step 4 contains the log records since 3.
Right on. The backup file size in step 4 is not the cumulative size of the
files produced in step 2 and 3.
Peter Yeoh
http://www.yohz.com
Need smaller SQL2K backup files? Use MiniSQLBackup Lite, free!
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:%23GGEXMZlEHA.2948@.TK2MSFTNGP11.phx.gbl...
> Peter,
> Interesting. Just so I understand you correctly, a scenario:
> 0. BACKUP LOG
> 1. BACKUP DB
> 2. BACKUP LOG NO_TRUNCATE
> 3. BACKUP LOG NO_TRUNCATE
> 4. BACKUP LOG NO_TRUNCATE
> Status now is:
> The ldf file contains all log records since 0.
> The backup file produced in step 4 contains the log records since 3.
> Is above what you test showed? If so, then my gut feeling about
NO_TRUNCATE is correct and the name
> of the parameter is misleading (something I've been trying to communicate
to MS with little
> success). The name of the option implies that you can keep doing log
backups and only apply the
> latest. I've always explained this option from what it is documented as in
Books Online, to backup
> log of a broken database...
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Peter Yeoh" <nospam@.nospam.com> wrote in message
news:ePpBp9YlEHA.592@.TK2MSFTNGP11.phx.gbl...
> > Like Tibor, I'm assuming you have consistently backed up the log with
the
> > NO_TRUNCATE option (btw, strange thing to do). However, each of the trx
log
> > backup will only contain the log data since the last log backup. It
will
> > not store whatever log data is in the log file (I'm guessing you use the
> > NO_TRUNCATE option because you wanted your trx log backups to always
contain
> > all log data since the last full backup). Did a quick test to confirm
this.
> >
> > I guess you now have a very large ldf file, and want to be able to
recover
> > the data into the mdf file. No idea how this can be done, or even
possible.
> >
> > --
> > Peter Yeoh
> > http://www.yohz.com
> > Need smaller SQL2K backup files? Use MiniSQLBackup Lite, free!
> >
> >
> > "perry" <anonymous@.discussions.microsoft.com> wrote in message
> > news:7e8801c4957d$bc828ee0$a501280a@.phx.gbl...
> >>
> >>
> >> But what if I have a singel transaction log backup which I
> >> have set up to notrucate, waht is the possible data
> >> recovery i can have......will all trasactions prior to
> >> the last full database backup up to the trasaction backup
> >> (which is run daily and overwrites it in a single file)
> >> will be recovered...thanks for the post..your a great
> >> help...
> >>
> >>
> >> >--Original Message--
> >> >No. It's not possible nor logical to just attach the trx
> >> log file to an
> >> >existing database. If you have made trx log backups
> >> consistently since the
> >> >last full backup, you might have a chance of restoring
> >> your database to its
> >> >previous state prior to the corruption.
> >> >
> >> >--
> >> >Peter Yeoh
> >> >http://www.yohz.com
> >> >Need smaller SQL2K backup files? Use MiniSQLBackup Lite,
> >> free!
> >> >
> >> >
> >> >"perry" <pramos@.codex-systems.com> wrote in message
> >> >news:7c8b01c4956b$985122f0$a601280a@.phx.gbl...
> >> >>
> >> >> hi,
> >> >> I have a sql where data files and transaction files are
> >> >> stored seperatedly... and one time a problem occured
> >> when
> >> >> data files become corrupted and the only thing left is
> >> the
> >> >> transaction file.... and the databases went offline...
> >> my
> >> >> question is can i restore from a full db backup and then
> >> >> attach the existing transaction files...?
> >> >
> >> >
> >> >.
> >> >
> >
> >
>|||> Right on. The backup file size in step 4 is not the cumulative size of the
> files produced in step 2 and 3.
Indeed! I just wrote an article on the topic, with some SQL code proving the point...
http://www.karaszi.com/SQLServer/info_restore_no_truncate.asp
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Peter Yeoh" <nospam@.nospam.com> wrote in message news:e$ni%23PZlEHA.2224@.tk2msftngp13.phx.gbl...
>> Status now is:
>> The ldf file contains all log records since 0.
>> The backup file produced in step 4 contains the log records since 3.
> Right on. The backup file size in step 4 is not the cumulative size of the
> files produced in step 2 and 3.
> Peter Yeoh
> http://www.yohz.com
> Need smaller SQL2K backup files? Use MiniSQLBackup Lite, free!
>
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
> message news:%23GGEXMZlEHA.2948@.TK2MSFTNGP11.phx.gbl...
>> Peter,
>> Interesting. Just so I understand you correctly, a scenario:
>> 0. BACKUP LOG
>> 1. BACKUP DB
>> 2. BACKUP LOG NO_TRUNCATE
>> 3. BACKUP LOG NO_TRUNCATE
>> 4. BACKUP LOG NO_TRUNCATE
>> Status now is:
>> The ldf file contains all log records since 0.
>> The backup file produced in step 4 contains the log records since 3.
>> Is above what you test showed? If so, then my gut feeling about
> NO_TRUNCATE is correct and the name
>> of the parameter is misleading (something I've been trying to communicate
> to MS with little
>> success). The name of the option implies that you can keep doing log
> backups and only apply the
>> latest. I've always explained this option from what it is documented as in
> Books Online, to backup
>> log of a broken database...
>> --
>> Tibor Karaszi, SQL Server MVP
>> http://www.karaszi.com/sqlserver/default.asp
>> http://www.solidqualitylearning.com/
>>
>> "Peter Yeoh" <nospam@.nospam.com> wrote in message
> news:ePpBp9YlEHA.592@.TK2MSFTNGP11.phx.gbl...
>> > Like Tibor, I'm assuming you have consistently backed up the log with
> the
>> > NO_TRUNCATE option (btw, strange thing to do). However, each of the trx
> log
>> > backup will only contain the log data since the last log backup. It
> will
>> > not store whatever log data is in the log file (I'm guessing you use the
>> > NO_TRUNCATE option because you wanted your trx log backups to always
> contain
>> > all log data since the last full backup). Did a quick test to confirm
> this.
>> >
>> > I guess you now have a very large ldf file, and want to be able to
> recover
>> > the data into the mdf file. No idea how this can be done, or even
> possible.
>> >
>> > --
>> > Peter Yeoh
>> > http://www.yohz.com
>> > Need smaller SQL2K backup files? Use MiniSQLBackup Lite, free!
>> >
>> >
>> > "perry" <anonymous@.discussions.microsoft.com> wrote in message
>> > news:7e8801c4957d$bc828ee0$a501280a@.phx.gbl...
>> >>
>> >>
>> >> But what if I have a singel transaction log backup which I
>> >> have set up to notrucate, waht is the possible data
>> >> recovery i can have......will all trasactions prior to
>> >> the last full database backup up to the trasaction backup
>> >> (which is run daily and overwrites it in a single file)
>> >> will be recovered...thanks for the post..your a great
>> >> help...
>> >>
>> >>
>> >> >--Original Message--
>> >> >No. It's not possible nor logical to just attach the trx
>> >> log file to an
>> >> >existing database. If you have made trx log backups
>> >> consistently since the
>> >> >last full backup, you might have a chance of restoring
>> >> your database to its
>> >> >previous state prior to the corruption.
>> >> >
>> >> >--
>> >> >Peter Yeoh
>> >> >http://www.yohz.com
>> >> >Need smaller SQL2K backup files? Use MiniSQLBackup Lite,
>> >> free!
>> >> >
>> >> >
>> >> >"perry" <pramos@.codex-systems.com> wrote in message
>> >> >news:7c8b01c4956b$985122f0$a601280a@.phx.gbl...
>> >> >>
>> >> >> hi,
>> >> >> I have a sql where data files and transaction files are
>> >> >> stored seperatedly... and one time a problem occured
>> >> when
>> >> >> data files become corrupted and the only thing left is
>> >> the
>> >> >> transaction file.... and the databases went offline...
>> >> my
>> >> >> question is can i restore from a full db backup and then
>> >> >> attach the existing transaction files...?
>> >> >
>> >> >
>> >> >.
>> >> >
>> >
>> >
>>
>
DB Restore Time
I have a database backup on one server and I am restoring
this to another server. It is about 75 GB. Is there any
way to find out how much of it has been restored through
SQL Server or any other tool ?
Thanks.
If you are using STATS, you can see the progress...
Thanks
GYK
"Ron" wrote:
> I have a database backup on one server and I am restoring
> this to another server. It is about 75 GB. Is there any
> way to find out how much of it has been restored through
> SQL Server or any other tool ?
> Thanks.
>
|||I am running this through a DTS package (then, as a
scheduled job).
>--Original Message--
>If you are using STATS, you can see the progress...
>Thanks
>GYK
>"Ron" wrote:
restoring
through
>.
>
|||Ron,
I think what GYK is saying is to run something like this:
RESTORE DATABASE MY_DB FROM DISK = 'C:\PATH\MY_DB.BAK' WITH STATS=10
The value for STATS can be any integer between 1 and 100. See Books Online
subject 'RESTORE' under 'Transact-SQL Reference' for more details.
HTH
"Ron" wrote:
> I am running this through DTS package (then, as a
> scheduled job).
>
> restoring
> through
>
|||I understand that, but how am I going to view this?
The script is in the DTS package. I can add that statement
into the DTS package. The DTS package is scheduled to run as a
job. Maybe on the "status" column when executing the DTS
package?
My question is, either executing the DTS package or just
running the job, how am I going to see this visually?
I have tested it in Query Analyzer and it works. I can see
the progress in the "Messages" window.
Thanks.
>--Original Message--
>Ron,
>I think what CYK is saying is to run something like this:
>RESTORE DATABASE MY_DB FROM DISK = 'C:\PATH\MY_DB.BAK'
WITH STATS=10
>The value for STATS can be any integer between 1 and
100. See Books Online
>subject 'RESTORE' under 'Transact-SQL Reference' for more
details.
>HTH
>"Ron" wrote:
any
>.
>
|||Run the RESTORE DATABASE command from Query Analyzer instead of a DTS
package. DTS won't offer an 'interactive' response to the query like QA will.
"Ron" wrote:
> I understand that but how am I going to view this ?
> The script is in the DTS package. I can add that statement
> into the DTS package. DTS package is scheduled to run as a
> job. May be on the "status" column when executing DTS
> package ?
> My question is either executing the DTS package or just
> running the job, how am I going to see this visually ?
> I have test it in the Query analyzer and works. I can see
> the progress in "Messages" window.
> Thanks.
>
> WITH STATS=10
> 100. See Books Online
> details.
> any
>
|||"Mike" <mstuart_spamtastesgood@.gates.com> wrote in message
news:CBDF6AC8-3854-40BD-B7A0-33A36685D6D7@.microsoft.com...
> Run the RESTORE DATABASE command from Query Analyzer instead of a DTS
> package. DTS won't offer an 'interactive' response to the query like QA
will.
>
Right, but it's tough to automate that.
I believe Ron's asking for automatic jobs.
I do this as a routine thing (backup from box 1 and restore to box 2.)
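For what it's worth, one way to keep this automated and still capture the STATS output is to send the job step's messages to a file (job name and paths below are only examples):

```sql
-- Hypothetical SQL Agent job step: the STATS = 10 progress messages
-- end up in the step's output file instead of a QA window.
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Nightly restore',
    @step_name = N'Restore MY_DB',
    @subsystem = N'TSQL',
    @command = N'RESTORE DATABASE MY_DB
FROM DISK = ''C:\PATH\MY_DB.BAK''
WITH REPLACE, STATS = 10',
    @output_file_name = N'C:\PATH\restore_progress.log'
```

You can then open (or tail) restore_progress.log while the job runs to see each "percent processed" message.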
DB restore puzzle
I'd like to write a program to automate the following
time-consuming and tedious task:
A database is backed up in full and pruned daily to keep
one week's worth of data. Now I need to build an archive
type database using all the backup files I have since the
beginning of last year. This archive database contains all
the history data, just like the original database never
gets pruned. How do I do it?
My current plan is to:
1. Loop through the backup files
2. For each backup file, restore it using the "RESTORE DATABASE"
command
3. After each database is restored, load the data into my
archive database
4. Loop to the next backup file
5. When all the backup files have been processed, remove the
duplicates from the archive database.
Although I have not tried it, one problem I can expect to
run into is that I cannot restore the database
in a loop because the prior restored database "is in use".
Does anyone have any suggestion?
Thanks a thousand times in advance,
New Bee
> Although I have not tried, I can expect one problem I am
> going to run into is that I can not restore the database
> in loop because the prior restored database "is in use".
If you don't have a connection yourself in the db and no users have connections to the db, then you
don't have to worry. If you are worried about users having connections, you can use ALTER DATABASE
to set it to single user and the ROLLBACK option to kick out current users.
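A rough sketch of what each pass of the loop could look like (all names here are invented; adjust the MOVE clauses to your own layout):

```sql
-- Kick out any remaining users, then restore the next backup file
-- over the same scratch database.
ALTER DATABASE WorkDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE

RESTORE DATABASE WorkDb
FROM DISK = 'E:\Backups\week01.bak'            -- next file in the loop
WITH REPLACE,
     MOVE 'WorkDb_data' TO 'E:\Data\WorkDb.mdf',
     MOVE 'WorkDb_log'  TO 'E:\Log\WorkDb.ldf'

ALTER DATABASE WorkDb SET MULTI_USER
-- ...then INSERT...SELECT that week's rows into the archive database.
```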
--
Tibor Karaszi, SQL Server MVP
Archive at: http://groups.google.com/groups?oi=djq&as_ugroup=microsoft.public.sqlserver
"New Bee" <anonymous@.discussions.microsoft.com> wrote in message
news:0beb01c397da$ac242320$a301280a@.phx.gbl...
> Hi friends,
> I like to write a program to automate the following time
> consuming and tedious task:
> A database was full backed up and got pruned to keep one
> week worth of data daily. Now I need to build an archive
> type database using all the backup files I have since the
> beginning of last year. This archive database contains all
> the history data, just lile the original database never
> gets pruned. How to do it?
> My current plan is to:
> 1. loop through the backup files
> 2. For each back up file, restore it using "restore DB"
> command
> 3, After each database is restored, load the data into my
> archive database
> 4. loop to next backup file
> 5. When all the back files have been processed, remove the
> duplicate from archive database.
> Although I have not tried, I can expect one problem I am
> going to run into is that I can not restore the database
> in loop because the prior restored database "is in use".
> Does anyone have any suggestion?
> Thanks a thousand times in advance,
> New Bee
>
DB Restore Fiasco
I attempted to restore a database from backup (the same way I've done for
months in the past), but this time I got an error, and the database is marked
as LOADING and is not accessible. Help!
We have about 2 gigs free on the server, and our largest backup file/log is
211 mb.
How do I get my database out of (Loading) status, and restore my data?
The restore T-SQL is:
RESTORE DATABASE [dbname]
FROM DISK = 'e:\mssql7\backup\[filepath].bak'
WITH recovery,
MOVE 'wtc_data' TO 'e:\mssql7\data\[filename].mdf',
MOVE 'wtc_log' TO 'e:\mssql7\log\[filename].ldf'
The Error message is:
Server: Msg 3241, Level 16, State 37, Line 11
The media family on device 'e:\mssql7\backup\[filepath].bak' is incorrectly
formed. SQL Server cannot process this media family.
Server: Msg 3013, Level 16, State 1, Line 11
RESTORE DATABASE is terminating abnormally.
Is this SQL 7? If so, I believe that you would get these messages if you
were running out of space on the disk.
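Before doing anything drastic, it may also be worth checking whether SQL Server can read the backup file at all:

```sql
-- If either of these fails with the same 3241 error, the .bak file
-- itself is damaged (or was written by a newer SQL Server version).
RESTORE HEADERONLY FROM DISK = 'e:\mssql7\backup\[filepath].bak'
RESTORE VERIFYONLY FROM DISK = 'e:\mssql7\backup\[filepath].bak'
```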
"Michael Schreiber" <Michael Schreiber@.discussions.microsoft.com> wrote in
message news:929B70D5-5B15-41EB-82D6-E84629DF8B77@.microsoft.com...
>I attempted to restore a database from backup (the same way I've done for
> months in the past), but this time I got an error, and the database is
> marked
> as LOADING and is not accessible. Help!
> We have about 2 gigs free on the server, and our largest backup file/log
> is
> 211 mb.
> How do I get my database out of (Loading) status, and restore my data?
> The backup T-SQL is:
> RESTORE DATABASE [dbname]
> FROM DISK = 'e:\mssql7\backup\[filepath].bak'
> WITH recovery,
> MOVE 'wtc_data' TO 'e:\mssql7\data\[filename].mdf',
> MOVE 'wtc_log' TO 'e:\mssql7\log\[filename].ldf'
> The Error message is:
> Server: Msg 3241, Level 16, State 37, Line 11
> The media family on device 'e:\mssql7\backup\[filepath].bak' is
> incorrectly
> formed. SQL Server cannot process this media family.
> Server: Msg 3013, Level 16, State 1, Line 11
> RESTORE DATABASE is terminating abnormally.
>
DB Restore
Hi, I am restoring a DB .BAK file which is around 1.8 GB. It is taking a lot of time to complete. Is this expected?
Thanks a lot.
It all depends on your hardware components and system configuration. Generally speaking, it should not take more than 10 minutes for a 2 GB backup file. Also check that the backup file is valid by using:
restore verifyonly from disk='path of the backup file'
Also, I think you should have posted this in the database engine forum, as that seems to be more appropriate for your question.
|||Also, it may be that your data and log files are much larger than 1.8 GB and it's taking a long time to create these files.
The database backup file will be the size of the actual data stored, whereas the files you restore will have the blank pages allocated as well, which could make them much larger.
This is quite a common scenario with the transaction log, which can grow quite large over time but is never physically truncated.
HTH!
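One way to check that second point before restoring is to look at the file sizes recorded inside the backup (path is an example):

```sql
-- The Size column shows how large each mdf/ldf will be on disk after
-- the restore, regardless of how small the .bak file itself is.
RESTORE FILELISTONLY FROM DISK = 'E:\Backup\MyDb.bak'
```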
Thursday, March 8, 2012
DB optimization failure
I received the following error while indexing the entire
database during optimization time set up by the
Maintenance Plan:
[Microsoft SQL-DMO (ODBC SQLState: 42000)] Error 1934:
[Microsoft][ODBC SQL Server Driver][SQL Server]DBCC failed
because the following SET options have incorrect
settings: 'QUOTED_IDENTIFIER, ARITHABORT'
Any suggestions please?
Thanks.
Sang Nahm
The maintenance plan has trouble with certain objects such as computed
columns or indexed views. I suggest you create your own scheduled job to do
the reindexing, and then you can set those settings as needed.
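Something like this in the job step should do it (the table name is just an example):

```sql
-- DBCC against tables with computed columns or indexed views requires
-- these SET options; the maintenance plan doesn't set them.
SET QUOTED_IDENTIFIER ON
SET ARITHABORT ON
DBCC DBREINDEX ('dbo.SomeTable')
```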
--
Andrew J. Kelly
SQL Server MVP
"sang.nahm@.ots.treas.gov" <anonymous@.discussions.microsoft.com> wrote in
message news:095e01c3c8b0$97f28ac0$a501280a@.phx.gbl...
> I received the following error while indexing entire
> database during optimization time set up by the
> Maintenance Plan:
> [Microsoft SQL-DMO (ODBC SQLState: 42000)] Error 1934:
> [Microsoft][ODBC SQL Server Driver][SQL Server]DBCC failed
> because the following SET options have incorrect
> settings: 'QUOTED_IDENTIFIER, ARITHABORT'
> Any suggestions please?
> Thanks.
> Sang Nahm
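Following Andrew's suggestion, a minimal scheduled-job script might set the required options explicitly before reindexing. These are the SET options DBCC needs when computed columns or indexed views are involved; the database and table names below are placeholders:

```sql
-- Maintenance plans don't set these, which is what triggers error 1934
SET QUOTED_IDENTIFIER ON;
SET ARITHABORT ON;
SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET NUMERIC_ROUNDABORT OFF;

USE MyDatabase;
-- Rebuild all indexes on one table (SQL Server 2000 syntax)
DBCC DBREINDEX ('dbo.MyTable');
```

Put one DBCC DBREINDEX per table (or drive it from a cursor over sysobjects) and schedule the script as a SQL Agent job in place of the maintenance plan's optimization step.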
DB Optimization
I scheduled an optimization job to run once weekly and it's taking around 5
hrs. During this time users are getting locked out.
What are the best options to handle this?
We are not able to get a continuous 5 hrs of downtime for the application.
Thx
Sh
Do you mean index rebuilds?
Index rebuilds acquire locks on the table (exclusive for clustered indexes and shared for non-clustered); they do require downtime.
SQL Server 7.0 doesn't have an online index defrag option such as the one provided by SQL Server 2000 (DBCC INDEXDEFRAG).
-- BG, SQL Server MVP
Solid Quality Learning
www.solidqualitylearning.com
"Shamim" <shamim.abdul@.railamerica.com> wrote in message =news:OsxzOzSVDHA.1832@.TK2MSFTNGP09.phx.gbl...
> SQL 7.0
> > I scheduled a Optimization job to run weekly once and its taking =around 5
> hrs .During this time users are getting locked.
> What are all the best options to handle this '
> > We are not able to get a continuous 5 hrs down time for application '
> > Thx
> Sh
> >|||Itzik
An alternative is to use the DBCC command SHOWCONTIG to
get an idea of the fragmentation of your files. It is very
likely that they fragment at different rates. When you get
a feel for how quickly your different tables fragment you
may be able to schedule your index rebuilds in a way that
fits into your available window.
Regards
John|||Thanks Itzik for the reply.
Just want to know how a production-critical environment handles this issue.
Is it like transferring the application to a hot backup, optimizing the production db,
restoring the downtime transaction log, and connecting back?
Thx
Sh
"Itzik Ben-Gan" <itzik@.REMOVETHIS.solidqualitylearning.com> wrote in message
news:eVN192SVDHA.1928@.TK2MSFTNGP12.phx.gbl...
Do you mean index rebuilds?
Index rebuilds acquire locks on the table (exclusive for clustered indexes
and shared for non-clustered); they do require downtime.
SQL Server 7.0 doesn't have an online index defrag option such as the one
provided by SQL Server 2000 (DBCC INDEXDEFRAG).
--
BG, SQL Server MVP
Solid Quality Learning
www.solidqualitylearning.com
"Shamim" <shamim.abdul@.railamerica.com> wrote in message
news:OsxzOzSVDHA.1832@.TK2MSFTNGP09.phx.gbl...
> SQL 7.0
> I scheduled an Optimization job to run weekly once and it's taking around 5
> hrs. During this time users are getting locked.
> What are all the best options to handle this?
> We are not able to get a continuous 5 hrs down time for application?
> Thx
> Sh
>
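John's approach above can be sketched as follows. Scan density near 100% and low logical fragmentation mean a table can be skipped this week, so only the fast-fragmenting tables need to fit in the maintenance window (the table name is a placeholder):

```sql
USE MyDatabase;
-- Report fragmentation for one table (SQL Server 7.0/2000);
-- on 7.0 the command may need the object id rather than the name:
--   DBCC SHOWCONTIG (OBJECT_ID('dbo.MyTable'))
DBCC SHOWCONTIG ('dbo.MyTable');
-- Key numbers in the output:
--   Scan Density ............... close to 100% is good
--   Logical Scan Fragmentation . low single digits is good
```

Run it weekly, record the numbers per table, and rebuild only the tables whose density has dropped; that is usually far less than 5 hours of locking.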
Saturday, February 25, 2012
DB Maintenance Planner - Jobs not running
I set up 12 DB maintenance jobs on 3 separate DB's through the DB Maintenance
Planner - each job has a daily time to run.
All the jobs ran last night EXCEPT:
MSDB db trans log backup / MSDB db Integrity Checks
Master db trans log backup / Master db Integrity Checks
I am assuming that it is not necessary to run these jobs and I should delete
them. Am I right?
Thanks in advance.
bill
Hi,
I agree; since there are not many data changes, it is not required to perform
transaction log backups on the msdb and master databases.
Thanks
Hari
"WCM" <WCM@.discussions.microsoft.com> wrote in message
news:CC358189-EEC8-4FF3-91B0-F91972909ABE@.microsoft.com...
>I set up 12 DB maintenance jobs on 3 separate DB's through the DB
>Maintenance
> Planner - each job has a daily time to run.
> All the jobs ran last night EXCEPT:
> MSDB db trans log backup / MSDB db Integrity Checks
> Master db trans log backup / Master db Integrity Checks
> I am assuming that it is not necessary to run these jobs and I should
> delete
> them. Am I right?
> Thanks in advance.
> bill
|||Hari, thank you for the confirmation. b.
"Hari Prasad" wrote:
> Hi,
> I agree; Since there is no much data changes, it is not required to perform
> transaction log backup on MSDB and Master databases.
> Thanks
> Hari
> "WCM" <WCM@.discussions.microsoft.com> wrote in message
> news:CC358189-EEC8-4FF3-91B0-F91972909ABE@.microsoft.com...
>
>
|||Thank you, Tibor. I was concerned about those particular job failures. I
will set them up again and see if they work. If not, I am told that setting
up a new user (same username in both Windows and Sql Server) with permission
to run jobs may be necessary. Thanks again.
"Tibor Karaszi" wrote:
> But you definitely want to do integrity checks on these databases... :-)
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Hari Prasad" <hari_prasad_k@.hotmail.com> wrote in message
> news:%233r43%23CFHHA.4580@.TK2MSFTNGP05.phx.gbl...
>
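As Tibor notes, the integrity-check jobs are worth keeping even if the log-backup jobs are not. Run manually, they would look like the sketch below; note that master uses the Simple recovery model (and msdb does by default), which is likely why the transaction log backup jobs against them failed in the first place:

```sql
-- Integrity checks on the system databases; these are cheap
-- and catch corruption early
DBCC CHECKDB ('master');
DBCC CHECKDB ('msdb');

-- Full backups are still worthwhile for both; log backups are
-- not possible against a Simple-recovery database
BACKUP DATABASE master TO DISK = 'e:\mssql\backup\master.bak';
BACKUP DATABASE msdb   TO DISK = 'e:\mssql\backup\msdb.bak';
```

The backup paths above are placeholders; point them at wherever your other maintenance-plan backups go.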
Sunday, February 19, 2012
DB locked up - mid restore
SSMS). It claimed to work successfully, but now the DB is offline, and is
marked as "(loading)".
I can't detach it, restore it, or delete it... I can't do anything with it at
all.
Any ideas?
Thanks in advance
Chris
Chris,
For the last backup you restored, did you check 'Leave database
operational. No additional transaction log can be restored'? If you
missed that one, just restore the last transaction log again with that
option on.
Mel|||"MSLam" <MelodySLam@.googlemail.com> wrote in message
news:1144929164.043122.228370@.e56g2000cwe.googlegroups.com...
> Chris,
> The last backup you restored did you check on 'Leave database
> operational. No additional transaction log can be restored'? If you
> missed that one, just restored the last transaction log again with that
> option on.
>
No I didn't forget...
Nor can I restore again, delete, nor detach...|||... try to restart the MSSQL instance and see if the auto-recovery can
pick the database up. If it still can't, it's time to re-create the
database with the latest backup you have.
Mel|||"MSLam" <MelodySLam@.googlemail.com> wrote in message
news:1144943087.458813.10980@.t31g2000cwb.googlegroups.com...
> ... try to restart the MSSQL instance and see if the auto-recovery can
> pick the database up. If still cannot, time to do re-create the
> database with the latest backup you have.
> Mel
>
At one point, the only option I thought I had was to restore from last
night's backup.
However, I eventually managed to release it by using 'Restore... With Recovery'
in QA.
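For reference, the fix Chris describes here, bringing a database out of the "(loading)" state once there are no more logs to apply, is a one-liner in Query Analyzer (the database name is a placeholder):

```sql
-- Completes recovery on a database stuck in the restoring/(loading) state.
-- Only do this when no further transaction logs need to be applied;
-- once recovered, no additional logs can be restored.
RESTORE DATABASE MyDb WITH RECOVERY;
```

This is the T-SQL equivalent of ticking 'Leave database operational' in the GUI restore dialog.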
Friday, February 17, 2012
DB Last Access Times
Would it be safe to use the last modified date of a .MDF file within Windows
Explorer to view the last time anyone utilised a database? I'm trying to
clear unused DB's from a server and am wondering if this is a reliable
method of finding unused data?
Many thanks in advance,
Orb.
Hi
I'd run SQL Server Profiler to collect the info per database and then draw
conclusions. BTW, what version are you using?
"Orbital" <sian.clarke@.newhamhealth.nhs.uk> wrote in message
news:O60yZ6IXGHA.4148@.TK2MSFTNGP03.phx.gbl...
> Hi All,
> Would it be safe to use the last modified date of a .MDF file within
> Windows Explorer to view the last time anyone utilised a database? I'm
> trying to clear unused DB's from a server and am wondering if this is a
> reliable method of finding unused data?
>
> Many thanks in advance,
> Orb.
>|||Hi Uri,
Thanks for your rapid response!
I'm on SQL2000 right now.
The SQL Server Profiler you mention, is this an MS tool? If you would
kindly point me to any documentation/ resources I'd be really grateful!
Many Thanks in advance,
Orb.
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:ePtdY8IXGHA.2356@.TK2MSFTNGP03.phx.gbl...
> Hi
> I'd run SQL Server Profiler to collect the info per database and then
> make a conclusions. BTW , what version are you using?
>
>
> "Orbital" <sian.clarke@.newhamhealth.nhs.uk> wrote in message
> news:O60yZ6IXGHA.4148@.TK2MSFTNGP03.phx.gbl...
>|||> The SQL Server Profiler you mention, is this an MS tool?
Yes, it is.
> kindly point me to any documentation/ resources I'd be really grateful!
Actually, take a look at BOL for some explanations, as well as visiting
http://www.sql-server-performance.c...ofiler_tips.asp
"Orbital" <sian.clarke@.newhamhealth.nhs.uk> wrote in message
news:ecJRPAJXGHA.3760@.TK2MSFTNGP02.phx.gbl...
> Hi Uri,
> Thanks for your rapid response!
> I'm on SQL2000 right now.
> The SQL Server Profiler you mention, is this an MS tool? If you would
> kindly point me to any documentation/ resources I'd be really grateful!
>
> Many Thanks in advance,
> Orb.
> "Uri Dimant" <urid@.iscar.co.il> wrote in message
> news:ePtdY8IXGHA.2356@.TK2MSFTNGP03.phx.gbl...
>|||"Orbital" <sian.clarke@.newhamhealth.nhs.uk> wrote in message
news:O60yZ6IXGHA.4148@.TK2MSFTNGP03.phx.gbl...
> Hi All,
> Would it be safe to use the last modified date of a .MDF file within
Windows
> Explorer to view the last time anyone utilised a database? I'm trying to
> clear unused DB's from a server and am wondering if this is a reliable
> method of finding unused data?
No, absolutely not.
In general the last modified date only shows the last time the server
started up or shut down or the size changed.
While in use, it generally doesn't change.
Now, you can try renaming the file, and it will fail if in use, but that's
not an entirely safe way to do things.
>
> Many thanks in advance,
> Orb.
>|||Uri,
This is superb, thanks for making me aware of this utility.
Many Thanks,
Orb.
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:%23dgh4OJXGHA.4924@.TK2MSFTNGP05.phx.gbl...
> Yes ,it is
>
> Actually , take a look at BOL for some explanations as well as visiting
> at
> http://www.sql-server-performance.c...ofiler_tips.asp
>
>
>
> "Orbital" <sian.clarke@.newhamhealth.nhs.uk> wrote in message
> news:ecJRPAJXGHA.3760@.TK2MSFTNGP02.phx.gbl...
>|||Hello Orbital,
I had exactly the same issue -- multiple databases and many of them
suspected of no usage.
I use Profiler and monitor locks on the suspected databases. The idea behind
this is that every access to a DB involves some kind of lock.
"Greg D. Moore (Strider)" wrote:
> "Orbital" <sian.clarke@.newhamhealth.nhs.uk> wrote in message
> news:O60yZ6IXGHA.4148@.TK2MSFTNGP03.phx.gbl...
> Windows
> No, absolutely not.
> In general the last modified date only shows the last time the server
> started up or shut down or the size changed.
> While in use, it generally doesn't change.
> Now, you can try renaming the file, and it will fail if in use, but that's
> not an entirely safe way to do things.
>
>
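A lightweight complement to tracing locks: on SQL 2000 you can count the current connections per database from master..sysprocesses, which shows whether anything is touching a database right now (though, unlike a trace, it has no history):

```sql
-- Count current connections per database (SQL Server 2000).
-- A database that never shows up here is a retirement candidate,
-- but run this repeatedly over time; it is only a point-in-time snapshot.
SELECT DB_NAME(dbid) AS database_name,
       COUNT(*)      AS connections
FROM master..sysprocesses
WHERE dbid > 0
GROUP BY dbid
ORDER BY database_name;
```

Scheduling this every few minutes and logging the results to a table gives a rough usage history without the overhead of a full Profiler trace.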
>|||Thanks for your reply, Strider. I've had a go at renaming what look to be
old databases - I think October 2003 datestamps should be safe! - and will
reinstate if anyone screams!
Thanks for you help,
Orb.
"Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in message
news:%23p9L0iJXGHA.1192@.TK2MSFTNGP03.phx.gbl...
> "Orbital" <sian.clarke@.newhamhealth.nhs.uk> wrote in message
> news:O60yZ6IXGHA.4148@.TK2MSFTNGP03.phx.gbl...
> Windows
> No, absolutely not.
> In general the last modified date only shows the last time the server
> started up or shut down or the size changed.
> While in use, it generally doesn't change.
> Now, you can try renaming the file, and it will fail if in use, but that's
> not an entirely safe way to do things.
>
>|||Hi Gary,
Thanks for your reply. I've just run a quick trace and this is just what
I'm looking for! My only problem now is relating the DatabaseID to the
actual DB I see in my SQL Enterprise Manager, as the DatabaseName column
somehow remains empty. Does anyone know how to match the two together?
Many Thanks,
Orb.
"Gary" <Gary@.discussions.microsoft.com> wrote in message
news:EDF55C4E-9EDE-4D7E-86E2-8BFCD26806AE@.microsoft.com...
> Hello Orbital,
> I had exactly the same issue -- multiple databases and many of them
> suspected of no usage.
> I use Profiler, and monitor locks on the suspected databases. The idea
> behid
> this is actually every access to Db is some kind of lock.
> "Greg D. Moore (Strider)" wrote:
select name from master..sysdatabases where dbid = <DatabaseID from the trace>
HTH. Ryan
"Orbital" <sian.clarke@.newhamhealth.nhs.uk> wrote in message
news:uicVQbKXGHA.5096@.TK2MSFTNGP03.phx.gbl...
> Hi Gary,
> Thanks for your reply. I've just ran a quick trace and this is just what
> I'm looking for!. My only problem now is relating the DatabaseID to the
> actual DB I see in my SQL Enterprise Manager, as the DatabaseName column
> somehow remains empty? Does anyone know how I match the two together?
>
> Many Thanks,
> Orb.
> "Gary" <Gary@.discussions.microsoft.com> wrote in message
> news:EDF55C4E-9EDE-4D7E-86E2-8BFCD26806AE@.microsoft.com...
>
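To map the DatabaseID column from the trace to a name, either of these works on SQL 2000 (the id 7 below is just an example value):

```sql
-- Translate a single trace DatabaseID to a database name
SELECT DB_NAME(7) AS database_name;

-- ...or list all id/name pairs to match against the trace output
SELECT dbid, name
FROM master..sysdatabases
ORDER BY dbid;
```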