Thursday, March 29, 2012
DBA guru advice - changes to production database
I need to find out how to be able to change a production database, i.e.,
adding a column to a table that will necessitate changes to multiple stored
procedures, without shutting down the application that depends on the
database (or at least shutting it down for the least amount of time
possible). How are 24 X 7 production database changes normally made?
Can anyone explain the procedure to me and/or provide links to reading
material that explains how to do this?
Any help will be greatly appreciated. I am a programmer and these are
entirely new waters for me!
Sandy
Stuff like this is usually done through a documented process. You develop
the scripts in a dev environment and then apply them on a copy of production
in a test environment. If you are adding a NOT NULL column to a production
table, this can take a very long time if you have many millions of rows.
You may have to add the column as NULL and then populate the table
iteratively. This, too, should be tested on a copy of prod first.
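A rough sketch of what that looks like in T-SQL — add the column as NULL, backfill in batches, then tighten the constraint. The table, column, and batch size here are made up for illustration:

```sql
-- Hypothetical table and column; pick a batch size your log can absorb.
ALTER TABLE dbo.Orders ADD Region varchar(20) NULL;
GO

-- Backfill in small batches so no single transaction locks the table for long.
WHILE 1 = 1
BEGIN
    UPDATE TOP (5000) dbo.Orders
    SET Region = 'UNKNOWN'
    WHERE Region IS NULL;

    IF @@ROWCOUNT = 0 BREAK;
END
GO

-- Once every row is populated, enforce NOT NULL.
ALTER TABLE dbo.Orders ALTER COLUMN Region varchar(20) NOT NULL;
```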
Tom
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
..
"Sandy" <Sandy@.discussions.microsoft.com> wrote in message
news:19C65DEA-BF61-4B4D-88D9-67BA62C828A6@.microsoft.com...
Hello -
I need to find out how to be able to change a production database, i.e.,
adding a column to a table that will necessitate changes to multiple stored
procedures, without shutting down the application that depends on the
database (or at least shutting it down for the least amount of time
possible). How are 24 X 7 production database changes normally made?
Can anyone explain the procedure to me and/or provide links to reading
material that explains how to do this?
Any help will be greatly appreciated. I am a programmer and these are
entirely new waters for me!
Sandy
|||"Sandy" <Sandy@.discussions.microsoft.com> wrote in message
news:19C65DEA-BF61-4B4D-88D9-67BA62C828A6@.microsoft.com...
> Hello -
> I need to find out how to be able to change a production database, i.e.,
> adding a column to a table that will necessitate changes to multiple
> stored
> procedures, without shutting down the application that depends on the
> database (or at least shutting it down for the least amount of time
> possible). How are 24 X 7 production database changes normally made?
> Can anyone explain the procedure to me and/or provide links to reading
> material that explains how to do this?
> Any help will be greatly appreciated. I am a programmer and these are
> entirely new waters for me!
> --
> Sandy
|||Thanks for your response, Tom.
Is the copy of the production database in the test environment generally on
a different server?
How do I then update the production server with the changes, after I work
with them on the test server?
I don't understand how I can run Alter Procedure and Alter Table scripts on
a 24 X 7 production server without mega problems, unless I shut down the
server for at least the time it takes for the scripts to run. Can you
clarify a little bit?
Sandy
"Tom Moreau" wrote:
> Stuff like this is usually done through a documented process. You develop
> the scripts in a dev environment and then apply them on a copy of production
> in a test environment. If you are adding a NOT NULL column to a production
> table, this can take a very long time, if you have many millions of rows.
> You may have to add the column as NULL and then populate the table
> iteratively. This, too, should be tested on a copy of prod first.
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> ..
> "Sandy" <Sandy@.discussions.microsoft.com> wrote in message
> news:19C65DEA-BF61-4B4D-88D9-67BA62C828A6@.microsoft.com...
> Hello -
> I need to find out how to be able to change a production database, i.e.,
> adding a column to a table that will necessitate changes to multiple stored
> procedures, without shutting down the application that depends on the
> database (or at least shutting it down for the least amount of time
> possible). How are 24 X 7 production database changes normally made?
> Can anyone explain the procedure to me and/or provide links to reading
> material that explains how to do this?
> Any help will be greatly appreciated. I am a programmer and these are
> entirely new waters for me!
> --
> Sandy
>
|||"Sandy" <Sandy@.discussions.microsoft.com> wrote in message
news:19C65DEA-BF61-4B4D-88D9-67BA62C828A6@.microsoft.com...
> Hello -
> I need to find out how to be able to change a production database, i.e.,
> adding a column to a table that will necessitate changes to multiple
> stored
> procedures, without shutting down the application that depends on the
> database (or at least shutting it down for the least amount of time
> possible). How are 24 X 7 production database changes normally made?
"It depends".
Partly on the scale of the database, the changes to be made, and the
resources available.
Ok, that doesn't help much I know.
Adding a column:
It depends on whether the column has defaults or not. If you add a column
with defaults, then you're almost certainly going to lock the table for some
amount of time you may find unacceptable.
Now, if your stored procedures and queries are written correctly, adding a
column to a table should not be an issue. In other words you don't do
"select *" any place.
Now, for the stored procs, generally I've had "luck" doing what I needed in
a batch. Worst case, though, have a script that sets the database to
single-user mode, executes your script, and sets it back. But watch out for
errors. It can be "bad" to have half your stored procs updated, but not the
other half. ;-)
Another option, if you have a replicated or log-shipped copy of the
database you can use, is to move traffic there, make the changes, and move
back. The problem then is handling transactions that occurred in the
meantime.
> Can anyone explain the procedure to me and/or provide links to reading
> material that explains how to do this?
I don't, unfortunately. In my last job I did this sort of thing all the
time, but I also knew the system pretty well (and still messed up a few
times ;-)
Contact me if you need more advice.
> Any help will be greatly appreciated. I am a programmer and these are
> entirely new waters for me!
> --
> Sandy
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
|||"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:OErxvtacHHA.4984@.TK2MSFTNGP05.phx.gbl...
> Stuff like this is usually done through a documented process. You develop
> the scripts in a dev environment and then apply them on a copy of
> production
> in a test environment. If you are adding a NOT NULL column to a
> production
> table, this can take a very long time, if you have many millions of rows.
> You may have to add the column as NULL and then populate the table
> iteratively. This, too, should be tested on a copy of prod first.
Grr, Tom's right. I meant NOT NULL, not DEFAULT in my post.
And I'll also emphasize the point he made: do this on a copy first if you
can.
Test out your procedure as much as possible.
And have a roll-back plan!
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> .
> "Sandy" <Sandy@.discussions.microsoft.com> wrote in message
> news:19C65DEA-BF61-4B4D-88D9-67BA62C828A6@.microsoft.com...
> Hello -
> I need to find out how to be able to change a production database, i.e.,
> adding a column to a table that will necessitate changes to multiple
> stored
> procedures, without shutting down the application that depends on the
> database (or at least shutting it down for the least amount of time
> possible). How are 24 X 7 production database changes normally made?
> Can anyone explain the procedure to me and/or provide links to reading
> material that explains how to do this?
> Any help will be greatly appreciated. I am a programmer and these are
> entirely new waters for me!
> --
> Sandy
>
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
|||Typically, you don't have a test/pre-prod database on the same physical
server. Rather, you have a separate box, of identical hardware (ideally).
This way, if your testing produces some undesirable effect, it's not
affecting production.
Whatever script you intend to run in prod - including an ALTER TABLE
script - would be run here. Once you are satisfied that it is working
properly, then run the same script in prod.
An ALTER PROC statement can be run in prod (after testing, of course). This
typically takes no time and if there is an error, the ALTER PROC fails and
you have the original proc.
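For instance (procedure and column names invented here), the ALTER PROC batch either compiles and replaces the proc, or fails and leaves the old definition untouched:

```sql
-- If this batch fails to compile, dbo.GetOrders keeps its previous definition.
ALTER PROCEDURE dbo.GetOrders
    @CustomerID int
AS
    SELECT OrderID, OrderDate, Region   -- new column listed explicitly
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
GO
```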
Tom
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
..
"Sandy" <Sandy@.discussions.microsoft.com> wrote in message
news:6B577153-11DD-4DB4-AA7C-3CDFCE9926A7@.microsoft.com...
Thanks for your response, Tom.
Is the copy of the production database in the test environment generally on
a different server?
How do I then update the production server with the changes, after I work
with them on the test server?
I don't understand how I can run Alter Procedure and Alter Table scripts on
a 24 X 7 production server without mega problems, unless I shut down the
server for at least the time it takes for the scripts to run. Can you
clarify a little bit?
Sandy
"Tom Moreau" wrote:
> Stuff like this is usually done through a documented process. You develop
> the scripts in a dev environment and then apply them on a copy of
> production
> in a test environment. If you are adding a NOT NULL column to a
> production
> table, this can take a very long time, if you have many millions of rows.
> You may have to add the column as NULL and then populate the table
> iteratively. This, too, should be tested on a copy of prod first.
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> ..
> "Sandy" <Sandy@.discussions.microsoft.com> wrote in message
> news:19C65DEA-BF61-4B4D-88D9-67BA62C828A6@.microsoft.com...
> Hello -
> I need to find out how to be able to change a production database, i.e.,
> adding a column to a table that will necessitate changes to multiple
> stored
> procedures, without shutting down the application that depends on the
> database (or at least shutting it down for the least amount of time
> possible). How are 24 X 7 production database changes normally made?
> Can anyone explain the procedure to me and/or provide links to reading
> material that explains how to do this?
> Any help will be greatly appreciated. I am a programmer and these are
> entirely new waters for me!
> --
> Sandy
>
|||"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:OVRU8FbcHHA.4888@.TK2MSFTNGP02.phx.gbl...
> Typically, you don't have a test/pre-prod database on the same physical
> server. Rather, you have a separate box, of identical hardware (ideally).
> This way, if your testing produces some undesirable effect, it's not
> affecting production.
To expand upon this, we had a dev environment, a QA, and a prod. And we
sometimes even set up a "staging" environment.
Dev code could be all over the place. QA typically mirrored Prod, or real
close to it. (We'd track differences).
Staging was used for major changes or ones we expected to be more difficult.
Typically we'd take a log-shipped copy of the Production environment. That
way we'd know we were dealing with up to date data AND the same amount of
data. (Dev, QA were typically far smaller data sets.)
> Whatever script you intend to run in prod - including an ALTER TABLE
> script - would be run here. Once you are satisfied that it is working
> properly, then run the same script in prod.
> An ALTER PROC statement can be run in prod (after testing, of course).
> This
> typically takes no time and if there is an error, the ALTER PROC fails and
> you have the original proc.
Ayup. Typically in my experience, the biggest problem was getting my ALTER
PROC statement to run.
Often it was blocked by 100s of calls to the existing one that had been made
before it.
We typically created a change plan.
In it we'd detail the changes to be made (and often why).
The plan to be followed.
The tests to make sure that it succeeded.
And very importantly, the "what if something goes wrong?" plan.
Sometimes that was trivial; sometimes it was fairly complex.
If it was a single procedure and the "what if" was "it didn't get applied,"
then we'd do nothing.
Or worse: the change was applied, but the new code, while syntactically
correct, was much slower than we could afford it to be. What would we do
then?
The big question really comes down to how long the alter table scripts will
run.
Another option (which does NOT really work in a replicated environment) is
to create a NEW table with all the data and fields you want, and then
rename the old one and then the new one.
This can be useful in some cases, but not all.
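A sketch of that rename trick, with made-up object names. Note that rows changed in the original table while the copy runs are not carried over, which is one reason it does not fit every case:

```sql
-- Build the replacement table offline, including the new column.
SELECT ItemID, ItemName, CAST(NULL AS varchar(20)) AS NewCol
INTO dbo.Items_new
FROM dbo.Items;

-- Swap the names in one short transaction.
BEGIN TRANSACTION;
EXEC sp_rename 'dbo.Items', 'Items_old';
EXEC sp_rename 'dbo.Items_new', 'Items';
COMMIT TRANSACTION;
```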
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> .
> "Sandy" <Sandy@.discussions.microsoft.com> wrote in message
> news:6B577153-11DD-4DB4-AA7C-3CDFCE9926A7@.microsoft.com...
> Thanks for your response, Tom.
> Is the copy of the production database in the test environment generally
> on
> a different server?
> How do I then update the production server with the changes, after I work
> with them on the test server?
> I don't understand how I can run Alter Procedure and Alter Table scripts
> on
> a 24 X 7 production server without mega problems, unless I shut down the
> server for at least the time it takes for the scripts to run. Can you
> clarify a little bit?
>
> --
> Sandy
>
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
|||Thanks for responding, Greg -
>In other words you don't do "select *" any place.
What do you mean by the above?
What do you mean by: ". . . have a script that sets the database to single
user mode, executes your script and sets it back"?
Also, what is your best suggestion for a "rollback plan"?
Any help is greatly appreciated!
Sandy
"Greg D. Moore (Strider)" wrote:
> "Sandy" <Sandy@.discussions.microsoft.com> wrote in message
> news:19C65DEA-BF61-4B4D-88D9-67BA62C828A6@.microsoft.com...
> "It depends".
> Partly on the scale of the database, the changes to be made, and the
> resources available.
> Ok, that doesn't help much I know.
> Adding a column:
> Depends on if the column has defaults or not. If you add a column with
> defaults, then you're almost certainly going to lock the table for some
> amount of time you may find unacceptable.
> Now, if your stored procedures and queries are written correctly, adding a
> column to a table should not be an issue. In other words you don't do
> "select *" any place.
> Now, for the stored procs, generally I've had "luck" doing what I needed in
> a batch. Worse case though, have a script that sets the database to single
> user mode, executes your script and sets it back. But watch out for errors.
> It can be "bad" to have 1/2 your stored procs update, but not the other 1/2.
> ;-)
>
> Another option is if you have a replicated copy or log-shipped copy of the
> database you can use, is move traffic there, make changes and move back.
> The problem then is handling transactions that occurred in the meantime.
>
> I don't unfortunately. In my last job I did this sort of thing all the
> time, but also knew the system pretty well (but still messed up a few times
> ;-)
> Contact me if you need more advice.
>
> --
> Greg Moore
> SQL Server DBA Consulting Remote and Onsite available!
> Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
>
>
|||"Sandy" <Sandy@.discussions.microsoft.com> wrote in message
news:A0393922-029D-4B8D-AAD9-D79F8547556A@.microsoft.com...
> Thanks for responding, Greg -
>
Anytime you do a select in production code, you want to explicitly list
the columns you are selecting.
Some programmers, being lazy, will write a select as:
SELECT * FROM tablename where Foo='bar'
This is wrong on several levels:
1) It probably brings back too much data, which is slower and consumes
bandwidth, etc.
2) It assumes the order in which columns are returned is constant, which is
safe... 99% of the time. But not 100%.
3) It assumes the number of columns is fixed. So if you add a column to the
underlying table, the query now returns one more column, which the calling
code now has to deal with. If you specify columns, you only need to update
the selects that need the extra column.
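So instead of the lazy form, the safe version names each column (column names here are hypothetical):

```sql
-- Adding a column to tablename later does not change what this returns.
SELECT Foo, Baz, Qux
FROM tablename
WHERE Foo = 'bar';
```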
> What do you mean by the above?
> What do you mean by: ". . . have a script that sets the database to
> single
> user mode, executes your script and sets it back"?
ALTER DATABASE testing SET SINGLE_USER WITH ROLLBACK IMMEDIATE
<your scripts here>
ALTER DATABASE testing SET MULTI_USER
That simply assures no one else is doing anything while your scripts run.
> Also, what is your best suggestion for a "rollback plan"?
Planning :-)
Seriously, this is a fairly complex question and depends a lot on your
situation. It's not something I can answer too easily over a newsgroup.
But basically think about what can go wrong and how to mitigate the
problems.
Create a script to add your columns and update your stored procs.
Create a script to REMOVE your columns and restore your old stored procs.
Just in case.
Stuff like that.
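As a minimal example of such a script pair (the column name is invented):

```sql
-- Forward script: apply the change.
ALTER TABLE dbo.Items ADD Discontinued bit NULL;
GO

-- Rollback script: kept ready and tested, run only if the change must be undone.
ALTER TABLE dbo.Items DROP COLUMN Discontinued;
GO
```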
> Any help is greatly appreciated!
> --
> Sandy
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
DBA for multiple servers
Does anybody have experience DBA'ing more than 100 SQL Servers at a time?
If so, what were the main strategies and main challenges you had? What is
your best practice in terms of day-to-day admin and backup and
recovery?
soalvajavab1@.yahoo.com wrote:
> Is anybody has experience to DBA more than 100 SQl Servers at a time?
> If so what are the main startegy and main challenges you had? what is
> your best practice in terms of day to day admin and backup and
> recovery?
>
I don't think the tasks are that much different whether you have 5 or 100
SQL servers, so could you be a bit more specific about what you'd like to
know? From my point of view it's quite important that the backup jobs
work. I've written my own scripts that automatically back up all
databases on a SQL server, so once these have been scheduled they just
notify me when they fail. You'd then also like to monitor the free
space on the disks on the server, but the company most likely
has a monitoring tool for that.
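A simplified sketch of such a backup script — a real one would add date stamps, retention, and error handling, and the backup path here is an assumption:

```sql
-- Back up every database on the instance except tempdb (SQL Server 2005).
DECLARE @name sysname, @sql nvarchar(500);

DECLARE dbs CURSOR FOR
    SELECT name FROM sys.databases WHERE name <> 'tempdb';

OPEN dbs;
FETCH NEXT FROM dbs INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'BACKUP DATABASE [' + @name +
               N'] TO DISK = ''D:\Backup\' + @name + N'.bak''';
    EXEC (@sql);
    FETCH NEXT FROM dbs INTO @name;
END
CLOSE dbs;
DEALLOCATE dbs;
```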
In terms of what more you'd like to do is a matter of personal
preference and company policy. E.g. I know that many people don't like
to have their databases set to autogrow, but I normally do that for most
databases. Currently I'm looking after approx. 1000 SQL server databases
and for me it works fine to have them set to autogrow. I've then created
a stored proc that runs every night on most of the servers and collect a
number of data for each database. This tells me when there has been
created new databases, when a database has been deleted, file growth,
backup file growth, recovery mode change etc. I then get an email every
morning with the changes.
Apart from the above, there are a number of database monitoring tools
available on the market. I haven't actually used any of them yet but I
think they can do some of what I've done myself (..and of course a lot
more...). Maybe somebody else has some experience on this?
The bottom line is that there isn't a standard answer to what you need to
look for; it pretty much depends on the setup and your company policy. I
also think that the DBA role can be quite different from one company to
another.
Regards
Steen Schlüter Persson
Database Administrator / System Administrator|||I think that with that number-- you'll just need to follow these guidelines:
a) use profiler on all machines
b) run index tuning wizard on your busiest databases
c) build poor-man's enterprise-level performance aggregation tools
d) encourage people to use Reporting Services and Analysis Services in order
to leverage this mountain of data
<soalvajavab1@.yahoo.com> wrote in message
news:1177946795.389483.51580@.y80g2000hsf.googlegroups.com...
> Is anybody has experience to DBA more than 100 SQl Servers at a time?
> If so what are the main startegy and main challenges you had? what is
> your best practice in terms of day to day admin and backup and
> recovery?
Thursday, March 22, 2012
db theory advice multiple categories per item
I know there isn't a single rule; I am just looking for a solution. How should the database structure be built with this in mind, starting out with what is listed below? Maybe an attributes table?
Items
ItemID ItemName CatagoryID
Catagories
CatagoryID CatagoryName
If items can indeed be in more than one category, your coworker was probably right. That would be something like:
Items
ItemID ItemName
Catagories
CatagoryName
ItemCategoryMatrix
ItemID
CategoryName
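As DDL, the three tables above might look like this (data types are guesses; the junction table's primary key prevents duplicate item/category pairs):

```sql
CREATE TABLE Items (
    ItemID   int PRIMARY KEY,
    ItemName varchar(100) NOT NULL
);

CREATE TABLE Catagories (
    CatagoryName varchar(50) PRIMARY KEY
);

CREATE TABLE ItemCategoryMatrix (
    ItemID       int REFERENCES Items (ItemID),
    CategoryName varchar(50) REFERENCES Catagories (CatagoryName),
    PRIMARY KEY (ItemID, CategoryName)
);
```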
If it can only be in one category, your solution would be better. You might want to have a retail category and a search category or something. It just depends on what the business needs are.|||I guess my question would be this: if I asked 1000 retail stores/online ecommerce sites how they built their databases, which one would they use? What percent would use the matrix you described and what percent would use one category per item?
Let me throw in the fact that the co-worker is not the best database programmer, so what he did might not be necessary...|||An awful lot depends on the industry/target market, and the complexity of the web site. If you have an auto-parts store, odds are good you can deal with one category per item. If you have Wal-Mart, you need to support multiple categories per item.
While any business can use a "single inheritance" tree with only one category per item (leaf node), as the business gets more complex and wants to reach more customers, the pressure increases to support more complex relationships using something like the link table "matrix" or something like it.
-PatP|||Your coworker is proactively thinking about functionality and scalability in the design. He sounds like a damn-fine database programmer to me. Tell 'im the blindman said so.|||and even if you're sure that each item belongs to only one category, it is still advisable to implement the many-to-many relationship table that derrick suggested
you can populate it with one-to-many data easily -- in fact, if you wish, you can even add a constraint to enforce the rule that an item can belong to only one category
but if it ever has to change, you simply allow multiple rows, and very few other changes (e.g. to queries) will be required
whereas with the initial design, with the category foreign key right in the item record, the change is extremely disruptive|||Yeah! Tell 'im Rudy the Canadian Dude says so too. We have international agreement now.
Good enough?
Helluva lot easier than calling up 1000 retail stores! As if THEY knew how to design databases...
Monday, March 19, 2012
DB Restore from multiple files
I've got a pretty large database that I normally back up to 6 files and
restore to another database. I've only done this through Enterprise
Manager, so it's just point and click. But I was wondering if anyone knew
what the command-line version of this would be? Would you use "stripe"?
I'm trying to script the backup and restore part. Any help would be greatly
appreciated.
-FS
Hello,
See RESTORE DATABASE command in books online. Use the below sample.
RESTORE DATABASE DBNAME
FROM DISK = 'C:\MSSQL\BACKUP\DBA_db_BKUP_BAK1.BAK',
     DISK = 'D:\MSSQL\MSSQL\BACKUP\DBA_db_BKUP_BAK2.BAK',
     DISK = 'E:\MSSQL\MSSQL\BACKUP\DBA_db_BKUP_BAK3.BAK'
WITH STATS = 1
Hari
"FS" <FS@.discussions.microsoft.com> wrote in message
news:28DD5BE8-5B09-4262-81FE-B3C6CA5FB636@.microsoft.com...
> Hi,
> I've got a pretty large database that i normally backup to 6 files and
> restore it to another database. I've only done this through enterprise
> manager so it's just point and click. But i was wondering if anyone knew
> what
> the command line version of this would be? Would you use "stripe" ? Trying
> to
> script the backup and restore part. Any help would be greatly appreciated.
> -FS
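For the backup side of the same operation, striping one backup across multiple files uses the same kind of DISK list (paths and file names below are examples):

```sql
BACKUP DATABASE DBNAME
TO DISK = 'C:\MSSQL\BACKUP\DBNAME_1.BAK',
   DISK = 'D:\MSSQL\BACKUP\DBNAME_2.BAK',
   DISK = 'E:\MSSQL\BACKUP\DBNAME_3.BAK'
WITH STATS = 10
```

Every file in the stripe set must then be listed in the matching RESTORE.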
Sunday, March 11, 2012
DB Replication or Table Replication via triggers?
I am involved in a scenario where there is a huge (SQL Server 2005)
production database containing tables that are updated multiple times
per second. End-user reports need to be generated against the data in
this database, and so the powers-that-be came to the conclusion that a
reporting database is necessary in order to offload report processing
from production; of course, this means that data will have to be
replicated to the reporting database. However, we do not need all of
the data in the production database, and perhaps filtering criteria
can be established where only certain rows are replicated over to the
reporting database as they're inserted (and possibly updated/deleted).
The current thought process is that the programmers designing the
queries/reports will know exactly what data they need from production
and be able to modify the replication criteria as needed. For example,
programmer A might write a report where the data he needs can be
expressed in a simple replication criteria for table T where column X
= "WOOD" and column Y = "MAHOGANY". Programmer B might come along a
month later and write a report that relies on the same table T where
column X = "METAL" and column Z in (12, 24, 36). Programmer B will
have to modify Programmer A's replication criteria in such a way as to
accommodate both reports, in this case something like "Copy rows from
table T where (col X = "WOOD" and col Y = "MAHOGANY") or (col X =
"METAL" and col Z in (12, 24, 36))". The example I gave is really
trivial of course but is sufficient to give you an idea of what the
current thought-process is.
I assume that this is a requirement that many of you may have
encountered in the past and I am wondering what solutions you were
able to come up with. Personally, I believe that the above method is
prone to error (in this case the use of triggers to specify
replication criteria) and I'd much rather use replication services to
copy tables in their entirety. However, this does not seem to be an
option in my case due to the sheer size of certain tables. Is there
anything out there that performs replication based on complex
programmer defined criteria? Are triggers a viable alternative? Any
alternative out-of-the-box solutions?
Any feedback would be appreciated.
Regards!
Anthony
|||I'd recommend transactional replication with a nosync initialization. This
is where the initial setup on the reporting server is achieved by using a
restore of the database and after that, only subsequent changes are sent
down. If you are using SQL Server 2005, greater concurrency can be achieved
by using the read committed snapshot isolation level.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
|||Hello Paul,
Thank you for your response. I will look into SQL Server 2005's
replication and static row level filtering; are there any books or web-
sites you might recommend? I will need to be able to set up and modify
the row-filter criteria programmatically, and the reviews at Barnes and
Noble on "Pro SQL Server 2005 Replication" are pretty dismal.
Regards,
Anthony
On Apr 17, 3:09 pm, "Paul Ibison" <Paul.Ibi...@.Pygmalion.Com> wrote:
Quote:
Originally Posted by
I'd recommend transactional replication with a nosync initialization. This
is where the initial setup on the reporting server is achieved by using a
restore of the database and after that, only subsequent changes are sent
down. If you are using SQL Server 2005, greater concurrency can be achieved
by using the read committed snapshot isolation level.
Cheers,
Paul Ibison SQL Server MVP,www.replicationanswers.com
|||Have a look at Hilary's book for snapshot and transactional, but if you're
after more merge info and don't like the Pro book then it's really BOL that
you need and then doing some scenarios for yourself to gain experience. As
for websites, I have some useful info on the site below and there are other
articles out on the various SQL Server sites you can get by googling, but
nothing I think specific to your requirements.
BTW this doesn't really lend itself to modifying the filters dynamically. At
least this is not as straightforward as you might think. Normally the
partitions are well designed to start with. If you want something more
dynamic, then I'd not filter at all in replication and I'd use filters on
the client application instead.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
|||Hello Paul,
On Apr 17, 6:26 pm, "Paul Ibison" <Paul.Ibi...@.Pygmalion.Com> wrote:
Quote:
Originally Posted by
Have a look at Hilary's book for snapshot and transactional, but if you're
after more merge info and don't like the Pro book then it's really BOL that
you need and then doing some scenarios for yourself to gain experience. As
for websites, I have some useful info on the site below and there are other
articles out on the various SQL Server sites you can get by googling, but
nothing I think specific to your requirements.
Will do.
Quote:
Originally Posted by
BTW this doesn't really lend itself to modifying the filters dynamically. At
least this is not as straightforward as you might think. Normally the
partitions are well designed to start with. If you want something more
dynamic, then I'd not filter at all in replication and I'd use filters on
the client application instead.
Cheers,
Paul Ibison SQL Server MVP,www.replicationanswers.com
Ahhh... then that's a problem; I'd definitely need the ability to
programmatically and dynamically change the filtering criteria
as the need arises, in this case every time a new report is requested
that needs a subset of data not being captured by the replication
process. You would think that this is such a common scenario... Also,
filtering on the client side is not an option either since that would
mean that all of the data would get replicated to the reporting db. I
could have sworn that I read in msdn that the filters could be changed
via stored procs though... I'll have to look that up.
Thanks for your help Paul!
Anthony
|||Anthony Paul wrote:
Quote:
Originally Posted by
I am involved in a scenario where there is a huge (SQL Server 2005)
production database containing tables that are updated multiple times
per second. End-user reports need to be generated against the data in
this database, and so the powers-that-be came to the conclusion that a
reporting database is necessary in order to offload report processing
from production; of course, this means that data will have to be
replicated to the reporting database. However, we do not need all of
the data in the production database, and perhaps filtering criteria
can be established where only certain rows are replicated over to the
reporting database as they're inserted (and possibly updated/deleted).
The current thought process is that the programmers designing the
queries/reports will know exactly what data they need from production
and be able to modify the replication criteria as needed. For example,
programmer A might write a report where the data he needs can be
expressed in a simple replication criteria for table T where column X
= "WOOD" and column Y = "MAHOGANY". Programmer B might come along a
month later and write a report that relies on the same table T where
column X = "METAL" and column Z in (12, 24, 36). Programmer B will
have to modify Programmer A's replication criteria in such a way as to
accommodate both reports, in this case something like "Copy rows from
table T where (col X = "WOOD" and col Y = "MAHOGANY") or (col X =
"METAL" and col Z in (12, 24, 36))". The example I gave is really
trivial of course but is sufficient to give you an idea of what the
current thought-process is.
>
I assume that this is a requirement that many of you may have
encountered in the past and I am wondering what solutions you were
able to come up with. Personally, I believe that the above method is
prone to error (in this case the use of triggers to specify
replication criteria) and I'd much rather use replication services to
copy tables in their entirety. However, this does not seem to be an
option in my case due to the sheer size of certain tables. Is there
anything out there that performs replication based on complex
programmer defined criteria? Are triggers a viable alternative? Any
alternative out-of-the-box solutions?
Is it possible to create views, then configure things so that just
those views are replicated as tables on the second server?
|||Hello Ed,
That's a very good idea, if it turns out that the filter isn't
dynamically configurable then perhaps using a view as a filter and
replicating the view can compensate since views can be modified at any
time. However, I doubt that replication can be done on a view rather
than on a table. I'll have to check it out!
Regards,
Anthony
On Apr 17, 9:46 pm, Ed Murphy <emurph...@.socal.rr.com> wrote:
Quote:
Originally Posted by
Anthony Paul wrote:
Quote:
Originally Posted by
I am involved in a scenario where there is a huge (SQL Server 2005)
production database containing tables that are updated multiple times
per second. End-user reports need to be generated against the data in
this database, and so the powers-that-be came to the conclusion that a
reporting database is necessary in order to offload report processing
from production; of course, this means that data will have to be
replicated to the reporting database. However, we do not need all of
the data in the production database, and perhaps filtering criteria
can be established where only certain rows are replicated over to the
reporting database as they're inserted (and possibly updated/deleted).
The current thought process is that the programmers designing the
queries/reports will know exactly what data they need from production
and be able to modify the replication criteria as needed. For example,
programmer A might write a report where the data he needs can be
expressed in a simple replication criteria for table T where column X
= "WOOD" and column Y = "MAHOGANY". Programmer B might come along a
month later and write a report that relies on the same table T where
column X = "METAL" and column Z in (12, 24, 36). Programmer B will
have to modify Programmer A's replication criteria in such a way as to
accommodate both reports, in this case something like "Copy rows from
table T where (col X = "WOOD" and col Y = "MAHOGANY") or (col X =
"METAL" and col Z in (12, 24, 36))". The example I gave is really
trivial of course but is sufficient to give you an idea of what the
current thought-process is.
>
Quote:
Originally Posted by
I assume that this is a requirement that many of you may have
encountered in the past and I am wondering what solutions you were
able to come up with. Personally, I believe that the above method is
prone to error (in this case the use of triggers to specify
replication criteria) and I'd much rather use replication services to
copy tables in their entirety. However, this does not seem to be an
option in my case due to the sheer size of certain tables. Is there
anything out there that performs replication based on complex
programmer defined criteria? Are triggers a viable alternative? Any
alternative out-of-the-box solutions?
>
Is it possible to create views, then configure things so that just
those views are replicated as tables on the second server?
|||I just finished looking up using indexed views versus a filter for
replication and it turns out that the view is much slower than a
filter (about 3x as slow) because the log reader has to log each
transaction twice, once for the view and once for the table. In my
case performance is of utmost concern so the overhead involved in this
is not something they can live with. Sigh...
Quote:
Originally Posted by
Is it possible to create views, then configure things so that just
those views are replicated as tables on the second server?
|||On Apr 18, 12:24 am, Anthony Paul <anthonypa...@.gmail.com> wrote:
Quote:
Originally Posted by
Hello Paul,
>
On Apr 17, 6:26 pm, "Paul Ibison" <Paul.Ibi...@.Pygmalion.Com> wrote:
>
Quote:
Originally Posted by
Have a look at Hilary's book for snapshot and transactional, but if you're
after more merge info and don't like the Pro book then it's really BOL that
you need and then doing some scenarios for yourself to gain experience. As
for websites, I have some useful info on the site below and there are other
articles out on the various SQL Server sites you can get by googling, but
nothing I think specific to your requirements.
>
Will do.
>
Quote:
Originally Posted by
BTW this doesn't really lend itself to modifying the filters dynamically. At
least this is not as straightforward as you might think. Normally the
partitions are well designed to start with. If you want something more
dynamic, then I'd not filter at all in replication and I'd use filters on
the client application instead.
Cheers,
Paul Ibison SQL Server MVP,www.replicationanswers.com
>
Ahhh... then that's a problem; I'd definitely need the ability to
programmatically and dynamically change the filtering criteria
as the need arises, in this case every time a new report is requested
that needs a subset of data not being captured by the replication
process. You would think that this is such a common scenario... Also,
filtering on the client side is not an option either since that would
mean that all of the data would get replicated to the reporting db. I
could have sworn that I read in msdn that the filters could be changed
via stored procs though... I'll have to look that up.
>
Thanks for your help Paul!
>
Anthony
This may sound like a stupid question, but are you sure replicating
the whole database isn't an option? I know you've described the
database as huge, but one man's huge is another man's insignificant (or
the other way around).
It just sounds like you're putting in a lot of work when you may be
able to keep it simple. Apologies if this is a path you've already
worn smooth, just wondering what has made you sure that bog standard
replication isn't the way to go.
Damien
|||Hello Damien,
I would *love* to have a full replication going and not have to worry
about the added complexity of creating dynamic filters or triggers,
but the powers that be simply do not consider it an option. Since I'm
not the one that makes the decisions, I can only go by whatever
options are available. They want ONLY a subset of data to be captured,
nothing more. That would be fine with me if the filter was static (i.e.,
not subject to change every time a new report is requested) but given
the requirements I am in the same camp as you that a full replication
would be best.
Regards,
Anthony
Quote:
Originally Posted by
This may sound like a stupid question, but are you sure replicating
the whole database isn't an option? I know you've described the
database as huge, but one man's huge is another man's insignificant (or
the other way around).
>
It just sounds like you're putting in a lot of work when you may be
able to keep it simple. Apologies if this is a path you've already
worn smooth, just wondering what has made you sure that bog standard
replication isn't the way to go.
>
Damien
Quote:
Originally Posted by
I would *love* to have a full replication going and not have to worry
about the added complexity of creating dynamic filters or triggers,
but the powers that be simply do not consider it an option. Since I'm
not the one that makes the decisions, I can only go by whatever
options are available. They want ONLY a subset of data to be captured,
nothing more. That would be fine with me if the filter was static (i.e.,
not subject to change every time a new report is requested) but given
the requirements I am in the same camp as you that a full replication
would be best.
TPTB may start considering it an option if you give them a cost
analysis, depending on whether their previous motivation was "wouldn't
it be nice if" (yes, but) or "we think this is cheaper" (no it isn't)
or "this is required for security reasons" (ugh, okay) or whatever.
|||Anthony Paul (anthonypaulo@.gmail.com) writes:
Quote:
Originally Posted by
I would *love* to have a full replication going and not have to worry
about the added complexity of creating dynamic filters or triggers,
but the powers that be simply do not consider it an option. Since I'm
not the one that makes the decisions, I can only go by whatever
options are available. They want ONLY a subset of data to be captured,
nothing more. That would be fine with me if the filter was static (i.e.,
not subject to change every time a new report is requested) but given
the requirements I am in the same camp as you that a full replication
would be best.
From my meager experience of replication, it seems clear that the database
has to be really huge - several terabytes - to make a dynamic filtering
defensible from a cost perspective. It would be difficult to develop,
difficult to maintain and manage.
The only serious option I see to full replication is a static subset.
That is, define what will be supported in replication V1. If a new requirement
is not covered, it would have to wait for V2. The idea would of course be
to only strip really big stuff with a low probability of being included.
And this is what you should tell the powers that be: replicating the entire
database will be far less expensive than changing what is replicated
dynamically.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
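To make the "static subset" concrete: in transactional replication the subset is expressed as a row filter on each published article. A minimal sketch, reusing the thread's WOOD/METAL example as the filter (the publication and article names here are hypothetical):

```sql
-- Hypothetical filtered transactional publication; the @filter_clause
-- plays the role of the "replication criteria" discussed above.
EXEC sp_addpublication
    @publication = N'ReportingPub',
    @repl_freq   = N'continuous';

EXEC sp_addarticle
    @publication   = N'ReportingPub',
    @article       = N'T',
    @source_object = N'T',
    @filter_clause = N'(X = ''WOOD'' AND Y = ''MAHOGANY'')
                       OR (X = ''METAL'' AND Z IN (12, 24, 36))';
```

Changing the filter later (via sp_changearticle or sp_articlefilter) is possible, but it generally forces the article to be re-snapshotted and the subscription reinitialized, which is why a stable V1/V2 filter as suggested above tends to be far cheaper than ad-hoc per-report changes.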