Thursday, March 29, 2012
DBA guru advice - changes to production database
I need to find out how to be able to change a production database, i.e.,
adding a column to a table that will necessitate changes to multiple stored
procedures, without shutting down the application that depends on the
database (or at least shutting it down for the least amount of time
possible). How are 24 X 7 production database changes normally made?
Can anyone explain the procedure to me and/or provide links to reading
material that explains how to do this?
Any help will be greatly appreciated. I am a programmer and these are
entirely new waters for me!
Sandy
Stuff like this is usually done through a documented process. You develop
the scripts in a dev environment and then apply them on a copy of production
in a test environment. If you are adding a NOT NULL column to a production
table, this can take a very long time if you have many millions of rows.
You may have to add the column as NULL and then populate the table
iteratively. This, too, should be tested on a copy of prod first.
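For a rough idea of what Tom describes, a sketch along these lines is common (table and column names are made up; test on a copy of prod first):

```sql
-- Hypothetical example: add the column as NULL, then backfill in small
-- batches so no single transaction locks the table for long (SQL 2000 style).
ALTER TABLE dbo.Orders ADD Region varchar(10) NULL
GO
SET ROWCOUNT 10000          -- cap each UPDATE at 10,000 rows
WHILE 1 = 1
BEGIN
    UPDATE dbo.Orders SET Region = 'UNKNOWN' WHERE Region IS NULL
    IF @@ROWCOUNT = 0 BREAK -- nothing left to backfill
END
SET ROWCOUNT 0              -- restore normal row-limit behavior
```

Only after the backfill completes (and the application is updated) would you consider altering the column to NOT NULL, in its own maintenance window.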
Tom
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
..
"Sandy" <Sandy@.discussions.microsoft.com> wrote in message
news:19C65DEA-BF61-4B4D-88D9-67BA62C828A6@.microsoft.com...
Hello -
I need to find out how to be able to change a production database, i.e.,
adding a column to a table that will necessitate changes to multiple stored
procedures, without shutting down the application that depends on the
database (or at least shutting it down for the least amount of time
possible). How are 24 X 7 production database changes normally made?
Can anyone explain the procedure to me and/or provide links to reading
material that explains how to do this?
Any help will be greatly appreciated. I am a programmer and these are
entirely new waters for me!
Sandy
|||"Sandy" <Sandy@.discussions.microsoft.com> wrote in message
news:19C65DEA-BF61-4B4D-88D9-67BA62C828A6@.microsoft.com...
> Hello -
> I need to find out how to be able to change a production database, i.e.,
> adding a column to a table that will necessitate changes to multiple
> stored
> procedures, without shutting down the application that depends on the
> database (or at least shutting it down for the least amount of time
> possible). How are 24 X 7 production database changes normally made?
> Can anyone explain the procedure to me and/or provide links to reading
> material that explains how to do this?
> Any help will be greatly appreciated. I am a programmer and these are
> entirely new waters for me!
> --
> Sandy
|||Thanks for your response, Tom.
Is the copy of the production database in the test environment generally on
a different server?
How do I then update the production server with the changes, after I work
with them on the test server?
I don't understand how I can run Alter Procedure and Alter Table scripts on
a 24 X 7 production server without mega problems, unless I shut down the
server for at least the time it takes for the scripts to run. Can you
clarify a little bit?
Sandy
"Tom Moreau" wrote:
> Stuff like this is usually done through a documented process. You develop
> the scripts in a dev environment and then apply them on a copy of production
> in a test environment. If you are adding a NOT NULL column to a production
> table, this can take a very long time, if you have many millions of rows.
> You may have to add the column as NULL and then populate the table
> iteratively. This, too, should be tested on a copy of prod first.
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> ..
> "Sandy" <Sandy@.discussions.microsoft.com> wrote in message
> news:19C65DEA-BF61-4B4D-88D9-67BA62C828A6@.microsoft.com...
> Hello -
> I need to find out how to be able to change a production database, i.e.,
> adding a column to a table that will necessitate changes to multiple stored
> procedures, without shutting down the application that depends on the
> database (or at least shutting it down for the least amount of time
> possible). How are 24 X 7 production database changes normally made?
> Can anyone explain the procedure to me and/or provide links to reading
> material that explains how to do this?
> Any help will be greatly appreciated. I am a programmer and these are
> entirely new waters for me!
> --
> Sandy
>
|||"Sandy" <Sandy@.discussions.microsoft.com> wrote in message
news:19C65DEA-BF61-4B4D-88D9-67BA62C828A6@.microsoft.com...
> Hello -
> I need to find out how to be able to change a production database, i.e.,
> adding a column to a table that will necessitate changes to multiple
> stored
> procedures, without shutting down the application that depends on the
> database (or at least shutting it down for the least amount of time
> possible). How are 24 X 7 production database changes normally made?
"It depends".
Partly on the scale of the database, the changes to be made, and the
resources available.
Ok, that doesn't help much I know.
Adding a column:
It depends on whether the column has a default or not. If you add a column with
a default, then you're almost certainly going to lock the table for some
amount of time, which you may find unacceptable.
Now, if your stored procedures and queries are written correctly, adding a
column to a table should not be an issue. In other words you don't do
"select *" any place.
Now, for the stored procs, generally I've had "luck" doing what I needed in
a batch. Worst case, though, have a script that sets the database to single
user mode, executes your script, and sets it back. But watch out for errors.
It can be "bad" to have half your stored procs updated, but not the other half.
;-)
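One way to guard against that half-updated scenario is to apply all the proc changes in a single transaction. A sketch, with made-up proc names and trivial placeholder bodies (ALTER PROCEDURE must start its own batch, hence the EXEC wrappers):

```sql
-- Abort the whole batch on any error, so either all procs change or none do.
SET XACT_ABORT ON
BEGIN TRAN
EXEC ('ALTER PROCEDURE dbo.GetOrder @id int AS
           SELECT OrderId, Status FROM dbo.Orders WHERE OrderId = @id')
EXEC ('ALTER PROCEDURE dbo.CloseOrder @id int AS
           UPDATE dbo.Orders SET Status = ''Closed'' WHERE OrderId = @id')
COMMIT TRAN
```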
Another option, if you have a replicated or log-shipped copy of the
database you can use, is to move traffic there, make the changes, and move back.
The problem then is handling transactions that occurred in the meantime.
> Can anyone explain the procedure to me and/or provide links to reading
> material that explains how to do this?
I don't, unfortunately. In my last job I did this sort of thing all the
time, but I also knew the system pretty well (and still messed up a few times
;-)
Contact me if you need more advice.
> Any help will be greatly appreciated. I am a programmer and these are
> entirely new waters for me!
> --
> Sandy
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
|||"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:OErxvtacHHA.4984@.TK2MSFTNGP05.phx.gbl...
> Stuff like this is usually done through a documented process. You develop
> the scripts in a dev environment and then apply them on a copy of
> production
> in a test environment. If you are adding a NOT NULL column to a
> production
> table, this can take a very long time, if you have many millions of rows.
> You may have to add the column as NULL and then populate the table
> iteratively. This, too, should be tested on a copy of prod first.
Grr, Tom's right. I meant NOT NULL, not DEFAULT in my post.
And I'll also emphasize the point he made.. do this on a copy first if you
can.
Test out your procedure as much as possible.
And have a roll-back plan!
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> .
> "Sandy" <Sandy@.discussions.microsoft.com> wrote in message
> news:19C65DEA-BF61-4B4D-88D9-67BA62C828A6@.microsoft.com...
> Hello -
> I need to find out how to be able to change a production database, i.e.,
> adding a column to a table that will necessitate changes to multiple
> stored
> procedures, without shutting down the application that depends on the
> database (or at least shutting it down for the least amount of time
> possible). How are 24 X 7 production database changes normally made?
> Can anyone explain the procedure to me and/or provide links to reading
> material that explains how to do this?
> Any help will be greatly appreciated. I am a programmer and these are
> entirely new waters for me!
> --
> Sandy
>
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
|||Typically, you don't have a test/pre-prod database on the same physical
server. Rather, you have a separate box, of identical hardware (ideally).
This way, if your testing produces some undesirable effect, it's not
affecting production.
Whatever script you intend to run in prod - including an ALTER TABLE
script - would be run here. Once you are satisfied that it is working
properly, then run the same script in prod.
An ALTER PROC statement can be run in prod (after testing, of course). This
typically takes no time and if there is an error, the ALTER PROC fails and
you have the original proc.
Tom
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
..
"Sandy" <Sandy@.discussions.microsoft.com> wrote in message
news:6B577153-11DD-4DB4-AA7C-3CDFCE9926A7@.microsoft.com...
Thanks for your response, Tom.
Is the copy of the production database in the test environment generally on
a different server?
How do I then update the production server with the changes, after I work
with them on the test server?
I don't understand how I can run Alter Procedure and Alter Table scripts on
a 24 X 7 production server without mega problems, unless I shut down the
server for at least the time it takes for the scripts to run. Can you
clarify a little bit?
Sandy
"Tom Moreau" wrote:
> Stuff like this is usually done through a documented process. You develop
> the scripts in a dev environment and then apply them on a copy of
> production
> in a test environment. If you are adding a NOT NULL column to a
> production
> table, this can take a very long time, if you have many millions of rows.
> You may have to add the column as NULL and then populate the table
> iteratively. This, too, should be tested on a copy of prod first.
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> ..
> "Sandy" <Sandy@.discussions.microsoft.com> wrote in message
> news:19C65DEA-BF61-4B4D-88D9-67BA62C828A6@.microsoft.com...
> Hello -
> I need to find out how to be able to change a production database, i.e.,
> adding a column to a table that will necessitate changes to multiple
> stored
> procedures, without shutting down the application that depends on the
> database (or at least shutting it down for the least amount of time
> possible). How are 24 X 7 production database changes normally made?
> Can anyone explain the procedure to me and/or provide links to reading
> material that explains how to do this?
> Any help will be greatly appreciated. I am a programmer and these are
> entirely new waters for me!
> --
> Sandy
>
|||"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:OVRU8FbcHHA.4888@.TK2MSFTNGP02.phx.gbl...
> Typically, you don't have a test/pre-prod database on the same physical
> server. Rather, you have a separate box, of identical hardware (ideally).
> This way, if your testing produces some undesirable effect, it's not
> affecting production.
To expand upon this, we had a dev environment, a QA, and a prod. And
sometimes we even set up a "staging" environment.
Dev code could be all over the place. QA typically mirrored Prod, or real
close to it. (We'd track differences).
Staging was used for major changes or ones we expected to be more difficult.
Typically we'd take a log-shipped copy of the Production environment. That
way we'd know we were dealing with up to date data AND the same amount of
data. (Dev, QA were typically far smaller data sets.)
> Whatever script you intend to run in prod - including an ALTER TABLE
> script - would be run here. Once you are satisfied that it is working
> properly, then run the same script in prod.
> An ALTER PROC statement can be run in prod (after testing, of course).
> This
> typically takes no time and if there is an error, the ALTER PROC fails and
> you have the original proc.
Ayup. Typically in my experience, the biggest problem was getting my ALTER
PROC statement to run.
Often it was blocked by 100s of calls to the existing one that had been made
before it.
We typically created a change plan.
In it we'd detail the changes to be made (and often why).
The plan to be followed.
The tests to make sure that it succeeded.
And very importantly, the "what if something goes wrong?" plan.
Sometimes that was trivial; sometimes it was fairly complex.
If it was a single procedure and the "what if" was "it didn't get applied"
then we'd do nothing.
Or worse: it was applied, and we found out the new code was syntactically
correct but much slower than we could afford it to be. What would we do
then?
The big question really comes down to how long the alter table scripts will
run.
Another option (which does NOT really work in a replicated environment) is to
create a NEW table with the data and columns you want, then rename the
old table out of the way and rename the new one into its place.
This can be useful in some cases, but not all.
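The swap Greg describes might look like this (table names hypothetical; sp_rename will warn about breaking dependencies, and constraints, indexes, and permissions need care):

```sql
-- Build dbo.Orders_new offline with the desired schema and data, then:
BEGIN TRAN
EXEC sp_rename 'dbo.Orders', 'Orders_zzz_old'   -- move the old table aside
EXEC sp_rename 'dbo.Orders_new', 'Orders'       -- slide the new one in
COMMIT TRAN
```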
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> .
> "Sandy" <Sandy@.discussions.microsoft.com> wrote in message
> news:6B577153-11DD-4DB4-AA7C-3CDFCE9926A7@.microsoft.com...
> Thanks for your response, Tom.
> Is the copy of the production database in the test environment generally
> on
> a different server?
> How do I then update the production server with the changes, after I work
> with them on the test server?
> I don't understand how I can run Alter Procedure and Alter Table scripts
> on
> a 24 X 7 production server without mega problems, unless I shut down the
> server for at least the time it takes for the scripts to run. Can you
> clarify a little bit?
>
> --
> Sandy
>
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
|||Thanks for responding, Greg -
>In other words you don't do "select *" any place.
What do you mean by the above?
What do you mean by: ". . . have a script that sets the database to single
user mode, executes your script and sets it back"?
Also, what is your best suggestion for a "rollback plan"?
Any help is greatly appreciated!
Sandy
"Greg D. Moore (Strider)" wrote:
> "Sandy" <Sandy@.discussions.microsoft.com> wrote in message
> news:19C65DEA-BF61-4B4D-88D9-67BA62C828A6@.microsoft.com...
> "It depends".
> Partly on the scale of the database, the changes to be made, and the
> resources available.
> Ok, that doesn't help much I know.
> Adding a column:
> Depends on if the column has defaults or not. If you add a column with
> defaults, then you're almost certainly going to lock the table for some
> amount of time you may find unacceptable.
> Now, if your stored procedures and queries are written correctly, adding a
> column to a table should not be an issue. In other words you don't do
> "select *" any place.
> Now, for the stored procs, generally I've had "luck" doing what I needed in
> a batch. Worse case though, have a script that sets the database to single
> user mode, executes your script and sets it back. But watch out for errors.
> It can be "bad" to have 1/2 your stored procs update, but not the other 1/2.
> ;-)
>
> Another option is if you have a replicated copy or log-shipped copy of the
> database you can use, is move traffic there, make changes and move back.
> The problem then is handling transactions that occurred in the meantime.
>
> I don't unfortunately. In my last job I did this sort of thing all the
> time, but also knew the system pretty well (but still messed up a few times
> ;-)
> Contact me if you need more advice.
>
> --
> Greg Moore
> SQL Server DBA Consulting Remote and Onsite available!
> Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
>
>
|||"Sandy" <Sandy@.discussions.microsoft.com> wrote in message
news:A0393922-029D-4B8D-AAD9-D79F8547556A@.microsoft.com...
> Thanks for responding, Greg -
>
Anytime you do a select in production code, you want to explicitly provide
the columns you are selecting.
Some programmers, being lazy, will write a select as:
SELECT * FROM tablename WHERE Foo='bar'
This is wrong on several levels:
1) It probably brings back too much data, which is slower and consumes
bandwidth, etc.
2) It assumes the order in which columns are returned is constant, which is
safe... 99% of the time. But not 100%.
3) It assumes the number of columns is fixed. So if you add a column to the
underlying table, the query now returns one more column, which the calling
code now has to deal with. If you specify columns, you only need to update
the selects that need the extra column.
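So instead of the SELECT * above, an explicit column list (column names hypothetical) keeps the result shape stable when the table grows:

```sql
SELECT OrderId, CustomerId, OrderDate   -- name exactly what the caller needs
FROM tablename
WHERE Foo = 'bar'
```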
> What do you mean by the above?
> What do you mean by: ". . . have a script that sets the database to
> single
> user mode, executes your script and sets it back"?
ALTER DATABASE testing SET SINGLE_USER WITH ROLLBACK IMMEDIATE
<your scripts here>
ALTER DATABASE testing SET MULTI_USER
That simply assures no one else is doing anything while your scripts run.
> Also, what is your best suggestion for a "rollback plan"?
Planning :-)
Seriously, this is a fairly complex question and depends a lot on your
situation. It's not something I can answer too easily over a newsgroup.
But basically think about what can go wrong and how to mitigate the
problems.
Create a script to add your columns and update your stored procs.
Create a script to REMOVE your columns and restore your old stored procs.
Just in case.
Stuff like that.
> Any help is greatly appreciated!
> --
> Sandy
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
Db2, Nolock
I am a new user for DB2.
In DB2 V8, i have a table with some no of columns.
table test
col1
col2
col3
col4
Now i want to issue a query like this.
select col1 from test with(NOLOCK) where col1 = <some value>
When I execute this query, I get an error message like this...
SQL0158N The number of columns specified for "MT.TEST" is not the
same as the number of columns in the result table. SQLSTATE=42811
Can anyone help me?
I tried to find out why.
I did the same thing with SQL Server 2000 and am not facing any problem there.
Thanks & Regards,
Muthu
You should post this on a DB2 forum; this one is for SQL Server.
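For what it's worth, DB2 parses "with(NOLOCK)" as a correlation name with a column list, which is why it complains about the column count. DB2's rough equivalent of NOLOCK is the uncommitted-read isolation clause at the end of the statement:

```sql
-- DB2 syntax: read uncommitted data without waiting on row locks
SELECT col1 FROM test WHERE col1 = 'some value' WITH UR
```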
Tuesday, March 27, 2012
DB2 linked server issues
I have a DB2 linked server using DB2OLEDB from MS. the DB2 table does not have journaling enabled, so all of my data creation and update statements are failing when using OpenQuery. Is there a way to get my data mod statements to execute on DB2 without requiring journaling? I have tried using ODBC and setting the Commit Level to none, but it appears that SQL server overrides that setting when connecting through the Linked Server. Any ideas?
Have you tried using a Control Panel ODBC System DSN entry as your data
source? This old-school connection technique fixed my numerous DB2 linked
server problems. I suspect there is a bug in the IBM drivers that goes away
when you configure the connection like this.
It should also fix your settings override problem and give you access to
even more settings to tinker with!
Thanks,
Greg Van Mullem
We have tried using an ODBC DSN as the source for the Linked Server, but it still has the same problems. Somehow SQL Server seems to force the transaction log/journaling requirement upon the ODBC source as well.
|||If you are running the Developer or Enterprise Edition of SQL Server, you
could try this feature pack download from Microsoft. Unfortunately, it does
not work with SQL Server Standard Edition, so I have not been able to try it.
Microsoft OLEDB Provider for DB2
Thanks,
Greg Van Mullem|||Have you tried the new EXEC AT syntax with linked server in SQL Server 2005?
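A sketch of the EXEC ... AT form (linked server name made up; the linked server's "RPC Out" option must be enabled). The statement is passed to DB2 as-is, bypassing the provider's rowset machinery:

```sql
EXEC ('UPDATE MYLIB.MYTABLE SET COL1 = ''X'' WHERE KEYCOL = 1') AT DB2LINK
```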
DB2 Linked Server
select * from servername.catalog.user.tablename
i get the following error msg:
Server: Msg 7399, Level 16, State 1, Line 1
OLE DB provider 'MSDASQL' reported an error. The provider does not support the necessary method.
[OLE/DB provider returned message: Method is not supported by this provider.]
OLE DB error trace [OLE/DB Provider 'MSDASQL' IDBSchemaRowset::GetRowset returned 0x80040e53: The provider does not support the necessary method.].
Any way to solve this?
What OLE DB provider are you using? I have had no luck with the standard ones provided by IBM; I have heard of others who have used 3rd-party drivers with great success.
Regards,
hmscott
Originally posted by vmlal
I have set up a linked DB2 server and I can see the table/view list in EM > Security. But in QA when I run
select * from servername.catalog.user.tablename
i get the following error msg:
Server: Msg 7399, Level 16, State 1, Line 1
OLE DB provider 'MSDASQL' reported an error. The provider does not support the necessary method.
[OLE/DB provider returned message: Method is not supported by this provider.]
OLE DB error trace [OLE/DB Provider 'MSDASQL' IDBSchemaRowset::GetRowset returned 0x80040e53: The provider does not support the necessary method.].
Any way to solve this?|||I'm using the OLE DB Provider as follows:
srvproduct: DB2OLEDB
providername: MSDASQL
I have installed the right drivers for DB2 etc.
What other 3rd party tools would be helpful? any ideas?
thx|||Sorry it took me so long to get back to you. I had to dig through my e-mails to find the link. Go to www.hitsw.com. On their home page, look in the middle third for a link to "SQL Middleware".
I have not used their products, but a friend of mine has. He demonstrated the capability to link a DB2 server using the OLE DB provider from Hit Software.
I hope this helps.
Regards,
hmscott
Originally posted by vmlal
I'm using the OLE DB Provider as follows:
srvproduct: DB2OLEDB
providername: MSDASQL
I have installed the right drivers for DB2 etc.
What other 3rd party tools would be helpful? any ideas?
thx|||Originally posted by vmlal
I have setup a linked DB2 server and i c the table/view list in EM>Security. But in QA when i run
select * from servername.catalog.user.tablename
i get the following error msg:
Server: Msg 7399, Level 16, State 1, Line 1
OLE DB provider 'MSDASQL' reported an error. The provider does not support the necessary method.
[OLE/DB provider returned message: Method is not supported by this provider.]
OLE DB error trace [OLE/DB Provider 'MSDASQL' IDBSchemaRowset::GetRowset returned 0x80040e53: The provider does not support the necessary method.].
Any way to solve this?
Use an openquery instead.|||Originally posted by snail
Use an openquery instead.
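The pass-through form snail suggests looks like this (linked server and table names hypothetical); it hands the inner query to DB2 to execute instead of relying on the provider's schema methods:

```sql
SELECT * FROM OPENQUERY(DB2LINK, 'SELECT COL1, COL2 FROM MYLIB.MYTABLE')
```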
Doesn't work with openquery either.
PS: The DB2 is on an AIX box
DB2 and SQL 2005
HTH, jens Suessmeyer.
http://www.sqlserver2005.de
db1.dbo.table to db2.other.table
I need help/guidance setting up replication across dbs.
db1 has the same tables as db2. tables have the same fields.
db1 has "dbo.table" schema, db2 has "other.table" schema.
I setup simple replication and db1.dbo.tables are being added to db2 as
db2.dbo.tables instead of just refreshing data on db2.other.tables...
Any help would be greatly appreciated.
http://www.dbazine.com/sql/sql-articles/cotter1
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
<Doug M> wrote in message
news:1058678B-8239-423C-B68D-C17EC9E882A5@.microsoft.com...
> Hi,
> I need help/guidance setting up replication across dbs.
> db1 has the same tables as db2. tables have the same fields.
> db1 has "dbo.table" schema, db2 has "other.table" schema.
> I setup simple replication and db1.dbo.tables are being added to db2 as
> db2.dbo.tables instead of just refreshing data on db2.other.tables...
> Any help would be greatly appreciated.
>
db_owner role and table owner issues
I have given a user the db_owner role in a database. When he creates a table using Enterprise Manager, the table owner is dbo. When he creates a table using Query Analyzer, the table owner is the user, e.g.
Enterprise Manager = dbo.Table1
Query Analyer = username.Table1
This causes a problem when the user is writing web applications. Is this an error in the way I have set up permissions? How can I make them behave the same way?
Thanks for your help.
First, don't create tables in EM.
Second, it's how they are connecting...
Third, depending on how you want the application to work, qualify the owner...
Fourth, get control...
have them supply you with the DDL, and you create the tables for them...
My guess is that EM is connected with sa, and QA is connecting with their id...
just a guess...|||Thanks for your quick response. I should have included the following information.
I work in a University computing department where students must learn to create tables etc. I could not create their tables for them as it is part of their assessment (and there are 1700 students!)
Students can only log on to the server using windows authentication so will log on using the same windows account to both EM and QA.
Students are taught to use both EM and QA which is why they are finding problems.
Thanks again for all help.|||Gotta test it...which way do you want the tables qualified for their apps...
dbo?
I'll look into it...|||yes dbo thank you.
Sunday, March 25, 2012
db_owner problem
I recently registered my hosted database server on my PC. When I ran aspnet_regsql.exe, the tables created have dbo as the schema, but when I create a new table through a stored procedure it has my username as the schema.
When I built my site on my PC, all tables have dbo as the schema.
How do I make all tables on my hosted server db have same schema?
Thanks
The very Newbie
Thanks for your reply Iori_Jay.
Where and how in Management Studio do I make sure of the default schema, instead of getting schema "dbo" for some tables and schema "MyUsername" (the current database user) for other tables?
When I ran aspnet_regsql.exe all tables and procedures have schema dbo allthough I use my username to create them.
Thanks again
The Very Newbie
We learn from masters
Thanks again Iori_Jay,
In the security folder of my database there are 2 users: dbo and myself. I need to transfer the schema, but don't know how to make it work. My hosted server uses SQL Server 2000.
Thanks
|||
In SQL 2000, SCHEMA is not a separate object; it equals the owner of the object. So you need to change the owner of the object using a command such as:
EXEC sp_changeobjectowner 'myself.tbl1', 'dbo'
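If many objects need moving, a loop over the old system tables can apply the same change to everything owned by 'myself'. A sketch for SQL 2000 (sysobjects/sysusers are the pre-2005 system tables; try it on a backup first):

```sql
-- Change every user table and proc owned by 'myself' to dbo.
DECLARE @obj sysname, @qualified nvarchar(300)
DECLARE c CURSOR FOR
    SELECT o.name
    FROM sysobjects o JOIN sysusers u ON o.uid = u.uid
    WHERE u.name = 'myself' AND o.xtype IN ('U', 'P')
OPEN c
FETCH NEXT FROM c INTO @obj
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @qualified = 'myself.' + @obj       -- sp_changeobjectowner wants owner.name
    EXEC sp_changeobjectowner @qualified, 'dbo'
    FETCH NEXT FROM c INTO @obj
END
CLOSE c
DEALLOCATE c
```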
|||Thanks a lotIori_Jay.
It works fine.
db_datareader for new tables
sp_helprotect [table name]
and see if someone has denied access to the table for some reason.|||According to Microsoft (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_da_3xns.asp) a member of db_datareader can read every table, past, present, and future.
-PatP|||Explicitly denying access to a table will trump db_datareader, but I don't think anything else does.|||In an unrelated note
how about creating views for all those users.
hmmmmmm?
views have many advantages over direct table access.
they add a security layer between the user and the table
they mask database complexity
they can increase read performance
and they can give you a layer between the root object and the user, so object name changes can occur without recompilation of the application.
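A minimal sketch of the security-layer point (names hypothetical): grant on the view, not the table, and sensitive columns never leave the server:

```sql
CREATE VIEW dbo.vw_Employee AS
    SELECT EmployeeId, Name, Department   -- Salary deliberately omitted
    FROM dbo.Employee
GO
GRANT SELECT ON dbo.vw_Employee TO SomeUser  -- no direct table access
```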
just a thought.|||I know what you mean, Curt. The db_datareader role makes it awful hard to add things like a salary table to your database, too ;-).
But then, when was the last time someone actually thought about security, anyway? I mean, without the DBA storming over to his cube?|||I won't give a database to a developer until I explain the importance of views and stored procedures.
Thursday, March 22, 2012
db timestamp
DB Statistics
I am an old UNIFY guy and I am doing some work on SQL2000 for a customer.
Is there an easy way to get DB/table statistics, table name, rows, last
accessed.
Thanks for any direction and assistance.
George
Hi,
sp_monitor -- Gives the SQL Server statistics
sp_helpdb <dbname> -- Gives you the details of database size, Options set,
FIle group information , File details...
Table -- Query the sysobjects table for name, created date
Number of rows -- sp_spaceused <table name>
The last accessed date for a table will not be stored by default; you may need
to write a trigger or use Log Explorer from Lumigent.com
sp_configure -- Gives you all the parameter list and its values.
Thanks
Hari
MCDBA
"george collins" <george@.nospan.com> wrote in message
news:e44UTyvJEHA.620@.TK2MSFTNGP10.phx.gbl...
> Help please
> I am an old UNIFY guy and I am doing some work on SQL2000 for a customer.
> Is there an easy way to get DB/table statistics, table name, rows, last
> accessed.
> Thanks for any direction and assistance.
> George
>
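Hari's trigger suggestion can only catch modifications, not reads; a sketch of that much (audit table and trigger names hypothetical):

```sql
-- Track the last time each audited table was modified.
CREATE TABLE dbo.TableActivity (TableName sysname PRIMARY KEY, LastModified datetime)
GO
CREATE TRIGGER trg_Orders_Activity ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
    UPDATE dbo.TableActivity SET LastModified = GETDATE()
    WHERE TableName = 'Orders'
    IF @@ROWCOUNT = 0
        INSERT dbo.TableActivity (TableName, LastModified)
        VALUES ('Orders', GETDATE())
GO
```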
Wednesday, March 21, 2012
DB Role Security
My manager has asked me to see if it is possible to allow
developers the rights to modify table structures only, and
not create or delete tables, and not create, modify, or
delete views and stored procedures.
I am going to tell him "No", as I would need to give the
developers ddladmin access rights, which allow full object
modification rights.
What do people think?
Peter
You are partially right - you can grant the "Create Table" permission only,
which includes Alter and Drop Table permissions, but does not include
permissions to create, alter or drop views, functions and procedures.
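The statement-level grant Dejan describes looks like this (user name hypothetical):

```sql
GRANT CREATE TABLE TO DevUser   -- also lets DevUser alter and drop his tables
```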
--
Dejan Sarka, SQL Server MVP
FAQ from Neil & others at: http://www.sqlserverfaq.com
Please reply only to the newsgroups.
PASS - the definitive, global community
for SQL Server professionals - http://www.sqlpass.org
"Peter" <nospam@.thisemailaddress.co.uk> wrote in message
news:0b9301c36253$57729300$a601280a@.phx.gbl...
> Hello,
> My mananger has asked me to see if it possible to allow
> developers the rights to modify table structures only, and
> not create, delete tables, and not create, modify and
> delete views store procedures.
> I am going to tell him "No" as I will need to give the
> developers dlladmin access rights which allows full object
> modification rights.
> What do people think ?
> Peter|||Thanks Dejan,
Where do I set that option ?
Thanks
Peter
>--Original Message--
>You are partially right - you can grant the "Create
Table" permission only,
>which includes Alter and Drop Table permissions, but does
not include
>permissions to create, alter or drop views, functions and
procedures.
>--
>Dejan Sarka, SQL Server MVP
>FAQ from Neil & others at: http://www.sqlserverfaq.com
>Please reply only to the newsgroups.
>PASS - the definitive, global community
>for SQL Server professionals - http://www.sqlpass.org
>"Peter" <nospam@.thisemailaddress.co.uk> wrote in message
>news:0b9301c36253$57729300$a601280a@.phx.gbl...
>> Hello,
>> My mananger has asked me to see if it possible to allow
>> developers the rights to modify table structures only,
and
>> not create, delete tables, and not create, modify and
>> delete views store procedures.
>> I am going to tell him "No" as I will need to give the
>> developers dlladmin access rights which allows full
object
>> modification rights.
>> What do people think ?
>> Peter
>
>.
>|||Not sure on this, but it may be possible to make developers
data reader and data writer (if relevant) and give explicit
GRANT ALTER TABLE TO <username>
"Peter" <nospam@.thisemailaddress.co.uk> wrote in message
news:0b9301c36253$57729300$a601280a@.phx.gbl...
> Hello,
> My mananger has asked me to see if it possible to allow
> developers the rights to modify table structures only, and
> not create, delete tables, and not create, modify and
> delete views store procedures.
> I am going to tell him "No" as I will need to give the
> developers dlladmin access rights which allows full object
> modification rights.
> What do people think ?
> Peter
Sunday, March 11, 2012
DB Replication or Table Replication via triggers?
I am involved in a scenario where there is a huge (SQL Server 2005)
production database containing tables that are updated multiple times
per second. End-user reports need to be generated against the data in
this database, and so the powers-that-be came to the conclusion that a
reporting database is necessary in order to offload report processing
from production; of course, this means that data will have to be
replicated to the reporting database. However, we do not need all of
the data in the production database, and perhaps a filtering criteria
can be established where only certain rows are replicated over to the
reporting database as they're inserted (and possibly updated/deleted).
The current thought process is that the programmers designing the
queries/reports will know exactly what data they need from production
and be able to modify the replication criteria as needed. For example,
programmer A might write a report where the data he needs can be
expressed in a simple replication criterion for table T where column X
= "WOOD" and column Y = "MAHOGANY". Programmer B might come along a
month later and write a report which relies on the same table T where
column X = "METAL" and column Z in (12, 24, 36). Programmer B will
have to modify programmer A's replication criteria in such a way as to
accommodate both reports, in this case something like "Copy rows from
table T where (col X = "WOOD" and col Y = "MAHOGANY") or (col X =
"METAL" and col Z in (12, 24, 36))". The example I gave is really
trivial, of course, but is sufficient to give you an idea of what the
current thought process is.
I assume that this is a requirement that many of you may have
encountered in the past and I am wondering what solutions you were
able to come up with. Personally, I believe that the above method is
prone to error (in this case the use of triggers to specify
replication criteria) and I'd much rather use replication services to
copy tables in their entirety. However, this does not seem to be an
option in my case due to the sheer size of certain tables. Is there
anything out there that performs replication based on complex
programmer defined criteria? Are triggers a viable alternative? Any
alternative out-of-the-box solutions?
Any feedback would be appreciated.
Regards!
Anthony|||I'd recommend transactional replication with a nosync initialization. This
is where the initial setup on the reporting server is achieved by using a
restore of the database and after that, only subsequent changes are sent
down. If you are using SQL Server 2005, greater concurrency can be achieved
by using the read committed snapshot isolation level.
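A nosync (initialize-from-backup) setup might be scripted roughly as follows; all server, database, publication and path names here are placeholders, and the exact parameters should be verified against BOL:

```sql
-- Sketch of a SQL Server 2005 transactional publication initialized
-- from a restored backup instead of a snapshot.
EXEC sp_addpublication
    @publication = N'ReportingPub',                 -- hypothetical name
    @allow_initialize_from_backup = N'true'
-- ...add articles with sp_addarticle, restore the production backup on
-- the subscriber as ReportingDB, then register the subscription:
EXEC sp_addsubscription
    @publication = N'ReportingPub',
    @subscriber = N'REPORTSRV',                     -- hypothetical server
    @destination_db = N'ReportingDB',
    @sync_type = N'initialize with backup',
    @backupdevicetype = N'disk',
    @backupdevicename = N'\\backupshare\ProdDb.bak' -- hypothetical path
```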
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com|||Hello Paul,
Thank you for your response. I will look into SQL Server 2005's
replication and static row level filtering; are there any books or web-
sites you might recommend? I will need to be able to set up and modify
the row-filter criteria programmatically, and the reviews at Barnes and
Noble on "Pro SQL Server 2005 Replication" are pretty dismal.
Regards,
Anthony
On Apr 17, 3:09 pm, "Paul Ibison" <Paul.Ibi...@.Pygmalion.Com> wrote:
> I'd recommend transactional replication with a nosync initialization. This
> is where the initial setup on the reporting server is achieved by using a
> restore of the database and after that, only subsequent changes are sent
> down. If you are using SQL Server 2005, greater concurrency can be achieved
> by using the read committed snapshot isolation level.
> Cheers,
> Paul Ibison SQL Server MVP,www.replicationanswers.com
|||Have a look at Hilary's book for snapshot and transactional, but if you're
after more merge info and don't like the Pro book then it's really BOL that
you need and then doing some scenarios for yourself to gain experience. As
for websites, I have some useful info on the site below and there are other
articles out on the various SQL Server sites you can get by googling, but
nothing I think specific to your requirements.
BTW this doesn't really lend itself to modifying the filters dynamically. At
least this is not as straightforward as you might think. Normally the
partitions are well designed to start with. If you want something more
dynamic, then I'd not filter at all in replication and I'd use filters on
the client application instead.
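For reference, a static row filter of the kind discussed here is declared as part of the article definition, which is one reason changing it later is not lightweight; a sketch with hypothetical names:

```sql
-- Sketch: publish only the rows of table T that match a fixed
-- predicate. The filter belongs to the article definition itself,
-- so altering it later generally means reworking the article.
EXEC sp_addarticle
    @publication = N'ReportingPub',   -- hypothetical publication
    @article = N'T',
    @source_owner = N'dbo',
    @source_object = N'T',
    @filter_clause = N'colX = ''WOOD'' AND colY = ''MAHOGANY'''
```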
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com|||Hello Paul,
On Apr 17, 6:26 pm, "Paul Ibison" <Paul.Ibi...@.Pygmalion.Com> wrote:
> Have a look at Hilary's book for snapshot and transactional, but if you're
> after more merge info and don't like the Pro book then it's really BOL that
> you need and then doing some scenarios for yourself to gain experience. As
> for websites, I have some useful info on the site below and there are other
> articles out on the various SQL Server sites you can get by googling, but
> nothing I think specific to your requirements.
Will do.
> BTW this doesn't really lend itself to modifying the filters dynamically. At
> least this is not as straightforward as you might think. Normally the
> partitions are well designed to start with. If you want something more
> dynamic, then I'd not filter at all in replication and I'd use filters on
> the client application instead.
> Cheers,
> Paul Ibison SQL Server MVP,www.replicationanswers.com
Ahhh... then that's a problem, I'd definitely need the ability to be
able to programmatically and dynamically change the filtering criteria
as the need arises, in this case every time a new report is requested
that needs a subset of data not being captured by the replication
process. You would think that this is such a common scenario... Also,
filtering on the client side is not an option either since that would
mean that all of the data would get replicated to the reporting db. I
could have sworn that I read in msdn that the filters could be changed
via stored procs though... I'll have to look that up.
Thanks for your help Paul!
Anthony|||Anthony Paul wrote:
> Personally, I believe that the above method is
> prone to error (in this case the use of triggers to specify
> replication criteria) and I'd much rather use replication services to
> copy tables in their entirety. However, this does not seem to be an
> option in my case due to the sheer size of certain tables.
Is it possible to create views, then configure things so that just
those views are replicated as tables on the second server?|||Hello Ed,
That's a very good idea, if it turns out that the filter isn't
dynamically configurable then perhaps using a view as a filter and
replicating the view can compensate since views can be modified at any
time. However, I doubt that replication can be done on a view rather
than on a table. I'll have to check it out!
Regards,
Anthony
On Apr 17, 9:46 pm, Ed Murphy <emurph...@.socal.rr.com> wrote:
> Is it possible to create views, then configure things so that just
> those views are replicated as tables on the second server?|||I just finished looking up using indexed views versus a filter for
replication and it turns out that the view is much slower than a
filter (about 3x as slow) because the log reader has to log each
transaction twice, once for the view and once for the table. In my
case performance is of utmost concern so the overhead involved in this
is not something they can live with. Sigh...
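For anyone weighing the same trade-off: publishing an indexed view so it arrives at the subscriber as an ordinary table is configured along these lines in SQL Server 2005 (all names hypothetical); the doubled logging Anthony describes is the price of maintaining both the base table and the materialized view:

```sql
-- Sketch: publish an indexed view as a log-based article so that the
-- subscriber receives it as a plain table.
EXEC sp_addarticle
    @publication = N'ReportingPub',     -- hypothetical publication
    @article = N'vReportRows',
    @source_owner = N'dbo',
    @source_object = N'vReportRows',    -- an indexed view on the publisher
    @type = N'indexed view logbased',
    @destination_table = N'ReportRows'  -- table name at the subscriber
```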
|||On Apr 18, 12:24 am, Anthony Paul <anthonypa...@.gmail.com> wrote:
This may sound like a stupid question, but are you sure replicating
the whole database isn't an option? I know you've described the
database as huge, but one man's huge is another man's insignificant (or
the other way around).
It just sounds like you're putting in a lot of work when you may be
able to keep it simple. Apologies if this is a path you've already
worn smooth, just wondering what has made you sure that bog standard
replication isn't the way to go.
Damien|||Hello Damien,
I would *love* to have a full replication going and not have to worry
about the added complexity of creating dynamic filters or triggers,
but the powers that be simply do not consider it an option. Since I'm
not the one that makes the decisions, I can only go by whatever
options are available. They want ONLY a subset of data to be captured,
nothing more. That would be fine with me if the filter was static (i.e.
not subject to change every time a new report is requested) but given
the requirements I am in the same camp as you that a full replication
would be best.
Regards,
Anthony
|||Anthony Paul wrote:
> I would *love* to have a full replication going and not have to worry
> about the added complexity of creating dynamic filters or triggers,
> but the powers that be simply do not consider it an option. They want
> ONLY a subset of data to be captured, nothing more.
TPTB may start considering it an option if you give them a cost
analysis, depending on whether their previous motivation was "wouldn't
it be nice if" (yes, but) or "we think this is cheaper" (no it isn't)
or "this is required for security reasons" (ugh, okay) or whatever.|||Anthony Paul (anthonypaulo@.gmail.com) writes:
> I would *love* to have a full replication going and not have to worry
> about the added complexity of creating dynamic filters or triggers,
> but the powers that be simply do not consider it an option. Since I'm
> not the one that makes the decisions, I can only go by whatever
> options are available. They want ONLY a subset of data to be captured,
> nothing more. That would be fine with me if the filter was static (ie.
> not subject to change every time a new report is requested) but given
> the requirements I am in the same camp as you that a full replication
> would be best.
From my meager experience of replication, it seems clear that the database
has to be really huge - several terabytes - to make a dynamic filtering
defensible from a cost perspective. It would be difficult to develop,
difficult to maintain and manage.
The only serious option I see to full replication is a static subset.
That is, define what will be supported in replication V1. If a new require-
ment is not covered, it would have to wait for V2. The idea would
of course be to strip only really big stuff with a low probability of being included.
And this is what you should tell the powers that be: replicating the entire
database will be far less expensive than changing what is replicated
dynamically.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
DB Redesign - use junction/xref table?
I'm currently redesigning our db. Mind you I was NOT the designer of the
current db. I have the following tables:
CREATE TABLE [CUSTOMERS] (
[CID] [int] IDENTITY (1, 1) NOT NULL ,
[customerName] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[customerID] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[address] [varchar] (200) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[city] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[state] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[zip] [varchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[phone] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[email] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[fax] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[businessName] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[companyID] [varchar] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
) ON [PRIMARY]
GO
CREATE TABLE [TRANSACTIONS] (
[Transaction_ID] [int] NOT NULL ,
[CID] [int] NULL ,
[Customer_ID] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Customer_name] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Total_Collection_Amount] [money] NULL ,
[Account_Type] [varchar] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[user_name] [varchar] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
) ON [PRIMARY]
GO
CREATE TABLE [ADMIN] (
[user_name] [varchar] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[company_ID] [varchar] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
) ON [PRIMARY]
GO
Now, there are multiple orphaned records in TRANSACTIONS (I will be using
TRANSACTIONS.CID = CUSTOMERS.CID, which is a new column I added to the TRANS
table). I can't rely on using the CustomerID field to join them as identical
Customers.CustomerID exist (even when grouping by Customers.CompanyID).
The problem I'm facing is that I have multiple records in the TRANS table
for the same Customer, but have different Company_IDs. These are the orphaned
records I speak of that do not have a corresponding record in Customers. The
Company_ID is derived by joining Transactions.User_Name = Admin.User_Name.
An example of a record in my Transactions table:
Tran_ID Customer_Name Admin.CompanyID
123 Home Repair R9
124 Home Repair R11
The company I work for operates under different dba (doing business as)
names. That's how this has happened (hence a customer_name having multiple
Company_IDs). What I plan to do is insert Distinct
Transactions.Customer_Name into Customers, retrieve the CID then populate
this value into the Transactions.CID field. Then I can simply remove the
Transactions.Customer_Name field. Now without adding two records to my
Customers table, how can I overcome this design flaw?|||As frightened as I am by the concept of having a many to many relationship
between transactions and customers, if that is what you need, then you will
need to have a xref table with the customerId and transactionId. It seems
to me that what needs to be done is have a company table, then a customer or
doingBusinessAs table that relates to the transaction. More thought needs
to go into your design.
Keep in mind the key part of database design, every table should represent
one thing. This is the basis of normalization, and it seems to me that your
customers table, and even your transactions table might be representing > 1
thing at a time, which generally will cause you problems like this as too
many things relate to too many things.
You probably ought to standardize your names (customerName vs. customer_name): use only
one naming style (clearly you probably only want to see that attribute once
in the db anyhow).
The same concern is with CID and Transaction_ID or how about TransactionId.
I like TransactionId, but the key is to not make your users guess how
something will be named.
I know this is kind of a lot to swallow at once, but think about this
statement:
> I'm currently redesigning our db. Mind you I was NOT the designer of the
> current db.
The goal will be to not have the next person say the same about you :)
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"Arguments are to be avoided: they are always vulgar and often convincing."
(Oscar Wilde)
"Eric" <Eric@.discussions.microsoft.com> wrote in message
news:4E996864-39D1-4866-B2C4-71AA3991C005@.microsoft.com...
> I'm currently redesigning our db. Mind you I was NOT the designer of the
> current db. I have the following tables:
> CREATE TABLE [CUSTOMERS] (
> [CID] [int] IDENTITY (1, 1) NOT NULL ,
> [customerName] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [customerID] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [address] [varchar] (200) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [city] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [state] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [zip] [varchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [phone] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [email] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [fax] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [businessName] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [companyID] [varchar] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> ) ON [PRIMARY]
> GO
>
> CREATE TABLE [TRANSACTIONS] (
> [Transaction_ID] [int] NOT NULL ,
> [CID] [int] NULL ,
> [Customer_ID] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [Customer_name] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [Total_Collection_Amount] [money] NULL ,
> [Account_Type] [varchar] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [user_name] [varchar] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> ) ON [PRIMARY]
> GO
> CREATE TABLE [ADMIN] (
> [user_name] [varchar] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
> [company_ID] [varchar] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
> ,
> ) ON [PRIMARY]
> GO
> Now, there are multiple orphaned records in TRANSACTIONS (I will be using
> TRANSACTIONS.CID = CUSTOMERS.CID, which is a new column I added to the
> TRANS
> table). I can't rely on using the CustomerID field to join them as
> identical
> Customers.CustomerID exist (even when grouping by Customers.CompanyID).
> The problem I'm facing is that I have multiple records in the TRANS table
> for the same Customer, but have different Company_IDs. These are the
> orphaned
> records I speak of that do not have a corresponding record in Customers.
> The
> Company_ID is derived by joining Transactions.User_Name = Admin.User_Name.
> An example of a record in my Transactions table:
> Tran_ID Customer_Name Admin.CompanyID
> 123 Home Repair R9
> 124 Home Repair R11
> The company I work for operates under different dba (doing business as)
> names. That's how this has happened (hence a customer_name having
> multiple
> Company_IDs). What I plan to do is insert Distinct
> Transactions.Customer_Name into Customers, retrieve the CID then populate
> this value into theTransactions.CID field. Then I can simply remove the
> Transactions.Customer_Name field. Now without adding two records to my
> Customers table, how can I overcome this design flaw?
>|||You seem to have a composite candidate key (customerID, companyID) which
uniquely identifies a customer in a transaction, right? If its values are
duplicated then you should start over.
Generally, it is impossible to give you an accurate solution unless your
business model is familiar to others in this forum. However based on your
narrative one could reasonably conclude that you have an under-normalized
schema. In other words, you have various instances where multiple entity
types are bundled up into a single table; for instance, your transaction table
seems to have information about transactions as well as customers, and
perhaps about accounts too.
Unless your business model and rules are thoroughly analyzed, it is hard to
provide any substantial advice. In the meantime, consider learning the data
design fundamentals and apply them to the business model in hand. If this is
time critical, considering a professional hire might be worth it - that last
statement in Louis' post has the gist.
Anith|||Louis - thanks for your input. Just some follow up:
"It seems to me that what needs to be done is have a company table..."
There actually already is one. However, the original db designer (who I
might add is no longer w/the company), decided to join Transactions.User_Name
= Admin.User_Name, where the Admin table also contains the user's CompanyID.
So each user in Admin belongs to a Company_ID (so then Admin.Company_ID =
Company.Company_ID)
"...the key part of database design, every table should represent
one thing." I understand this, hence it's why I'm now redesigning it.
"The goal will be to not have the next person say the same about you :)" I
couldn't agree w/you more.
As for a proposed xref table, are you suggesting something like this:
Xref Table Columns:
XID
CID
CompanyID
So now, my Customers table will no longer have a CompanyID. Instead the
relationship will be Customers.CID = XREF.CID. Next, I will have a new column
in my TRANSACTIONS table, so that XREF.XID = TRANSACTIONS.XID. Some sample
data:
Customers Table:
CID CustomerName
2 A1 Home
XREF Table
XID CID CompanyID
33 2 R9
34 2 R11
Transactions Table
TranID XID Amount
1 33 $1.00
2 34 $1.25
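For concreteness, here is a minimal DDL sketch of that proposal. The names, column sizes, and constraints are illustrative assumptions, not the real schema:

```sql
-- Hypothetical sketch: each Xref row says "customer CID does business
-- as company CompanyID"; transactions reference the pairing.
CREATE TABLE Customers (
    CID          int IDENTITY (1, 1) NOT NULL PRIMARY KEY,
    CustomerName varchar (50) NOT NULL
)
GO
CREATE TABLE Company (
    CompanyID varchar (25) NOT NULL PRIMARY KEY
)
GO
CREATE TABLE Xref (
    XID       int IDENTITY (1, 1) NOT NULL PRIMARY KEY,
    CID       int NOT NULL REFERENCES Customers (CID),
    CompanyID varchar (25) NOT NULL REFERENCES Company (CompanyID),
    CONSTRAINT UQ_Xref UNIQUE (CID, CompanyID) -- one row per pairing
)
GO
CREATE TABLE Transactions (
    TranID int NOT NULL PRIMARY KEY,
    XID    int NOT NULL REFERENCES Xref (XID),
    Amount money NOT NULL
)
GO
```

With the UNIQUE constraint on (CID, CompanyID), the same customer/company pairing gets exactly one XID that all its transactions share.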
Thanks for your help
"Louis Davidson" wrote:
> As frightened as I am by the concept of having a many to many relationship
> between transactions and customers, if that is what you need, then you will
> need to have a xref table with the customerId and transactionId. It seems
> to me that what needs to be done is have a company table, then a customer or
> doingBusinessAs table that relates to the transaction. More thought needs
> to go into your design.
> Keep in mind the key part of database design, every table should represent
> one thing. This is the basis of normalization, and it seems to me that your
> customers table, and even your transactions table might be representing > 1
> thing at a time, which generally will cause you problems like this as too
> many things relate to too many things.
> You probably ought to standardize your names (customerName vs. customer_name): use only
> one naming style (clearly you probably only want to see that attribute once
> in the db anyhow).
> The same concern is with CID and Transaction_ID or how about TransactionId.
> I like TransactionId, but the key is to not make your users guess how
> something will be named.
> I know this is kind of a lot to swallow at once, but think about this
> statement:
>
> The goal will be to not have the next person say the same about you :)
> --
> ------
> Louis Davidson - http://spaces.msn.com/members/drsql/
> SQL Server MVP
> "Arguments are to be avoided: they are always vulgar and often convincing."
> (Oscar Wilde)
> "Eric" <Eric@.discussions.microsoft.com> wrote in message
> news:4E996864-39D1-4866-B2C4-71AA3991C005@.microsoft.com...
>
>|||Actually, from your data here:
> Customers Table:
> CID CustomerName
> 2 A1 Home
> XREF Table
> XID CID CompanyID
> 33 2 R9
> 34 2 R11
> Transactions Table
> TranID XID Amount
> 1 33 $1.00
> 2 34 $1.25
The xref table is not really a simple many to many table. It is more of a
company allocation. It works, I think, since now both transactions are
allocated to customer 2, but tran1 is for their company r9, and tran2 is for
company r11.
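Assuming tables shaped like the sample data above (names are illustrative), the allocation reads back with a straightforward three-way join:

```sql
-- For each transaction, which customer it belongs to and which
-- dba/company it was booked under.
SELECT t.TranID, c.CustomerName, x.CompanyID, t.Amount
FROM Transactions AS t
JOIN Xref AS x
    ON x.XID = t.XID
JOIN Customers AS c
    ON c.CID = x.CID
ORDER BY t.TranID
```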
--
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"Arguments are to be avoided: they are always vulgar and often convincing."
(Oscar Wilde)
"Eric" <Eric@.discussions.microsoft.com> wrote in message
news:204589FB-CC82-40D6-A4F8-ED4EA500E7A0@.microsoft.com...
> Louis - thanks for your input. Just some follow up:
> "It seems to me that what needs to be done is have a company table..."
> There actually already is one. However, the original db designer (who I
> might add is no longer w/the company), decided to join
> Transactions.User_Name
> = Admin.User_Name, where the Admin table also contains the user's
> CompanyID.
> So each user in Admin belongs to a Company_ID (so then Admin.Company_ID =
> Company.Company_ID)
> "...the key part of database design, every table should represent
> one thing." I understand this, hence it's why I'm now redesigning it.
> "The goal will be to not have the next person say the same about you :)"
> I
> couldn't agree w/you more.
> As for a proposed xref table, are you suggesting something like this:
> Xref Table Columns:
> XID
> CID
> CompanyID
> So now, my Customers table will no longer have a CompanyID. Instead the
> relationship will be Customers.CID = XREF.CID Next, I will have a new
> column
> in my TRANSACTIONS table, so that XREF.XID = TRANSACTIONS.XID. Some
> sample
> data:
> Customers Table:
> CID CustomerName
> 2 A1 Home
> XREF Table
> XID CID CompanyID
> 33 2 R9
> 34 2 R11
> Transactions Table
> TranID XID Amount
> 1 33 $1.00
> 2 34 $1.25
> Thanks for your help
> "Louis Davidson" wrote:
>|||Find the guy that did this and kill him.
Almost every VARCHAR(n) is totally wrong or absurd. There are no
keys. All columns can be NULL, so you cannot ever have keys.
CHAR(20) as a ZIP code? Everything is a VARCHAR(<< magic number >>)
in this world. Give me an example of that stuff. The rest of the
stinking crap uses "magic numbers" like VARCHAR(50) for anything.
Codes without validation, etc.
Columns are not fields!! This is FOUNDATIONS of RDBMS!! And the
definition of an identifier is that it is unique to each entity. This
is a disaster without any hope of data integrity.
You need to throw the whole damn thing and start over. Other people
will tell you the same thing in a nicer way (i.e. "As frightened as I
am by the concept of having .."), but I tend to be blunt.|||--CELKO-- wrote:
>Other people
>will tell you the same thing in a nicer way (i.e. "As frightened as I
>am by the concept of having .."), but I tend to be blunt.
>
LOL!
Understatement of the century.
*mike hodgson*
http://sqlnerd.blogspot.com|||Actually, it was a woman.
"--CELKO--" wrote:
> Find the guy that did this and kill him.
> Almost every VARCHAR(n) is totally wrong or absurd. There are no
> keys. All columns can be NULL, so you cannot ever have keys.
> CHAR(20) as a ZIP code? Everything is a VARCHAR(<< magic number >>)
> in this world. Give me an example of that stuff. The rest of the
> stinking crap uses "magic numbers" like VARCHAR(50) for anything.
> Codes without validation, etc.
>
> Columns are not fields!! This is FOUNDATIONS of RDBMS!! And the
> definition of an identifier is that it is unique to each entity. This
> is a disaster without any hope of data integrity.
> You need to throw the whole damn thing and start over. Other people
> will tell you the same thing in a nicer way (i.e. "As frightened as I
> am by the concept of having .."), but I tend to be blunt.
>|||It's hard to believe that people are still modeling basic customer tables
and relationships from scratch. It's like re-developing a bubble sort
algorithm or re-inventing the wheel.
"--CELKO--" <jcelko212@.earthlink.net> wrote in message
news:1137741113.315703.103120@.g44g2000cwa.googlegroups.com...
> Find the guy that did this and kill him.
> Almost every VARCHAR(n) is totally wrong or absurd. There are no
> keys. All columns can be NULL, so you cannot ever have keys.
> CHAR(20) as a ZIP code? Everything is a VARCHAR(<< magic number >>)
> in this world. Give me an example of that stuff. The rest of the
> stinking crap uses "magic numbers" like VARCHAR(50) for anything.
> Codes without validation, etc.
>
> Columns are not fields!! This is FOUNDATIONS of RDBMS!! And the
> definition of an identifier is that it is unique to each entity. This
> is a disaster without any hope of data integrity.
> You need to throw the whole damn thing and start over. Other people
> will tell you the same thing in a nicer way (i.e. "As frightened as I
> am by the concept of having .."), but I tend to be blunt.
>|||Could you provide a link to a standard design?
"JT" <someone@.microsoft.com> wrote in message
news:eo%23aP6dHGHA.3448@.TK2MSFTNGP10.phx.gbl...
> It's hard to believe that people are still modeling basic customer tables
> and relationships from scratch. It's like re-developing a bubble sort
> algorithm or re-inventing the wheel.
> "--CELKO--" <jcelko212@.earthlink.net> wrote in message
> news:1137741113.315703.103120@.g44g2000cwa.googlegroups.com...
>
DB Questions
1. If TempDB is deleted, what will happen..?
2. How to insert a not null column in an existing table with
records..?
3. If a table is deleted, what will happen to the Stored Procedures
and Views which referred to that table..?|||1. TempDB will be re-created automatically when you restart SQL Server. You
can't delete TempDB while SQL Server is running.
2. Either assign a DEFAULT value to the column, or make it nullable to start
with, populate the column and then alter it to NOT NULL.
3. Dependent SPs and views won't be deleted but they will generate an error
when they are used.
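Point 2 might look like this in T-SQL; the table and column names here are made up for illustration:

```sql
-- Option A: add the column NOT NULL with a DEFAULT, so every
-- existing row receives the default value.
ALTER TABLE Orders
    ADD Status varchar (10) NOT NULL
        CONSTRAINT DF_Orders_Status DEFAULT ('NEW')
GO
-- Option B: add it nullable, backfill, then tighten to NOT NULL.
ALTER TABLE Orders ADD Region varchar (10) NULL
GO
UPDATE Orders SET Region = 'UNKNOWN' WHERE Region IS NULL
GO
ALTER TABLE Orders ALTER COLUMN Region varchar (10) NOT NULL
GO
```

Option B is usually the choice on a large table, since the backfill UPDATE can be run in batches.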
--
David Portas
SQL Server MVP
--
DB question: Copying ID
Hi,
I'm trying to insert one value (an order) into the table Order and (via a for-loop) all the products in that order in the table Product, hence, one order can have multiple products (and must have at least one). I have an automatically increased value for the OrderID as the primary key for Order, and I have a foreign key named OrderID in the Product table. So far, I _think_ everything's logically correct.
However, I don't understand how to retrieve the OrderID to be able to insert it in the Product table upon insertion. I guess this is done all the time, but the only solution I can think of is to make a new SQL Command, asking for the just created OrderID to use it in the SQL Command for the products' for-loop. I'm sure that's a bad idea. :-)
Can I use relationships or so to make this automatically updated (that is, to have the Product table "check for" the OrderID and insert the OrderID upon insertion of the Product row(s))?
I hope this is clear to you. Thanks in advance for all help!
Pettrer
If you want to get the last ID after an insertion, use SCOPE_IDENTITY().
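A sketch of the pattern, with invented table and column names: insert the order, capture SCOPE_IDENTITY() into a variable, and reuse it for every product row. Doing all of it in one batch or stored procedure also lets you wrap the whole thing in a transaction:

```sql
DECLARE @OrderID int

BEGIN TRANSACTION

INSERT INTO [Order] (CustomerName)
VALUES ('Acme')

-- SCOPE_IDENTITY() returns the identity value generated in this
-- scope, unlike @@IDENTITY, which an insert in a trigger can change.
SET @OrderID = SCOPE_IDENTITY()

INSERT INTO Product (OrderID, ProductName) VALUES (@OrderID, 'Widget')
INSERT INTO Product (OrderID, ProductName) VALUES (@OrderID, 'Gadget')

COMMIT TRANSACTION
```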
Regards
DB Question
I am taking over a system that uses triggers exclusively for all
auditing entries. Here is a sample of the table structure:
Accounts
=========================
AccountId [Identity] [PK]
AccountName
......
AccountHistory
============================
AccountHistId [Identity] [PK]
AccountId (fk)
HistoryId (fk)
History
=========================
HistoryId [Identity] [PK]
UserId (fk)
HistorySummaryDesc
EventDateTime
HistoryDetails
===============================
HistoryDetailId [Identity] [PK]
HistoryId (fk)
There is a trigger on the Accounts DB that determines the level of
auditing set up by the user (None, Summary, Detail). If set to none, the
trigger does nothing. If set to Summary, a new row is inserted into the
History table with a summary description, then a new row is added to the
AccountHistory table pointing to the new History table row (the reason
is that the History table is used to hold ALL history changes everywhere
and the AccountHistory table is just used as an intersection table).
Here's my problem. This customer now wants to be able to tell from PB
what the resulting AccountHistory ID was from the Insert/Update that was
issued from PB. I can't think of a reliable way to do this. Moving the
History generation into a SP call made from DB is not an option. They
require it to stay Trigger based.
Since this in a 100+ user environment, I can't simply expect to do a
SELECT on the AccountHistory table for the AccountId for the latest
entry, that may or may not be correct.
Does anyone have any suggestions?
Thank you
John D|||Hi
I find your post confusing!
See http://www.aspfaq.com/etiquette.asp?id=5006 on how to post usable DDL
and the trigger code. Example data as insert statements
http://vyaskn.tripod.com/code.htm#inserts would also help.
At a guess you will need to store some form of transaction identifier.
John
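One possible form for that identifier in SQL Server 2005 is CONTEXT_INFO, sketched below as an assumption rather than a tested design: the app stamps its connection with a GUID before the DML, the existing trigger copies CONTEXT_INFO() into the audit row (this assumes a new AccountHistory.ClientMarker column), and the app then looks up the AccountHistory row it caused by that GUID:

```sql
-- App side, on the same connection as the following INSERT/UPDATE:
DECLARE @marker uniqueidentifier, @ctx varbinary (128)
SET @marker = NEWID()
SET @ctx = CAST(@marker AS varbinary (16))
SET CONTEXT_INFO @ctx

-- ... the normal INSERT/UPDATE on Accounts fires the audit trigger ...

-- Trigger side (inside the existing audit trigger), storing the
-- first 16 bytes of CONTEXT_INFO() in the new ClientMarker column:
--   INSERT INTO AccountHistory (AccountId, HistoryId, ClientMarker)
--   SELECT i.AccountId, @HistoryId,
--          CAST(SUBSTRING(CONTEXT_INFO(), 1, 16) AS uniqueidentifier)
--   FROM inserted AS i

-- App side afterwards: find the audit row this operation produced.
SELECT AccountHistId
FROM AccountHistory
WHERE ClientMarker = @marker
```

Because each connection sets its own GUID, this stays reliable with 100+ concurrent users, unlike selecting the latest entry for the AccountId.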
"John" <JohnD_NO_SPAM@.DelWare.com> wrote in message
news:%23Um8MeuMFHA.2384@.tk2msftngp13.phx.gbl...
> Hey Everyone,
> I am taking over a system that uses triggers exclusively for all auditing
> entries. Here is a sample of the table structure:
> Accounts
> =========================
> AccountId [Identity] [PK]
> AccountName
> ......
>
> AccountHistory
> ============================
> AccountHistId [Identity] [PK]
> AccountId (fk)
> HistoryId (fk)
>
> History
> =========================
> HistoryId [Identity] [PK]
> UserId (fk)
> HistorySummaryDesc
> EventDateTime
>
> HistoryDetails
> ===============================
> HistoryDetailId [Identity] [PK]
> HistoryId (fk)
>
> There is a trigger on the Accounts DB that determines the level of
> auditing set up by the user (None, Summary, Detail). If set to none, the
> trigger does nothing. If set to Summary, a new row is inserted into the
> History table with a summary description, then a new row is added to the
> AccountHistory table pointing to the new History table row (the reason is
> that the History table is used to hold ALL history changes everywhere and
> the AccountHistory table is just used as an intersection table.
> Here's my problem. This customer now want to be able to tell from PB what
> the resulting AccountHistory ID was from the Insert/Update that was
> issued from PB. I can't think of a reliable way to do this. Moving the
> History generation into a SP call made from DB is not an option. They
> require it to stay Trigger based.
> Since this in a 100+ user environment, I can't simply expect to do a
> SELECT on the AccountHistory table for the AccountId for the latest entry,
> that may or may not be correct.
> Does anyone have any suggestions?
> Thank you
> John D|||>> I am taking over a system that uses triggers exclusively for all
auditing entries. Here is a sample of the table structure: <<
So based on the DDL that you did not post, it looks like the idiot who
did this uses IDENTITY instead of proper keys so you have no data
integrity and weak temporal model.
>> Moving the History generation into a SP call made from DB is not an
option. They
require it to stay Trigger based. <<
Boy, they hate RDBMS with a passion, don't they? Can you talk them
into buying a audit tool which is what they should have had in the
first place?|||--CELKO-- wrote:
> auditing entries. Here is a sample of the table structure: <<
> So based on the DDL that you did not post, it looks like the idiot who
> did this uses IDENTITY instead of proper keys so you have no data
> integrity and weak temporal model.
>
> option. They
> require it to stay Trigger based. <<
> Boy, they hate RDBMS with a passion, don't they? Can you talk them
> into buying a audit tool which is what they should have had in the
> first place?
>
I suggested that they purchase a log tool such as ApexSQL, but they
insist on being able to examine the audit entries from the app itself.
I can make some minor table adjustments, but a hard requirement is for
it to stay trigger based.
John|||>> I suggested that they purchase a log tool such as ApexSQL, but they
insist on being able to examine the audit entries from the app itself.
<<
Why? Doesn't that give the guy who broke the rules the ability to see
the audit trail he wants to cover up? And the log tool would be
cheaper in the long run than the extreme overhead of using and
maintaining so many triggers.
Have they looked into generational concurrency databases like
Interbase? They would get the audit and the ability to rollback to a
point in time automatically and it would not interfere with normal
operations.
Oh well, time to look for another company to work for, I guess.|||Then you have a real dilemma. If the data necessary to satisfy this
requirement is not accessible by this trigger, and you're not allowed to use
anything but triggers...
You don't need to be a rocket surgeon to figure that one out.
A) You need to use something other than a trigger... How much of the
functional architecture will lie within a trigger, and how much will be
outside is then the only choice to be made... or...
B) you have to get them to reconsider the rules... or...
C) you get to slow leak them for as long as possible while you look for
other work for a more reasonable employer...
"John" wrote:
> --CELKO-- wrote:
> I suggested that they purchase a log tool such as ApexSQL, but they
> insist on being able to examine the audit entries from the app itself.
> I can make some minor table adjustments, but a hard requirement is for
> it to stay trigger based.
> John
>|||> Here's my problem. This customer now want to be able to tell from PB what
> the resulting AccountHistory ID was from the Insert/Update that was
> issued from PB. I can't think of a reliable way to do this. Moving the
> History generation into a SP call made from DB is not an option. They
> require it to stay Trigger based.
How on earth will they use this information? Is this going to tell the user
what they have changed after they do it?
Don't you have the key of the table you have just modified? Can't you use
it, plus the userId who made the change?
One thing you might try is to simply return values from your trigger. The
app would then have to respond to results/messages being sent back. I would
use raiserror with a known number and state and let the client deal with it.
For example, add a trigger (which aren't all bad if you structure them
well.) to the AccountHistory table that raises an error with the new row
number. This will be easy to implement, and the database will have very
little responsibilty in the change. The app will have to become smart
enough to use the error message as informational and keep going.
----
Louis Davidson - drsql@.hotmail.com
SQL Server MVP
Compass Technology Management - www.compass.net
Pro SQL Server 2000 Database Design -
http://www.apress.com/book/bookDisplay.html?bID=266
Blog - http://spaces.msn.com/members/drsql/
Note: Please reply to the newsgroups only unless you are interested in
consulting services. All other replies may be ignored :)
"John" <JohnD_NO_SPAM@.DelWare.com> wrote in message
news:%23Um8MeuMFHA.2384@.tk2msftngp13.phx.gbl...
> Hey Everyone,
> I am taking over a system that uses triggers exclusively for all auditing
> entries. Here is a sample of the table structure:
> Accounts
> =========================
> AccountId [Identity] [PK]
> AccountName
> ......
>
> AccountHistory
> ============================
> AccountHistId [Identity] [PK]
> AccountId (fk)
> HistoryId (fk)
>
> History
> =========================
> HistoryId [Identity] [PK]
> UserId (fk)
> HistorySummaryDesc
> EventDateTime
>
> HistoryDetails
> ===============================
> HistoryDetailId [Identity] [PK]
> HistoryId (fk)
>
> There is a trigger on the Accounts DB that determines the level of
> auditing set up by the user (None, Summary, Detail). If set to none, the
> trigger does nothing. If set to Summary, a new row is inserted into the
> History table with a summary description, then a new row is added to the
> AccountHistory table pointing to the new History table row (the reason is
> that the History table is used to hold ALL history changes everywhere and
> the AccountHistory table is just used as an intersection table.
> Here's my problem. This customer now want to be able to tell from PB what
> the resulting AccountHistory ID was from the Insert/Update that was
> issued from PB. I can't think of a reliable way to do this. Moving the
> History generation into a SP call made from DB is not an option. They
> require it to stay Trigger based.
> Since this in a 100+ user environment, I can't simply expect to do a
> SELECT on the AccountHistory table for the AccountId for the latest entry,
> that may or may not be correct.
> Does anyone have any suggestions?
> Thank you
> John D