
Thursday, March 29, 2012

ENDPOINTs in SQL Server 2005 April CTP

Hi

I've been happily using VS2005 Beta 1 to access ENDPOINTs on SQL Server 2005 Beta 2 for a while now with no problems. Having just upgraded to VS2005 Beta 2 and SQL Server 2005 April CTP I now get this when I make a call to the ENDPOINT web service:

"The request failed with HTTP status 505: HTTP Version not supported."

Also, if I try to access the WSDL just through Internet Explorer by putting "?wsdl" on the end of the url, I get:

"Error 501/505 - Not implemented or not supported"

It all worked fine in previous betas. Any idea whether some new feature or security change is causing this problem, or does it just not work any more?

Thanks for any help.
Colin.

Hmm, this works fine for me. If you go into the SQL Server Surface Area Configuration tool and check the Native Web Services feature, do you see your endpoints there? If you do, make sure they are started.

Also, what OS do you run under: XP or Win2K3? If you run under XP, make sure that the native HTTP listener is started: "net start httplistener".

Niels

|||Hi Niels

Thank you for your reply.

I've just looked under Native Web Services and both endpoints I use are started. I'm using Win2K3. Still having no luck with it, unfortunately. Did you uninstall any previous betas before you installed your current version? I did, so I'm just wondering if that's where my problem is.

Colin.
|||Hi Colin

We recently (since the last beta, but a few IDWs ago) dropped support for HTTP 1.0. In other words, we reject a request from an HTTP 1.0 client; we now require that the incoming request be an HTTP 1.1 request. This was done because we were sending back chunked responses, and chunking isn't allowed in HTTP/1.0. Is it possible to upgrade your client (IE) to use the newer HTTP protocol? When we made the decision we didn't anticipate that there would be many applications out there still using HTTP 1.0.
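Srik's point can be sketched as a toy version check (illustrative Python only, not SQL Server's actual listener code): a server that relies on chunked responses has to turn away pre-1.1 clients, typically with a 505.

```python
# Illustrative sketch only -- not SQL Server's actual HTTP listener logic.
# A server that relies on chunked transfer encoding (an HTTP/1.1 feature)
# must reject requests from HTTP/1.0 clients, typically with a 505.

def check_http_version(request_line: str) -> int:
    """Return the HTTP status to use: 200 if the request may proceed,
    505 ("HTTP Version not supported") if the client is pre-1.1."""
    # A request line looks like: "POST /sql/endpoint HTTP/1.1"
    parts = request_line.strip().split()
    if len(parts) != 3 or not parts[2].startswith("HTTP/"):
        return 400  # malformed request line
    major, _, minor = parts[2][len("HTTP/"):].partition(".")
    try:
        version = (int(major), int(minor))
    except ValueError:
        return 400
    # Chunked responses are only defined for HTTP/1.1 and later.
    return 200 if version >= (1, 1) else 505

print(check_http_version("POST /endpoint HTTP/1.1"))      # 200
print(check_http_version("GET /endpoint?wsdl HTTP/1.0"))  # 505
```

Note that a proxy that silently downgrades requests to HTTP/1.0 on the way through would hit the same 505.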

Srik|||Hello Colin,
Is it possible to get a Netmon sniff of the client/server communication to confirm that this is indeed an HTTP/1.0 request that SQL Server is rejecting?

Thanks,
Anu|||Thanks for your help guys.

I discovered that the endpoint does work properly when using machine names in the URL (so requests are limited to within our network). But when I put an internet domain name in, I get this 'not supported' problem.

It turns out that you are correct: the SQL server is receiving an HTTP 1.0 request, which I found quite odd, because my client is XP with all the latest patches and updates.

Anyway, after further thought I guessed our firewall could be doing something (it is Microsoft ;) ), and it turns out that is where the problem is. As detailed in the TechNet article:

http://www.microsoft.com/technet/prodtechnol/isa/2000/plan/isahttp.mspx

Microsoft ISA Server will forward HTTP 1.1 requests as HTTP 1.0.

Well done, Microsoft. It seems you have to pay for the new Micro$oft ISA 2004 version to get the desired functionality.

Colin.|||

Hi

I have a similar problem. I made a web service with SQL Server 2005 CREATE ENDPOINT. In my office, where we have a domain controller, I needed to start SQL Server on several machines with the same domain user, and all works fine. In the customer's office they don't have a domain; the machines are connected on a LAN, but there are no common users across the machines. When I try to invoke the web service it always shows LOGIN ERROR. I made a user on both machines (same name, same password) and I still cannot access the web service.

Any help?


Monday, March 26, 2012

Encryption of a column in SQL Server 2000

Hi
I have some sensitive info, like credit card numbers, in some of my database tables. Is there a way to encrypt this credit card information in the database? I am running SQL Server 2000 Standard Edition.
Thanks
Mangesh

Mangesh Deshpande wrote:
> Hi
> I have some sensitive info like Credit card number in some of
> my database tables. Is there a way to encrypt these credit card
> information in database. I am running SQL server 2000 standard edition
> Thanks
> Mangesh
No native support for encryption in the database. You can
encrypt/decrypt in the application if you need to. There are a number of
freely available encryption algorithms you can use. Or you could try
securing the database to keep prying eyes away. What are you mostly
concerned about?
David Gugick
Imceda Software
www.imceda.com|||Thanks
I am concerned about the credit card information in the database.
What are the best and most widely used encryption tools with SQL Server 2000?
Mangesh
"David Gugick" wrote:

> Mangesh Deshpande wrote:
> No native support for encryption in the database. You can
> encrypt/decrypt in the application if you need to. There are a number of
> freely available encryption algorithms you can use. Or you could try
> securing the database to keep prying eyes away. What are you mostly
> concerned about?
> --
> David Gugick
> Imceda Software
> www.imceda.com
>|||Mangesh Deshpande wrote:
> Thanks
> I am concerned about the credit card information in the database.
> What is the best and most widely encryption tools used with SQL
> server 2000.
>
Yes, but why are you concerned? Who is going to be able to access this
data? How are they going to access it? Do you have security in place to
prevent unauthorized access to the underlying tables? How does one
currently access the CC information?
You could remove all access to the table and provide a view with a
computed column that just returns the last 4 digits of the CC number for
display: For example:
Create Table order_info3453453 (
cust_name varchar(25) not null,
cc_nbr varchar(16) not null)
DENY ALL ON order_info3453453 TO public
insert into order_info3453453 values ('David', '234234234234234')
insert into order_info3453453 values ('Jeff', '3784662345444441')
Create View order_info_view4324234
as
Select cust_name, cc_nbr = RIGHT(cc_nbr, 4)
From order_info3453453
grant select on order_info_view4324234 to public
select * from order_info_view4324234
cust_name                 cc_nbr
------------------------- ------
David                     4234
Jeff                      4441
I saw your other post regarding backups, so maybe that's what you're
after.
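For what it's worth, the same idea can be applied in application code too. Here is a minimal Python sketch (the mask_cc helper is mine, not part of any library), masking everything but the last four digits rather than returning only them:

```python
# Minimal sketch of the same idea in application code: never hand the
# full card number to display logic, only the masked form.

def mask_cc(cc_nbr: str, visible: int = 4) -> str:
    """Replace all but the last `visible` digits with '*'."""
    if len(cc_nbr) <= visible:
        return cc_nbr
    return "*" * (len(cc_nbr) - visible) + cc_nbr[-visible:]

print(mask_cc("3784662345444441"))  # ************4441
```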
--
David Gugick
Imceda Software
www.imceda.com|||Thanks David
I am primarily an Oracle DBA, and we used Oracle-supplied encryption, which allows the CC info to be accessed only from certain IPs. The users are all protected, but management is worried that if the user info is disclosed and an unauthorized user tries to access the DB, he should be blocked from seeing CC and password info in the database, first by IP and second by encryption. Management kind of likes that idea, but now they want me to do the same thing in SQL Server, and I was looking at various possible scenarios. I found xp_encrypt and tested it, but I am not sure if there are any good tools available.
"David Gugick" wrote:

> Mangesh Deshpande wrote:
> Yes, but why are you concerned. Who is going to be able to access this
> data? How are they going to access it? Do you have security in place to
> prevent unauthorized access to the underlying tables? How does one
> currently access the CC information?
> You could remove all access to the table and provide a view with a
> computed column that just returns the last 4 digits of the CC number for
> display: For example:
> Create Table order_info3453453 (
> cust_name varchar(25) not null,
> cc_nbr varchar(16) not null)
> DENY ALL ON order_info3453453 TO public
> insert into order_info3453453 values ('David', '234234234234234')
> insert into order_info3453453 values ('Jeff', '3784662345444441')
> Create View order_info_view4324234
> as
> Select cust_name, cc_nbr = RIGHT(cc_nbr, 4)
> From order_info3453453
> grant select on order_info_view4324234 to public
> select * from order_info_view4324234
> cust_name cc_nbr
> -- --
> David 4234
> David 4234
> Jeff 4441
>
> I saw your other post regarding backups, so maybe that's what you're
> after.
> --
> David Gugick
> Imceda Software
> www.imceda.com
>|||Mangesh Deshpande wrote:
> Thanks David
> I am primarily a oracle DBA and we used oracle supplied encryption
> which allows the CC info to be accessed from certain IPs. The users
> are all protected but management is worried in case the userinfo is
> disclosed and unauthorized user try to access the DB he should be
> blocked from seeing CC and password info in database by IP and first
> and second by encryption.
> Management kind of like that idea, but now they want me to do the
> same thing in sqlserver and was looking for various possible
> scenarious.
> I found xp_encrypt and tested it but I am not sure if there any good
> tools avaiable.
Maybe this will help:
http://www.windowsitpro.com/Article...44550.html?Ad=1
Do a Google search for ["SQL Server" Encryption] and see what comes up.
There are third-party solutions available for data encryption. You could
also manage this from the app or middle-tier if necessary.
Regarding your comments about unauthorized access:
You can revoke access to the underlying tables using built-in security.
You can grant access to the data to the necessary groups/individuals
using specific grants on stored procedures. If no one has SELECT rights
on the table and cannot execute a procedure that selects the data, then
no one can see it. If you use the computed column technique as I
mentioned in my last post, you can show end users the last 4 digits of
the cc number so those in the company that require access to information
for contact with customers can have enough to go on (obviously they
could also have access to the type of card).
So how would an unauthorized user access the database? Well, first they
would need a valid login and would also need to have sufficient rights
to execute the necessary procedures to return the customer information.
You can probably limit access to the server by IP / MAC Address using
your firewall.
For backups, you do need encryption. And as I mentioned I would check
out LiteSpeed (free trial on web site).
David Gugick
Imceda Software
www.imceda.com|||The encryption/decryption of the data is better handled in the presentation
layer tier of the system. There, you can use the native Crypto32 API calls.
If you are using any of the .NET compilers, these are exposed through
System.Security calls.
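To illustrate the pattern Anthony describes (encrypt in the app tier before the INSERT, decrypt after the SELECT), here is a deliberately simple Python sketch. The toy cipher is an unauthenticated SHA-256 counter-mode keystream made up purely for illustration; real code should use a vetted crypto library, not this:

```python
# Illustrative only: shows the *pattern* of encrypting in the application
# tier before INSERT and decrypting after SELECT. The cipher below is a
# toy SHA-256 counter-mode keystream with no authentication -- in real
# code use a vetted library (e.g. the platform crypto API), not this.
import hashlib

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = _keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

decrypt = encrypt  # XOR stream ciphers are their own inverse

key, nonce = b"app-tier-secret", b"per-row-nonce-01"
stored = encrypt(key, nonce, b"3784662345444441")   # this goes in the CC column
assert decrypt(key, nonce, stored) == b"3784662345444441"
```

The point is only that the database ever sees `stored`, never the plaintext; key management stays entirely in the application tier.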
Sincerely,
Anthony Thomas
"David Gugick" <davidg-nospam@.imceda.com> wrote in message
news:eRbgYT8HFHA.3332@.TK2MSFTNGP15.phx.gbl...
Mangesh Deshpande wrote:
> Thanks
> I am concerned about the credit card information in the database.
> What is the best and most widely encryption tools used with SQL
> server 2000.
>
Yes, but why are you concerned. Who is going to be able to access this
data? How are they going to access it? Do you have security in place to
prevent unauthorized access to the underlying tables? How does one
currently access the CC information?
You could remove all access to the table and provide a view with a
computed column that just returns the last 4 digits of the CC number for
display: For example:
Create Table order_info3453453 (
cust_name varchar(25) not null,
cc_nbr varchar(16) not null)
DENY ALL ON order_info3453453 TO public
insert into order_info3453453 values ('David', '234234234234234')
insert into order_info3453453 values ('Jeff', '3784662345444441')
Create View order_info_view4324234
as
Select cust_name, cc_nbr = RIGHT(cc_nbr, 4)
From order_info3453453
grant select on order_info_view4324234 to public
select * from order_info_view4324234
cust_name cc_nbr
-- --
David 4234
David 4234
Jeff 4441
I saw your other post regarding backups, so maybe that's what you're
after.
--
David Gugick
Imceda Software
www.imceda.com|||I personally like this idea, but cannot imagine how to do that. Any clues ?
Regards,
Daniel
"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:ek0t#UwKFHA.1620@.TK2MSFTNGP14.phx.gbl...
> The encryption/decryption of the data is better handled in the
presentation
> layer tier of the system. There, you can use the native Crypto32 API
calls.
> If you are using any of the .NET compilers, these are exposed through
> System.Security calls.
> Sincerely,
>
> Anthony Thomas
>
> --
> "David Gugick" <davidg-nospam@.imceda.com> wrote in message
> news:eRbgYT8HFHA.3332@.TK2MSFTNGP15.phx.gbl...
> Mangesh Deshpande wrote:
> Yes, but why are you concerned. Who is going to be able to access this
> data? How are they going to access it? Do you have security in place to
> prevent unauthorized access to the underlying tables? How does one
> currently access the CC information?
> You could remove all access to the table and provide a view with a
> computed column that just returns the last 4 digits of the CC number for
> display: For example:
> Create Table order_info3453453 (
> cust_name varchar(25) not null,
> cc_nbr varchar(16) not null)
> DENY ALL ON order_info3453453 TO public
> insert into order_info3453453 values ('David', '234234234234234')
> insert into order_info3453453 values ('Jeff', '3784662345444441')
> Create View order_info_view4324234
> as
> Select cust_name, cc_nbr = RIGHT(cc_nbr, 4)
> From order_info3453453
> grant select on order_info_view4324234 to public
> select * from order_info_view4324234
> cust_name cc_nbr
> -- --
> David 4234
> David 4234
> Jeff 4441
>
> I saw your other post regarding backups, so maybe that's what you're
> after.
> --
> David Gugick
> Imceda Software
> www.imceda.com
>|||I just told you: the .NET framework exposes the Crypto API through the
System.Security objects.
Sincerely,
Anthony Thomas
"Daniel Joskovski" <omnis@.NOSPAMunetREMOVECAPS.com.mk> wrote in message
news:OaaZaK0LFHA.1144@.TK2MSFTNGP09.phx.gbl...
I personally like this idea, but cannot imagine how to do that. Any clues ?
Regards,
Daniel
"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:ek0t#UwKFHA.1620@.TK2MSFTNGP14.phx.gbl...
> The encryption/decryption of the data is better handled in the
presentation
> layer tier of the system. There, you can use the native Crypto32 API
calls.
> If you are using any of the .NET compilers, these are exposed through
> System.Security calls.
> Sincerely,
>
> Anthony Thomas
>
> --
> "David Gugick" <davidg-nospam@.imceda.com> wrote in message
> news:eRbgYT8HFHA.3332@.TK2MSFTNGP15.phx.gbl...
> Mangesh Deshpande wrote:
> Yes, but why are you concerned. Who is going to be able to access this
> data? How are they going to access it? Do you have security in place to
> prevent unauthorized access to the underlying tables? How does one
> currently access the CC information?
> You could remove all access to the table and provide a view with a
> computed column that just returns the last 4 digits of the CC number for
> display: For example:
> Create Table order_info3453453 (
> cust_name varchar(25) not null,
> cc_nbr varchar(16) not null)
> DENY ALL ON order_info3453453 TO public
> insert into order_info3453453 values ('David', '234234234234234')
> insert into order_info3453453 values ('Jeff', '3784662345444441')
> Create View order_info_view4324234
> as
> Select cust_name, cc_nbr = RIGHT(cc_nbr, 4)
> From order_info3453453
> grant select on order_info_view4324234 to public
> select * from order_info_view4324234
> cust_name cc_nbr
> -- --
> David 4234
> David 4234
> Jeff 4441
>
> I saw your other post regarding backups, so maybe that's what you're
> after.
> --
> David Gugick
> Imceda Software
> www.imceda.com
>|||Hi Antony,
O.K. I will try to restate the question.
For example I am using datagrid bound to table in some dataset.
Data table is populated when form is loaded, and data source is updated when
I close the form.
So I can use some logic to decrypt data from table when the form is going to
load and I can encrypt data again before the form is closed. Do you suggest
writing special dataadapter class, or might be some different approach ?
Regards,
Daniel
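One way to structure what Daniel describes, sketched in Python with placeholder crypto (the SecureRecord class and helper names are mine, and base64 merely stands in for a real cipher so the sketch runs):

```python
# Sketch of the load/save pattern described above: decrypt sensitive
# columns when the form loads, re-encrypt before writing back.
# base64 stands in for real encryption purely to keep the sketch runnable;
# it is an encoding, NOT encryption -- substitute a real cipher.
import base64

def decrypt_field(stored: str) -> str:
    return base64.b64decode(stored).decode()

def encrypt_field(plain: str) -> str:
    return base64.b64encode(plain.encode()).decode()

class SecureRecord:
    """Holds a row whose cc_nbr column is encrypted at rest."""
    def __init__(self, row: dict):
        self._row = row

    def load(self) -> dict:
        # Called when the form loads: present plaintext to the UI.
        visible = dict(self._row)
        visible["cc_nbr"] = decrypt_field(visible["cc_nbr"])
        return visible

    def save(self, edited: dict) -> dict:
        # Called before the form closes: re-encrypt before the UPDATE.
        stored = dict(edited)
        stored["cc_nbr"] = encrypt_field(stored["cc_nbr"])
        self._row = stored
        return stored

row = {"cust_name": "Jeff", "cc_nbr": encrypt_field("3784662345444441")}
rec = SecureRecord(row)
assert rec.load()["cc_nbr"] == "3784662345444441"
```

Whether this lives in a custom DataAdapter or a thin wrapper like this is a design choice; the key invariant is that the bound data source only ever holds ciphertext.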
"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:uAyhm73LFHA.2420@.TK2MSFTNGP12.phx.gbl...
> I just told you: the .NET framework exposes the Crypto API through the
> System.Security objects.
> Sincerely,
>
> Anthony Thomas
>
> --
> "Daniel Joskovski" <omnis@.NOSPAMunetREMOVECAPS.com.mk> wrote in message
> news:OaaZaK0LFHA.1144@.TK2MSFTNGP09.phx.gbl...
> I personally like this idea, but cannot imagine how to do that. Any clues
?
> Regards,
> Daniel
> "Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
> news:ek0t#UwKFHA.1620@.TK2MSFTNGP14.phx.gbl...
> presentation
> calls.
>

encryption of column in sqlserver 2000

Hi
I have some sensitive info like Credit card number in some of my
database tables. Is there a way to encrypt these credit card information in
database. I am running SQL server 2000 standard edition
Thanks
Mangesh
Mangesh Deshpande wrote:
> Hi
> I have some sensitive info like Credit card number in some of
> my database tables. Is there a way to encrypt these credit card
> information in database. I am running SQL server 2000 standard edition
> Thanks
> Mangesh
No native support for encryption in the database. You can
encrypt/decrypt in the application if you need to. There are a number of
freely available encryption algorithms you can use. Or you could try
securing the database to keep prying eyes away. What are you mostly
concerned about?
David Gugick
Imceda Software
www.imceda.com
|||Thanks
I am concerned about the credit card information in the database.
What is the best and most widely encryption tools used with SQL server 2000.
Mangesh
"David Gugick" wrote:

> Mangesh Deshpande wrote:
> No native support for encryption in the database. You can
> encrypt/decrypt in the application if you need to. There are a number of
> freely available encryption algorithms you can use. Or you could try
> securing the database to keep prying eyes away. What are you mostly
> concerned about?
> --
> David Gugick
> Imceda Software
> www.imceda.com
>
|||Mangesh Deshpande wrote:
> Thanks
> I am concerned about the credit card information in the database.
> What is the best and most widely encryption tools used with SQL
> server 2000.
>
Yes, but why are you concerned. Who is going to be able to access this
data? How are they going to access it? Do you have security in place to
prevent unauthorized access to the underlying tables? How does one
currently access the CC information?
You could remove all access to the table and provide a view with a
computed column that just returns the last 4 digits of the CC number for
display: For example:
Create Table order_info3453453 (
cust_name varchar(25) not null,
cc_nbr varchar(16) not null)
DENY ALL ON order_info3453453 TO public
insert into order_info3453453 values ('David', '234234234234234')
insert into order_info3453453 values ('Jeff', '3784662345444441')
Create View order_info_view4324234
as
Select cust_name, cc_nbr = RIGHT(cc_nbr, 4)
From order_info3453453
grant select on order_info_view4324234 to public
select * from order_info_view4324234
cust_name cc_nbr
-- --
David 4234
David 4234
Jeff 4441
I saw your other post regarding backups, so maybe that's what you're
after.
David Gugick
Imceda Software
www.imceda.com
|||Thanks David
I am primarily a oracle DBA and we used oracle supplied encryption which
allows the CC info to be accessed from certain IPs. The users are all
protected but management is worried in case the userinfo is disclosed and
unauthorized user try to access the DB he should be blocked from seeing CC
and password info in database by IP and first and second by encryption.
Management kind of like that idea, but now they want me to do the same thing
in sqlserver and was looking for various possible scenarious.
I found xp_encrypt and tested it but I am not sure if there any good tools
avaiable.
"David Gugick" wrote:

> Mangesh Deshpande wrote:
> Yes, but why are you concerned. Who is going to be able to access this
> data? How are they going to access it? Do you have security in place to
> prevent unauthorized access to the underlying tables? How does one
> currently access the CC information?
> You could remove all access to the table and provide a view with a
> computed column that just returns the last 4 digits of the CC number for
> display: For example:
> Create Table order_info3453453 (
> cust_name varchar(25) not null,
> cc_nbr varchar(16) not null)
> DENY ALL ON order_info3453453 TO public
> insert into order_info3453453 values ('David', '234234234234234')
> insert into order_info3453453 values ('Jeff', '3784662345444441')
> Create View order_info_view4324234
> as
> Select cust_name, cc_nbr = RIGHT(cc_nbr, 4)
> From order_info3453453
> grant select on order_info_view4324234 to public
> select * from order_info_view4324234
> cust_name cc_nbr
> -- --
> David 4234
> David 4234
> Jeff 4441
>
> I saw your other post regarding backups, so maybe that's what you're
> after.
> --
> David Gugick
> Imceda Software
> www.imceda.com
>
|||Mangesh Deshpande wrote:
> Thanks David
> I am primarily a oracle DBA and we used oracle supplied encryption
> which allows the CC info to be accessed from certain IPs. The users
> are all protected but management is worried in case the userinfo is
> disclosed and unauthorized user try to access the DB he should be
> blocked from seeing CC and password info in database by IP and first
> and second by encryption.
> Management kind of like that idea, but now they want me to do the
> same thing in sqlserver and was looking for various possible
> scenarious.
> I found xp_encrypt and tested it but I am not sure if there any good
> tools avaiable.
Maybe this will help:
http://www.windowsitpro.com/Article/...4550.html?Ad=1
Go a google search for ["SQL Server" Encryption] and see what comes up.
There are third-party solutions available for data encryption. You could
also manage this from the app or middle-tier if necessary.
Regarding your comments about unauthorized access:
You can revoke access to the underlying tables using built-in security.
You can grant access to the data to the necessary groups/individuals
using specific grants on stored procedures. If no one has SELECT rights
on the table and cannot execute a procedure that selects the data, then
no one can see it. If you use the computed column technique as I
mentioned in my last post, you can show end users the last 4 digits of
the cc number so those in the company that require access to information
for contact with customers can have enough to go on (obviously they
could also have access to the type of card).
So how would an unauthorized user access the database? Well, first they
would need a valid login and would also need to have sufficient rights
to execute the necessary procedures to return the customer information.
You can probably limit access to the server by IP / MAC Address using
your firewall.
For backups, you do need encryption. And as I mentioned I would check
out LiteSpeed (free trial on web site).
David Gugick
Imceda Software
www.imceda.com
|||The encryption/decryption of the data is better handled in the presentation
layer tier of the system. There, you can use the native Crypto32 API calls.
If you are using any of the .NET compilers, these are exposed through
System.Security calls.
Sincerely,
Anthony Thomas

"David Gugick" <davidg-nospam@.imceda.com> wrote in message
news:eRbgYT8HFHA.3332@.TK2MSFTNGP15.phx.gbl...
Mangesh Deshpande wrote:
> Thanks
> I am concerned about the credit card information in the database.
> What is the best and most widely encryption tools used with SQL
> server 2000.
>
Yes, but why are you concerned. Who is going to be able to access this
data? How are they going to access it? Do you have security in place to
prevent unauthorized access to the underlying tables? How does one
currently access the CC information?
You could remove all access to the table and provide a view with a
computed column that just returns the last 4 digits of the CC number for
display: For example:
Create Table order_info3453453 (
cust_name varchar(25) not null,
cc_nbr varchar(16) not null)
DENY ALL ON order_info3453453 TO public
insert into order_info3453453 values ('David', '234234234234234')
insert into order_info3453453 values ('Jeff', '3784662345444441')
Create View order_info_view4324234
as
Select cust_name, cc_nbr = RIGHT(cc_nbr, 4)
From order_info3453453
grant select on order_info_view4324234 to public
select * from order_info_view4324234
cust_name cc_nbr
-- --
David 4234
David 4234
Jeff 4441
I saw your other post regarding backups, so maybe that's what you're
after.
David Gugick
Imceda Software
www.imceda.com
|||I personally like this idea, but cannot imagine how to do that. Any clues ?
Regards,
Daniel
"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:ek0t#UwKFHA.1620@.TK2MSFTNGP14.phx.gbl...
> The encryption/decryption of the data is better handled in the
presentation
> layer tier of the system. There, you can use the native Crypto32 API
calls.
> If you are using any of the .NET compilers, these are exposed through
> System.Security calls.
> Sincerely,
>
> Anthony Thomas
>
> --
> "David Gugick" <davidg-nospam@.imceda.com> wrote in message
> news:eRbgYT8HFHA.3332@.TK2MSFTNGP15.phx.gbl...
> Mangesh Deshpande wrote:
> Yes, but why are you concerned. Who is going to be able to access this
> data? How are they going to access it? Do you have security in place to
> prevent unauthorized access to the underlying tables? How does one
> currently access the CC information?
> You could remove all access to the table and provide a view with a
> computed column that just returns the last 4 digits of the CC number for
> display: For example:
> Create Table order_info3453453 (
> cust_name varchar(25) not null,
> cc_nbr varchar(16) not null)
> DENY ALL ON order_info3453453 TO public
> insert into order_info3453453 values ('David', '234234234234234')
> insert into order_info3453453 values ('Jeff', '3784662345444441')
> Create View order_info_view4324234
> as
> Select cust_name, cc_nbr = RIGHT(cc_nbr, 4)
> From order_info3453453
> grant select on order_info_view4324234 to public
> select * from order_info_view4324234
> cust_name cc_nbr
> -- --
> David 4234
> David 4234
> Jeff 4441
>
> I saw your other post regarding backups, so maybe that's what you're
> after.
> --
> David Gugick
> Imceda Software
> www.imceda.com
>
|||I just told you: the .NET Framework exposes the Crypto API through the
System.Security.Cryptography namespace.
Sincerely,
Anthony Thomas

"Daniel Joskovski" <omnis@.NOSPAMunetREMOVECAPS.com.mk> wrote in message
news:OaaZaK0LFHA.1144@.TK2MSFTNGP09.phx.gbl...
I personally like this idea, but cannot imagine how to do that. Any clues ?
Regards,
Daniel
"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:ek0t#UwKFHA.1620@.TK2MSFTNGP14.phx.gbl...
> The encryption/decryption of the data is better handled in the
presentation
> layer tier of the system. There, you can use the native Crypto32 API
calls.
> If you are using any of the .NET compilers, these are exposed through
> System.Security calls.
> Sincerely,
>
> Anthony Thomas
>
> --
> "David Gugick" <davidg-nospam@.imceda.com> wrote in message
> news:eRbgYT8HFHA.3332@.TK2MSFTNGP15.phx.gbl...
> Mangesh Deshpande wrote:
> Yes, but why are you concerned. Who is going to be able to access this
> data? How are they going to access it? Do you have security in place to
> prevent unauthorized access to the underlying tables? How does one
> currently access the CC information?
> You could remove all access to the table and provide a view with a
> computed column that just returns the last 4 digits of the CC number for
> display: For example:
> Create Table order_info3453453 (
> cust_name varchar(25) not null,
> cc_nbr varchar(16) not null)
> DENY ALL ON order_info3453453 TO public
> insert into order_info3453453 values ('David', '234234234234234')
> insert into order_info3453453 values ('Jeff', '3784662345444441')
> Create View order_info_view4324234
> as
> Select cust_name, cc_nbr = RIGHT(cc_nbr, 4)
> From order_info3453453
> grant select on order_info_view4324234 to public
> select * from order_info_view4324234
> cust_name cc_nbr
> -- --
> David 4234
> David 4234
> Jeff 4441
>
> I saw your other post regarding backups, so maybe that's what you're
> after.
> --
> David Gugick
> Imceda Software
> www.imceda.com
>
|||Hi Anthony,
O.K., I will try to restate the question.
For example, I am using a DataGrid bound to a table in some DataSet.
The data table is populated when the form is loaded, and the data source is
updated when I close the form.
So I can use some logic to decrypt data from the table when the form is going
to load, and I can encrypt the data again before the form is closed. Do you
suggest writing a special DataAdapter class, or perhaps some different approach?
Regards,
Daniel
"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:uAyhm73LFHA.2420@.TK2MSFTNGP12.phx.gbl...
> I just told you: the .NET framework exposes the Crypto API through the
> System.Security objects.
> Sincerely,
>
> Anthony Thomas
>
> --
> "Daniel Joskovski" <omnis@.NOSPAMunetREMOVECAPS.com.mk> wrote in message
> news:OaaZaK0LFHA.1144@.TK2MSFTNGP09.phx.gbl...
> I personally like this idea, but cannot imagine how to do that. Any clues
?
> Regards,
> Daniel
> "Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
> news:ek0t#UwKFHA.1620@.TK2MSFTNGP14.phx.gbl...
> presentation
> calls.
>

Wednesday, March 21, 2012

Encrypting datafile and backups

Hi
I read a good article on EFS:
http://www.sqlservercentral.com/colu...enting_efs.asp
How do you do encryption of backups?
I found that a backup made from encrypted data, if restored, is not
encrypted, as the encryption is done by the OS and not the database.
Also, another interesting question along the same lines:
if the domain account used to start SQL Server (and encrypt the files) is
deleted and recreated with the same login/password, will that user be able
to decrypt the files?
Mangesh
Also, can we recover the encrypted file?
Mangesh Deshpande wrote:
> Hi
> I read good article on EFS
> http://www.sqlservercentral.com/colu...enting_efs.asp
> How do you do encryption of backups
> I found that the backup made from a encrypted data if recovered are
> not encrypted as it is OS encrytion and not database.
> Also another interesting question on same line:
> If the domain account used to start the sqlserver and encryption is
> deleted and recreated with the same login/password will that user be
> able to decrypt the files.
> Mangesh
> Also if we can recover the encrypted file
You can use a tool like LiteSpeed to encrypt your backups (url below).
David Gugick
Imceda Software
www.imceda.com
|||Hi
EFS is based on certificates. If you lose the certificate, you lose access to
your data. If you reset a user's password, his certificates become invalid.
Regards
Mike
"Mangesh Deshpande" wrote:

> Hi
> I read good article on EFS
> http://www.sqlservercentral.com/colu...enting_efs.asp
> How do you do encryption of backups
> I found that the backup made from a encrypted data if recovered are not
> encrypted as it is OS encrytion and not database.
> Also another interesting question on same line:
> If the domain account used to start the sqlserver and encryption is deleted
> and recreated with the same login/password will that user be able to decrypt
> the files.
> Mangesh
> Also if we can recover the encrypted file

Monday, March 19, 2012

Encrypted values are different although the source is the same

Hi
I am in the process of evaluating the SQL 2005 data encryption. I have
noticed something very strange and am hoping that someone would be able to
clear it up for me.
I have a table that holds credit card numbers. It is possible that there
will be two or more rows with the same credit card number.
I have created my master key:
create master key encryption by password =
'***************************************
********'
GO
I have created my certificate:
create certificate cert_sk_admin with subject = 'Certificate for accessing
symmetric keys';
GO
I have created my symmetric key:
create symmetric key sk_CreditCard with algorithm = aes_128 encryption by
certificate cert_sk_admin;
GO
I have added a new column to the credit card table (let's call it CCNO_Enc;
the existing column is CCNO).
I then update the new column:
UPDATE CREDITCARD
SET CCNO_Enc = EncryptByKey(Key_GUID('sk_CreditCard'), CCNO)
GO
The Problem:
If I now query this table (select CCNO, CCNO_Enc FROM CREDITCARD WHERE CCNO
= '123456789'), I get two records back (there are two rows with this CCNO).
The problem is that the encrypted values of the two rows are different. How
can this be if the source value is the same? Our problem with this is that
searching on an encrypted column is very slow (when performing the
decryption). We would like to encrypt the search criteria and use that to do
a direct comparison on the encrypted field (without decrypting it in the
WHERE clause).
The Questions:
1. Why are the encrypted values different?
2. Can this be changed?
Thanks
Craig
CB wrote:
> Hi
> I am in the process of evaluating the SQL 2005 data encryption. I have
> noticed something very strange and am hoping that someone would be
> able to clear it up for me.
> I have a table that holds credit card numbers. It is possible that
> there will be two or more rows with the same credit card number.
> I have created my master key:
> create master key encryption by password =
> '***************************************
********'
> GO
> I have created my certificate:
> create certificate cert_sk_admin with subject = 'Certificate for
> accessing symmetric keys';
> GO
> I have created my symmetric key:
> create symmetric key sk_CreditCard with algorithm = aes_128
> encryption by certificate cert_sk_admin;
> GO
>
> I have added a new column to the credit card table (lets call it
> CCNO_Enc. The existing column is CCNO).
> I then update the new column:
> UPDATE CREDITCARD
> SET CCNO_Enc = EncryptByKey(Key_GUID('sk_CreditCard'), CCNO)
> GO
> The Problem:
> If I now query this table (select CCNO, CCNO_Enc FROM CREDITCARD
> WHERE CCNO = '123456789'), I get two records back (there are two rows
> with this CCNO). The problem is that the encrypted value of the two
> rows is different? How can this be if the source value is the same.
> Our problem with this is that searching on an encrypted column is
> very slow(when performing the decryption). We would like to encrypt
> the search criteria and use that to do a direct comparison on the
> encrypted field (without decrypting it in the where clause)
> The Questions:
> 1. Why are the encrypted values different?
> 2. Can this be changed?
> Thanks
> Craig
This has to do with AES (Rijndael) encryption and its use of something
called cipher-block chaining. That is, the same plain text can be
encrypted with the same key, producing different cipher text. This
thread might explain it in more detail than I can provide:
http://www.eggheadcafe.com/ng/micro...
www.imceda.com
www.quest.com
|||The encryption is salted. There are several severe problems with encryption
if that is not done.
Therefore you cannot use encrypted values as an index. You could use a hash
of the clear text value as an index, but that would also weaken your
security (again, if one knows a clear text value, they can find the
corresponding encrypted value in the database).
For a description of salting (or initialization vector) see
http://www.rsasecurity.com/rsalabs/node.asp?id=2171
HTH,
~ Remus
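Remus's hash-of-the-cleartext idea is often called a "blind index": store a deterministic keyed digest next to the salted ciphertext and search on the digest. A minimal hypothetical sketch in Python (key name and helper are my own; using a keyed HMAC rather than a bare hash limits the dictionary-attack weakening he mentions, at the cost of protecting one more secret):

```python
import hmac
import hashlib

# Hypothetical secret, kept separate from the encryption key.
INDEX_KEY = b"separate-secret-for-the-search-index"

def blind_index(cc_number: str) -> bytes:
    """Deterministic keyed digest stored alongside the salted ciphertext.
    Equal plaintexts produce equal digests, so the column is indexable
    and equality searches need no decryption."""
    return hmac.new(INDEX_KEY, cc_number.encode(), hashlib.sha256).digest()

# Equal card numbers map to the same index value...
assert blind_index("123456789") == blind_index("123456789")
# ...while different numbers (almost certainly) do not.
assert blind_index("123456789") != blind_index("987654321")
```

To search, compute `blind_index` of the criterion in the application and compare it against the stored digest column; the salted ciphertext column is decrypted only for the rows that match.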
"CB" <craig.bryden@.derivco.com> wrote in message
news:%231gDknM9FHA.4076@.tk2msftngp13.phx.gbl...
> Hi
> I am in the process of evaluating the SQL 2005 data encryption. I have
> noticed something very strange and am hoping that someone would be able to
> clear it up for me.
> I have a table that holds credit card numbers. It is possible that there
> will be two or more rows with the same credit card number.
> I have created my master key:
> create master key encryption by password =
> '***************************************
********'
> GO
> I have created my certificate:
> create certificate cert_sk_admin with subject = 'Certificate for accessing
> symmetric keys';
> GO
> I have created my symmetric key:
> create symmetric key sk_CreditCard with algorithm = aes_128 encryption by
> certificate cert_sk_admin;
> GO
>
> I have added a new column to the credit card table (lets call it CCNO_Enc.
> The existing column is CCNO).
> I then update the new column:
> UPDATE CREDITCARD
> SET CCNO_Enc = EncryptByKey(Key_GUID('sk_CreditCard'), CCNO)
> GO
> The Problem:
> If I now query this table (select CCNO, CCNO_Enc FROM CREDITCARD WHERE
> CCNO = '123456789'), I get two records back (there are two rows with this
> CCNO). The problem is that the encrypted value of the two rows is
> different? How can this be if the source value is the same. Our problem
> with this is that searching on an encrypted column is very slow(when
> performing the decryption). We would like to encrypt the search criteria
> and use that to do a direct comparison on the encrypted field (without
> decrypting it in the where clause)
> The Questions:
> 1. Why are the encrypted values different?
> 2. Can this be changed?
> Thanks
> Craig
>
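The behaviour Craig observed can be reproduced with any scheme that mixes a random IV/salt into the ciphertext. A self-contained toy illustration (this is not AES and not SQL Server's actual construction — a hash-derived keystream of my own, purely to show why identical plaintexts encrypt differently):

```python
import hashlib
import os

def toy_encrypt(key: bytes, plaintext: bytes, iv: bytes) -> bytes:
    """XOR the plaintext against a keystream derived from key + IV,
    and prepend the IV so the value remains decryptable."""
    ks = b""
    counter = 0
    while len(ks) < len(plaintext):
        ks += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
        counter += 1
    return iv + bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"sk_CreditCard"                     # stand-in for the symmetric key
cc = b"123456789"
c1 = toy_encrypt(key, cc, os.urandom(16))  # fresh random IV each call...
c2 = toy_encrypt(key, cc, os.urandom(16))
assert c1 != c2                            # ...so identical plaintexts differ
```

Because the IV is drawn fresh per encryption, two rows holding the same card number get unequal ciphertexts, which is exactly what EncryptByKey exhibits and why the encrypted column cannot be compared or indexed directly.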

Sunday, February 19, 2012

enable advanced performance - no UPS

Hi!
I just enabled disk cache and advanced performance on our SQL-server
machine's disks. The computer doesn't have any UPS, so I guess I could be in
trouble if there is a power outage. But how serious is this? The machine is
a dedicated sql server which mainly serves as a development machine. We do
however store some license information in the sql-server, that database is
however backed up on a regular basis.
Which is the worst-case scenario?
- Re-install Win. Serv. 2003?
- Re-install SQL-server?
- Some corrupted/lost datarows in the database if someone worked on it when
the outage happened?
- Complete loss of all data?
- Unusable database files?
Regards,
Peter
hi Peter,
I consider "complete loss of all data" the worst situation for anyone, but I
was thinking about the UPS... Listen, even in Spain lots of organizations
own a UPS; I can't believe you don't have one! It's cheap, isn't it?
current location: alicante (es)
"Peter Hartlén" wrote:

> Hi!
> I just enabled disk cache and advanced performance on our SQL-server
> machine's disks. The computer doesn't have any UPS, so I guess I could be in
> trouble if there is a power outage. But how serious is this? The machine is
> a dedicated sql server which mainly serves as a development machine. We do
> however store some license information in the sql-server, that database is
> however backed up on a regular basis.
> Which are the worst case scenario?
> - Re-install Win. Serv. 2003?
> - Re-install SQL-server?
> - Some corrupted/lost datarows in the database if someone worked on it when
> the outage happened?
> - Complete loss of all data?
> - Unusable database files?
> Regards,
> Peter
>
>
|||"Peter Hartlén" <peter@.data.se> wrote in message
news:OE9flvDSGHA.4384@.tk2msftngp13.phx.gbl...
> Hi!
> I just enabled disk cache and advanced performance on our SQL-server
> machine's disks. The computer doesn't have any UPS, so I guess I could be
> in trouble if there is a power outage. But how serious is this? The
> machine is a dedicated sql server which mainly serves as a development
> machine. We do however store some license information in the sql-server,
> that database is however backed up on a regular basis.
> Which are the worst case scenario?
> - Re-install Win. Serv. 2003?
> - Re-install SQL-server?
> - Some corrupted/lost datarows in the database if someone worked on it
> when the outage happened?
> - Complete loss of all data?
> - Unusable database files?
>
Realistically, the worst case is that you will need to recover your
databases from backup.
However, you should NEVER do this either in production or development. The
reasons for not doing it in production are obvious.
In development, turning on write caching on disks badly distorts the
performance characteristics of the database by eliminating the cost of log
flushing. This can leave performance problems to arise in production, when
you wonder why trying to do 2000 single-row inserts without a transaction is
slow.
David
|||"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> skrev i
meddelandet news:Oj8LdKGSGHA.4920@.tk2msftngp13.phx.gbl...
> Realistically, the worst case is that you will need to recover your
> databsaes from backup.
> However, you should NEVER do this either in production or development.
> The reasons for doing it in production are obvious.
> In development turning on write cashing on disks badly distorts the
> performance characteristics of the databsae by eliminating the cost of log
> flushing. This can leave perforance problems to arise in production when
> you wonder why trying to do 2000 single-row inserts without a transaction
> is slow.
> David
I'm not sure I follow, you say "caching distorts the performance
characteristics of the database", but a 1.5min compared to 12min import
(almost 10 times slower) quite clearly indicates that there is a huge
performance benefit using caching.
Are you saying that after a while, the performance will decrease when using
caching because it messes up the log flushing? Are you even saying caching
invalidates the transaction log characteristics of a database?
Best regards,
Peter
|||"Peter Hartlén" <peter@.data.se> wrote in message
news:uqUda1ZSGHA.1688@.TK2MSFTNGP11.phx.gbl...
> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> skrev i
> meddelandet news:Oj8LdKGSGHA.4920@.tk2msftngp13.phx.gbl...
> I'm not sure I follow, you say "caching distorts the performance
> characteristics of the database", but a 1.5min compared to 12min import
> (almost 10 times slower) quite clearly indicates that there is a huge
> performance benefit using caching.
No, that's a design flaw in your application. See below.

> Are you saying that after a while, the performance will decrease when
> using caching because it messes up the log flushing?
No.

> Are you even saying caching invalidates the transaction log
> characteristics of a database?
Yes. And you generally will not be able to use such caching in production.
So you shouldn't in development.
If you see a 10x difference with write caching, you probably have an
application problem. The most common such problem is flushing the
transaction log too often. You can only flush the log so many times a
second. If you insist on flushing the log after each row of the import
(which is what happens when you don't use a transaction), then you will
severely limit the throughput of your application.
You might well miss this design flaw on a development system which has write
caching enabled.
David
|||Hi David, thanks for your reply!

> If you see a 10x difference with write caching, you probably have an
> application problem. The most common such problem is flushing the
> transaction log too often. You can only flush the log so many times a
> second. If you insist on flushing the log after each row of the import
> (which is what happens when you don't use a transaction), then you will
> severly limit the throughput of your application.
> You might well miss this design flaw on a development system which has
> write caching enabled.
>
I am not an expert when it comes to writing transactional database code, but
I think I have a fairly good grip on the basic functionality.
I know my development machine (using MSDE) has write cache, but our
testserver didn't. My code, starts a transaction at the beginning of the
import and commits it at the end of the transaction (not using any nested
transactions, should I?), unless something went wrong.
I am not sure what you mean by flushing the log, I only start a transaction
and commit it.
The test was performed on the testserver, without, and later on with, write
cache enabled. The code never changed, nor did the hardware itself, only the
write cache, and I got a 10x improvement when enabling write cache.
Perhaps my "single transaction of the entire import" is bad practice, should
I use one large and many smaller transactions during the import?
Are you saying write cache shouldn't impose a 10x performance benefit on a
SQL-server?
Thanks,
Peter
|||"Peter Hartlén" <peter@.data.se> wrote in message
news:eOqfoXZTGHA.5500@.TK2MSFTNGP12.phx.gbl...
> Hi David, thanks for your reply!
>
> I am not an expert when it comes to writing transactional database code,
> but I think I have a fairly good grip on the basic functionality.
> I know my development machine (using MSDE) has write cache, but our
> testserver didn't. My code, starts a transaction at the beginning of the
> import and commits it at the end of the transaction (not using any nested
> transactions, should I?), unless something went wrong.
> I am not sure what you mean by flushing the log, I only start a
> transaction and commit it.
> The test was performed on the testserver, without, and later on with,
> write cache enabled. The code never changed, nor did the hardware itself,
> only the write cache, and I got a 10x improvment when enabling write
> cache.
> Perhaps my "single transaction of the entire import" is bad practice,
> should I use one large and many smaller transactions during the import?
No. One single transaction is just right. The mistake most people make is
omitting the transaction and letting each statement commit by itself.

> Are you saying write cache shouldn't impose a 10x performance benefit on a
> SQL-server?
Yes. The only time write caching should make a huge difference is when the
application is flushing the log (committing transactions) too often.
What sort of performance numbers are you seeing? How are you doing the
import?
David
|||"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:%23EZWYAdTGHA.736@.TK2MSFTNGP12.phx.gbl...
> "Peter Hartlén" <peter@.data.se> wrote in message
> news:eOqfoXZTGHA.5500@.TK2MSFTNGP12.phx.gbl...
> No. One single transaction is just right. The mistake most people make
> is omiting the transaction, and letting each statement commit by itself.
>
> Yes. The only time write caching should makes a huge difference is when
> the application is flushing the log (commiting transactions) too often.
> What sort of performance numbers are you seeing? How are you doing the
> import.
>
Let me explain a bit more.
When a user needs to make a large number of changes to a database, as
in an import, there are four different places the changes have to be made:
The database pages in memory, the database files, the log records in memory
and the log file. The changes are made immediately to the database pages in
memory and the log records in memory. Background processes will then write
(or "flush") the changes to the files on disk. When you commit a
transaction, you must wait for any changes made to the log records in memory
to be flushed to disk.
If you are committing after every statement, which is what happens if you
don't use an explicit transaction, then you must wait for the log file to be
written after each statement. This is the typical case where using write
caching on the disk will make a big difference.
If you do use a transaction, then the changes to the database and log in
memory are written out by background processes. These background processes
don't really benefit from write caching on the disks since they are writing
out large amounts of data. So if you have good transaction scoping, I am
surprised that you see a big performance difference with write caching
enabled.
David
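David's advice (one explicit transaction around the whole import, instead of letting each statement commit and flush the log by itself) looks like this in outline. A hypothetical Python/sqlite3 sketch, not the poster's actual import code — the table and row names are mine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE import_rows (id INTEGER, val TEXT)")

rows = [(i, f"row-{i}") for i in range(2000)]

# Anti-pattern: autocommit -- every INSERT is its own transaction,
# so every row waits for its own log flush.
conn.isolation_level = None          # autocommit mode
for r in rows[:5]:                   # just a few, to show the shape
    conn.execute("INSERT INTO import_rows VALUES (?, ?)", r)

# Preferred: one explicit transaction around the whole import;
# the log is flushed once, at COMMIT.
conn.execute("BEGIN")
conn.executemany("INSERT INTO import_rows VALUES (?, ?)", rows[5:])
conn.execute("COMMIT")

count = conn.execute("SELECT COUNT(*) FROM import_rows").fetchone()[0]
assert count == 2000
```

The same scoping applies in T-SQL (`BEGIN TRANSACTION` ... `COMMIT` around the import) or ADO.NET: the batch's performance then no longer depends on whether the disk lies about write completion, which is exactly why a write cache stops mattering.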
|||Do I feel stupid now or what; I was so sure I had the file-read operation
in a transaction, but it was only the second operation, updating the main
tables with the data from the temporary tables.
Adding a transaction to the file-read operation improved the import time to
near 2 min, compared to 12 min.
Thanks a lot for your patience and explanations, David!
Regards,
Peter
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> skrev i
meddelandet news:e5NXJMdTGHA.5884@.TK2MSFTNGP14.phx.gbl...
> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
> message news:%23EZWYAdTGHA.736@.TK2MSFTNGP12.phx.gbl...
>
> Let me explain a bit more.
> When when a user needs to make a large number of changes to a database, as
> in an import, there are four different places the changes have to be made:
> The database pages in memory, the database files, the log records in
> memory and the log file. The changes are made immediately to the database
> pages in memory and the log records in memory. Background processes will
> then write (or "flush") the changes to the files on disk. When you commit
> a transaction, you must wait for any changes made to the log records in
> memory to be flushed to disk.
> If you are commiting after every statement, which is what happens if you
> don't use an explicit transaction, then you must wait for the log file to
> be written after each statement. This is the typical case where using
> write caching on the disk will make a big difference.
> If you do use a transaction, then the changes to the database and log in
> memory are written out by background processes. These background
> processes don't really benefit from write caching on the disks since they
> are writing out large amounts of data. So if you have good transaction
> scoping, I am surprised that you see a big performance difference with
> write caching enabled.
> David
>

enable advanced performance - no UPS

Hi!
I just enabled disk cache and advanced performance on our SQL-server
machine's disks. The computer doesn't have any UPS, so I guess I could be in
trouble if there is a power outage. But how serious is this? The machine is
a dedicated sql server which mainly serves as a development machine. We do
however store some license information in the sql-server, that database is
however backed up on a regular basis.
Which are the worst case scenario?
- Re-install Win. Serv. 2003?
- Re-install SQL-server?
- Some corrupted/lost datarows in the database if someone worked on it when
the outage happened?
- Complete loss of all data?
- Unusable database files?
Regards,
Peterhi Peter,
I consider that "complete loss of all data" is the worst situation for
anyone, but i was thinking about UPS...Listen to me, even in Spain, lots of
organizations own an UPS, I can't believe it!! It's cheaper, isn't?
current location: alicante (es)
"Peter Hartlén" wrote:

> Hi!
> I just enabled disk cache and advanced performance on our SQL-server
> machine's disks. The computer doesn't have any UPS, so I guess I could be
in
> trouble if there is a power outage. But how serious is this? The machine i
s
> a dedicated sql server which mainly serves as a development machine. We do
> however store some license information in the sql-server, that database is
> however backed up on a regular basis.
> Which are the worst case scenario?
> - Re-install Win. Serv. 2003?
> - Re-install SQL-server?
> - Some corrupted/lost datarows in the database if someone worked on it whe
n
> the outage happened?
> - Complete loss of all data?
> - Unusable database files?
> Regards,
> Peter
>
>|||"Peter Hartln" <peter@.data.se> wrote in message
news:OE9flvDSGHA.4384@.tk2msftngp13.phx.gbl...
> Hi!
> I just enabled disk cache and advanced performance on our SQL-server
> machine's disks. The computer doesn't have any UPS, so I guess I could be
> in trouble if there is a power outage. But how serious is this? The
> machine is a dedicated sql server which mainly serves as a development
> machine. We do however store some license information in the sql-server,
> that database is however backed up on a regular basis.
> Which are the worst case scenario?
> - Re-install Win. Serv. 2003?
> - Re-install SQL-server?
> - Some corrupted/lost datarows in the database if someone worked on it
> when the outage happened?
> - Complete loss of all data?
> - Unusable database files?
>
Realistically, the worst case is that you will need to recover your
databsaes from backup.
However, you should NEVER do this either in production or development. The
reasons for doing it in production are obvious.
In development turning on write cashing on disks badly distorts the
performance characteristics of the databsae by eliminating the cost of log
flushing. This can leave perforance problems to arise in production when
you wonder why trying to do 2000 single-row inserts without a transaction is
slow.
David|||"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> skrev i
meddelandet news:Oj8LdKGSGHA.4920@.tk2msftngp13.phx.gbl...
> Realistically, the worst case is that you will need to recover your
> databsaes from backup.
> However, you should NEVER do this either in production or development.
> The reasons for doing it in production are obvious.
> In development turning on write cashing on disks badly distorts the
> performance characteristics of the databsae by eliminating the cost of log
> flushing. This can leave perforance problems to arise in production when
> you wonder why trying to do 2000 single-row inserts without a transaction
> is slow.
> David
I'm not sure I follow, you say "caching distorts the performance
characteristics of the database", but a 1.5min compared to 12min import
(almost 10 times slower) quite clearly indicates that there is a huge
performance benefit using caching.
Are you saying that after a while, the performance will decrease when using
caching because it messes up the log flushing? Are you even saying caching
invalidates the transaction log characteristics of a database?
Best regards,
Peter|||"Peter Hartln" <peter@.data.se> wrote in message
news:uqUda1ZSGHA.1688@.TK2MSFTNGP11.phx.gbl...
> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> skrev i
> meddelandet news:Oj8LdKGSGHA.4920@.tk2msftngp13.phx.gbl...
> I'm not sure I follow, you say "caching distorts the performance
> characteristics of the database", but a 1.5min compared to 12min import
> (almost 10 times slower) quite clearly indicates that there is a huge
> performance benefit using caching.
No that's a design flaw in your application. See below.

> Are you saying that after a while, the performance will decrease when
> using caching because it messes up the log flushing?
No.

> Are you even saying caching invalidates the transaction log
> characteristics of a database?
Yes. And you generally will not be able to use such caching in production.
So you shouldn't in development.
If you see a 10x difference with write caching, you probably have an
application problem. The most common such problem is flushing the
transaction log too often. You can only flush the log so many times a
second. If you insist on flushing the log after each row of the import
(which is what happens when you don't use a transaction), then you will
severly limit the throughput of your application.
You might well miss this design flaw on a development system which has write
caching enabled.
David|||Hi David, thanks for your reply!

> If you see a 10x difference with write caching, you probably have an
> application problem. The most common such problem is flushing the
> transaction log too often. You can only flush the log so many times a
> second. If you insist on flushing the log after each row of the import
> (which is what happens when you don't use a transaction), then you will
> severly limit the throughput of your application.
> You might well miss this design flaw on a development system which has
> write caching enabled.
>
I am not an expert when it comes to writing transactional database code, but
I think I have a fairly good grip on the basic functionality.
I know my development machine (using MSDE) has write cache, but our
test server didn't. My code starts a transaction at the beginning of the
import and commits it at the end of the import (not using any nested
transactions; should I?), unless something went wrong.
I am not sure what you mean by flushing the log, I only start a transaction
and commit it.
The test was performed on the test server, first without and later with write
cache enabled. Neither the code nor the hardware changed, only the write
cache setting, and I got a 10x improvement when enabling write cache.
Perhaps my "single transaction of the entire import" is bad practice, should
I use one large and many smaller transactions during the import?
Are you saying write cache shouldn't impose a 10x performance benefit on a
SQL-server?
Thanks,
Peter|||"Peter Hartln" <peter@.data.se> wrote in message
news:eOqfoXZTGHA.5500@.TK2MSFTNGP12.phx.gbl...
> Hi David, thanks for your reply!
>
> I am not an expert when it comes to writing transactional database code,
> but I think I have a fairly good grip on the basic functionality.
> I know my development machine (using MSDE) has write cache, but our
> testserver didn't. My code, starts a transaction at the beginning of the
> import and commits it at the end of the transaction (not using any nested
> transactions, should I?), unless something went wrong.
> I am not sure what you mean by flushing the log, I only start a
> transaction and commit it.
> The test was performed on the testserver, without, and later on with,
> write cache enabled. The code never changed, nor did the hardware itself,
> only the write cache, and I got a 10x improvment when enabling write
> cache.
> Perhaps my "single transaction of the entire import" is bad practice,
> should I use one large and many smaller transactions during the import?
No. One single transaction is just right. The mistake most people make is
omitting the transaction and letting each statement commit by itself.

> Are you saying write cache shouldn't impose a 10x performance benefit on a
> SQL-server?
Yes. The only time write caching should make a huge difference is when the
application is flushing the log (committing transactions) too often.
What sort of performance numbers are you seeing? How are you doing the
import?
David|||"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:%23EZWYAdTGHA.736@.TK2MSFTNGP12.phx.gbl...
> "Peter Hartln" <peter@.data.se> wrote in message
> news:eOqfoXZTGHA.5500@.TK2MSFTNGP12.phx.gbl...
> [quoted text snipped]
Let me explain a bit more.
When a user needs to make a large number of changes to a database, as
in an import, there are four different places the changes have to be made:
The database pages in memory, the database files, the log records in memory
and the log file. The changes are made immediately to the database pages in
memory and the log records in memory. Background processes will then write
(or "flush") the changes to the files on disk. When you commit a
transaction, you must wait for any changes made to the log records in memory
to be flushed to disk.
If you are committing after every statement, which is what happens if you
don't use an explicit transaction, then you must wait for the log file to be
written after each statement. This is the typical case where using write
caching on the disk will make a big difference.
If you do use a transaction, then the changes to the database and log in
memory are written out by background processes. These background processes
don't really benefit from write caching on the disks since they are writing
out large amounts of data. So if you have good transaction scoping, I am
surprised that you see a big performance difference with write caching
enabled.
David|||Do I fill stupid now or what, I was so sure I hade the file read operation
as a transaction, but it was the second operation, when updating the main
tables with the data in the temporary tables.
Adding a transaction to the file read operation improved the import time to
nearly 2 minutes, compared to 12 minutes.
Thanks a lot for your patience and explanations, David!
Regards,
Peter
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:e5NXJMdTGHA.5884@.TK2MSFTNGP14.phx.gbl...
> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
> message news:%23EZWYAdTGHA.736@.TK2MSFTNGP12.phx.gbl...
> [quoted text snipped]

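David's point in the thread above, that a single explicit transaction around the whole import lets the log be flushed once at commit rather than after every row, can be sketched in T-SQL. The table and column names here are invented for illustration:

```sql
-- Hypothetical import: one explicit transaction around the whole load,
-- so the log is flushed once at COMMIT instead of after every INSERT.
BEGIN TRANSACTION;

INSERT INTO ImportStaging (LineNo, RawText)
VALUES (1, 'first line of the file');
-- ... one INSERT per file row ...

-- Move the staged rows into the main table inside the same transaction.
INSERT INTO Orders (OrderNo, CustomerId)
SELECT LineNo, 42 FROM ImportStaging;

COMMIT TRANSACTION;
```

With the BEGIN/COMMIT pair removed, each INSERT commits (and synchronously flushes the log) on its own, which is the pattern that write caching was masking on the development machine.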
enable a trigger

Hi!

I know that you can enable/disable a trigger through the alter
table command. Once a trigger is disabled, is there a way
to determine whether it is disabled or enabled? I see there is a
status column in sysobjects ...is that what I should be using?
thanks in advance for any help
Karen|||Karen (kjphipps_357@.hotmail.com) writes:
> I know that you can enable/disable a trigger through the alter
> table command. Once a trigger is disabled, is there a way
> to determine whether it is disabled or enabled? I see there is a
> status column in sysobjects ...is that what I should be using?
> thanks in advance for any help

No. You should use objectproperty(id, 'ExecIsTriggerDisabled')

Note: I typed property from memory. Look up objectproperty in Books
Online to verify that I got it right.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

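Erland's suggestion can be sketched like this; the trigger and table names are hypothetical:

```sql
-- Check whether a trigger is disabled (SQL Server 2000).
-- 'trg_MyTrigger' and 'MyTable' are made-up names for illustration.
SELECT OBJECTPROPERTY(OBJECT_ID('trg_MyTrigger'),
                      'ExecIsTriggerDisabled') AS IsDisabled;
-- 1 = disabled, 0 = enabled, NULL = not a trigger or bad name

-- Disabling and re-enabling via ALTER TABLE:
ALTER TABLE MyTable DISABLE TRIGGER trg_MyTrigger;
ALTER TABLE MyTable ENABLE TRIGGER trg_MyTrigger;
```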
Wednesday, February 15, 2012

Empty Log files and Data files

Hi!
I'm using replication with two databases on SQL 2000. When we began, the log file size was 50 MB and the data file size was 150 MB, but now the log file is 2 GB and the data file is 4 GB. I would like to decrease the log and data files. How do I do this?
(I tried truncate and shrink, but nothing changed.)
Thanks!!!|||For the log file, after replicating, take a full backup of your database, then truncate the log. That should reduce the log file.

I have no other idea about reducing the data file size.

Regards|||Is this on the source, the destination, or the distribution database?
Back up the log, or change the recovery model to Simple, then shrink the log.

For data - shrink the data file.

Are you sure that the data is replicating and it's not building up because transactions can't be cleared?
That would end up with a lot of data in the distribution database and log space used.
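For SQL Server 2000, the steps suggested above look roughly like this. The database and logical file names are placeholders, and note that on a distribution database the log will not shrink while undelivered transactions are still being retained:

```sql
USE MyDb;  -- placeholder database name

-- Back up the log so the inactive portion can be reused:
BACKUP LOG MyDb TO DISK = 'C:\backup\MyDb_log.bak';
-- or, if you do not need the log backup:
-- BACKUP LOG MyDb WITH TRUNCATE_ONLY;

-- Find the logical file names first:
EXEC sp_helpfile;

-- Then shrink each file to a target size in MB:
DBCC SHRINKFILE (MyDb_Log, 50);
DBCC SHRINKFILE (MyDb_Data, 150);
```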