Tuesday, March 27, 2012
Encryption; SQL Server 2005 & Windows 2003 Server
Pro EFS:
Indexes, Primary Keys, Foreign Keys, DEFAULTs, and CHECK CONSTRAINTS are preserved.
Database modifications need not consider Encryption.
Patterns & Practices
http://msdn.microsoft.com/library/d...h05.asp
http://msdn.microsoft.com/library/d...MCh18.asp
http://msdn.microsoft.com/library/d...SecDBSe.asp
Other Technical Articles
http://www.microsoft.com/technet/pr...0.mspx?mfr=true
http://www.microsoft.com/technet/pr...fr=true
http://www.microsoft.com/technet/ar...n/sp3sec02.mspx
http://www.microsoft.com/technet/pr...n/sp3sec04.mspx
http://www.microsoft.com/technet/pr...y/sqlorcle.mspx
http://www.microsoft.com/technet/se...phyetc/efs.mspx
http://www.sqlservercentral.com/col...menting_efs.asp
http://www.microsoft.com/technet/pr...5/multisec.mspx
http://www.akadia.com/services/sqls...l#_Toc513865376
http://www.sans.org/top20/2002/mssql_checklist.pdf
Case Study
http://www.microsoft.com/canada/cas...worksafebc.mspx
Anti-EFS:
1. If the file is not created in an Encrypted Directory, the temporary file
created by EFS during encryption remains in clear-text and is vulnerable.
a) cipher.exe /W must be used to wipe the temporary file.
2. EFS will not function in a Clustered Environment.
3. If the Server crashes when an Encrypted File is open, the pagefile.sys will
contain vulnerable clear-text of the Encrypted File on restart.
4. The Windows Administrator(s) can "Set Password ..." on the Key Owner's account,
after which the Key Owner will not be able to access the data.
5. If the Key Owner does not specify a Data Recovery Agent (DRA) AND does not
back up the PKI, the data might become inaccessible under circumstances
such as "4." above.
6. Encrypted Files cannot be backed up to non-NTFS devices except with Windows
Backup utilities.
7. Extra steps must be taken over-and-above conventional SQL Server Backup,
Recovery and Disaster Recovery procedures.
8. The Windows Administrator can access the (otherwise) encrypted data if
SQL Server "BUILTIN\Administrators" is not removed.
9. The Database *.mdf & *.ldf files cannot be moved between domains and retain
the Encrypted Attribute.
10. Stealing a local account password is easy using common hacker tools in
standalone mode.
11. Encrypted files stored on file servers are decrypted on the server and then
transported in clear text across the network to the user's workstation.
Because EFS needs access to the user's private key, which is held in the
profile, the server must be "trusted for delegation" and have access to
the user's local profile.
a) Requires IPSec to secure the file transfer between the file server and
the user machine.
12. "The EnCase EFS Module provides Encrypting File System (EFS) folder a
nd
file decryption capabilities, for locally authenticated users."
(http://www.digitalintelligence.com/...oftware/encase/)"ITContractor" <ITContractor@.discussions.microsoft.com> wrote in message
news:16785415-7A45-4031-A599-86896233DC15@.microsoft.com...
> Any further input would be appreciated ...
> Pro EFS:
>
. . .
> 8. The Windows Administrator can access the (otherwise) encrypted data
> if
> SQL Server "BUILTIN\Administrators" is not removed.
Although this is true, removing the BUILTIN\Administrators account is no
protection against a Windows admin. A Windows admin on the box can shut
down SQL Server, replace the master database, restart, and attach your
encrypted database. Or just restart the instance in single-user mode.
David
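For reference, a minimal sketch (SQL Server 2005 syntax) of checking for and removing the BUILTIN\Administrators login mentioned in item 8; as noted above, this is no protection against an administrator of the machine itself:

-- Remove the BUILTIN\Administrators server login if it still exists
-- (SQL Server 2005 syntax; run as a sysadmin).
IF EXISTS (SELECT 1
           FROM sys.server_principals
           WHERE name = N'BUILTIN\Administrators')
BEGIN
    DROP LOGIN [BUILTIN\Administrators];
END;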
Wednesday, March 7, 2012
Encourage use of UDFs instead of Procedures when possible?
Over the past few months one of the issues that repeats in this forum has to do with the use of stored procedure output as input to other procedures. One method of dealing with this is to convert the stored procedure to a function if possible and use the function for the process.
I was wondering if it would be a good idea to change my database standards to reflect a preference for functions over stored procedures -- especially in a read-only context. I realize that in some cases this might cause performance problems, but I still want to solicit opinions on this issue.
Anybody?
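By way of illustration, here is a rough sketch of why function output composes more easily than procedure output (all object names below are invented for the example):

-- Procedure route: the result set must first be captured in a table
-- before another query or procedure can reuse it.
CREATE TABLE #ActiveOrders (OrderID int, CustomerID int, Amount money);
INSERT INTO #ActiveOrders
EXEC dbo.usp_GetActiveOrders;          -- hypothetical stored procedure

-- Function route: an inline table-valued function composes directly.
SELECT c.CustomerName, o.Amount
FROM dbo.ufn_GetActiveOrders() AS o    -- hypothetical inline TVF
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;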
Don't do it....
Functions will use row-by-row operations (like a cursor);
sprocs will perform set-based operations.
We are dealing with that at my current position, removing UDFs in favor of subqueries or sproc calls.
|||Thank you.|||
UDFs come in three flavors - scalar UDFs, inline table-valued functions and non-inline (multi-statement) table-valued functions. The performance of the TVFs goes in the following order:
#1. Inline TVFs - the TVF definition is expanded into the query, then compiled and optimized. This is similar to how views are resolved in a query.
#2. Non-inline TVFs - cost and row estimates will be inaccurate due to the use of a table variable, so plan choices are limited and performance will suffer in most cases.
Next, scalar UDFs are a different beast altogether. In general, they should be avoided in queries (WHERE clause, SELECT list, etc.) since they introduce performance problems and bad plans. Often, inlining the expression from the scalar UDF into the query will give orders-of-magnitude performance benefits and better plans. In SQL Server 2005, you can use WITH SCHEMABINDING on scalar UDFs so that the query optimizer knows whether a UDF is performing data access or not. This can result in slightly better performance. The main thing to note about UDFs is that you are doing a row-by-row call in queries and there is a lot of overhead to that. For CPU- and calculation-intensive scalar UDFs, you should consider migrating to CLR UDFs since those are compiled and provide better performance for the complex ones.
So the choice of converting the SP logic to UDFs should consider the above points. If you want to reuse results then consider moving the queries to views or inline TVFs. If you want to reuse the logic and calculations then try CLR UDFs for complex calculations, or just inline the calculations in queries for best performance.
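As a rough illustration of the two TVF styles described above (table and column names are invented), note how the inline form is a single SELECT while the multi-statement form spools into a table variable:

-- Inline TVF: expanded into the calling query like a view, so the
-- optimizer can use statistics on the underlying table.
CREATE FUNCTION dbo.ufn_OrdersByCustomer_Inline (@CustomerID int)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderID, OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
);
GO
-- Multi-statement TVF: results go into a table variable, which has no
-- statistics, so row estimates and plan choices can be poor.
CREATE FUNCTION dbo.ufn_OrdersByCustomer_Multi (@CustomerID int)
RETURNS @result TABLE (OrderID int, OrderDate datetime, Amount money)
AS
BEGIN
    INSERT INTO @result (OrderID, OrderDate, Amount)
    SELECT OrderID, OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
    RETURN;
END;
GO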
|||Thank you
Umachandar:
You stated, "... The main thing to note about UDFs is that you are doing a row-by-row call in queries and there is a lot of overhead to that. ..." I want to make sure I clearly understand this. Do you mean this for all function types, or only in the context of scalar UDFs?
|||It is true for scalar UDFs. But depending on the query plan, you may encounter a similar situation for multi-statement TVFs. For example, this will happen if you have a nested loop join. In this case, you will end up querying the TVF for each row in the input, and this will be bad because there are no statistics on the table variable used in the multi-statement TVF.
|||Perfect. Thank you both for your responses.
:-)
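To make the row-by-row overhead discussed above concrete, here is a small sketch (the table, columns and function are invented for the example) comparing a scalar UDF call with the same expression inlined:

-- Scalar UDF: invoked once per row of dbo.Sales, with per-call overhead.
CREATE FUNCTION dbo.ufn_NetPrice (@Price money, @Discount decimal(4,2))
RETURNS money
WITH SCHEMABINDING   -- tells the optimizer the UDF performs no data access
AS
BEGIN
    RETURN @Price * (1.0 - @Discount);
END;
GO
-- Per-row function call.
SELECT SaleID, dbo.ufn_NetPrice(Price, Discount) AS NetPrice
FROM dbo.Sales;
-- Same expression inlined, avoiding the per-row call entirely.
SELECT SaleID, Price * (1.0 - Discount) AS NetPrice
FROM dbo.Sales;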
Friday, February 17, 2012
Empty table still holding Pages
In a new datamart that was just built there is a worktable for importing new data. I use DTS to insert the data and then use a DELETE FROM statement to remove the records from the table. Recently the database ran out of space prematurely, and after some investigating I found that with no rows in the table it was still holding space in that datafile. The output from sp_spaceused for worktable is below (name, rows, reserved, data, index_size, unused):
worktable, 0, 571336 KB, 552000 KB, 8 KB, 19328 KB
Also, when I run a SELECT * FROM worktable the query searches the held pages attempting to return data, even though there are no rows in the table. Has anyone seen this behavior before? Btw, we are using SQL Server 2000 SP3.
--
Jason Strate
j.strate@.comcast.net

When you fire a DELETE FROM ... statement on the table, the extent allocation
remains as it is: even though the rows get deleted from the table, the
allocated extents (and hence the data pages) remain. To remove the allocated
extents as well as the data you will have to run the TRUNCATE TABLE command,
which is much faster and is a minimally logged operation compared to the
DELETE FROM command.
Also, to get correct results from the sp_spaceused command, run DBCC UPDATEUSAGE
... WITH COUNT_ROWS
after any bulk operation like insert/update/delete is performed.
To get a correct count of rows from the table, run the query "select count(*) from
table".
--
-Vishal
|||Truncate table permissions only default to the table owner and members of sysadmin,
db_owner and db_ddladmin, and the permission is not transferable.
If you are not the table owner then you need to be added to the db_ddladmin or db_owner
database role.
Whereas to run DBCC UPDATEUSAGE you will have to be a member of the db_owner
database role.
--
-Vishal
|||Hello
> In a new datamart that was just built there is a worktable for importing new
> data. I use DTS to insert the data and then use a delete from statement to
> remove the records from the table. Recently the database ran out of space
> prematurely and after some investigating I found that with no rows in table
> it was still holding space in that datafile.
To free space from deleted rows in a table you must rebuild or defragment its
indexes. To do this you can run:
DBCC INDEXDEFRAG
DBCC DBREINDEX
CREATE INDEX WITH DROP_EXISTING
You must choose the appropriate method according to your conditions.
Serge Shakhov
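A rough sketch of those options applied to the worktable from the question (the index and column names are assumed for the example):

-- Defragment a specific index online (0 = current database).
DBCC INDEXDEFRAG (0, 'worktable', 'IX_worktable_id');

-- Rebuild all indexes on the table.
DBCC DBREINDEX ('worktable');

-- Rebuild one index in place, keeping its definition.
CREATE INDEX IX_worktable_id ON dbo.worktable (id) WITH DROP_EXISTING;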