Hi!
I just enabled disk cache and "advanced performance" on our SQL Server
machine's disks. The computer doesn't have any UPS, so I guess I could be in
trouble if there is a power outage. But how serious is this? The machine is
a dedicated SQL Server which mainly serves as a development machine. We do,
however, store some license information in the SQL Server; that database is
backed up on a regular basis.
What is the worst-case scenario?
- Re-install Win. Serv. 2003?
- Re-install SQL-server?
- Some corrupted/lost datarows in the database if someone worked on it when
the outage happened?
- Complete loss of all data?
- Unusable database files?
Regards,
Peter
Hi Peter,
I consider "complete loss of all data" the worst situation for anyone, but
I was thinking about the UPS... Listen, even in Spain lots of organizations
own a UPS; I can't believe you don't have one! It's cheaper than the
alternative, isn't it?
current location: alicante (es)
"Peter Hartlén" wrote:
> [...]
|||"Peter Hartlén" <peter@.data.se> wrote in message
news:OE9flvDSGHA.4384@.tk2msftngp13.phx.gbl...
> [...]
Realistically, the worst case is that you will need to recover your
databases from backup.
However, you should NEVER enable disk write caching, either in production or
in development. The reasons for avoiding it in production are obvious.
In development, turning on write caching on the disks badly distorts the
performance characteristics of the database by eliminating the cost of log
flushing. This lets performance problems lie hidden until production, when
you wonder why doing 2000 single-row inserts without a transaction is slow.
David
|||"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> skrev i
meddelandet news:Oj8LdKGSGHA.4920@.tk2msftngp13.phx.gbl...
> [...]
I'm not sure I follow. You say "caching distorts the performance
characteristics of the database", but a 1.5-minute import compared to a
12-minute one without caching (eight times slower) quite clearly indicates
that there is a huge performance benefit to using caching.
Are you saying that after a while, performance will decrease when using
caching because it messes up the log flushing? Are you even saying caching
invalidates the transaction-log characteristics of a database?
Best regards,
Peter
|||"Peter Hartlén" <peter@.data.se> wrote in message
news:uqUda1ZSGHA.1688@.TK2MSFTNGP11.phx.gbl...
> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> skrev i
> meddelandet news:Oj8LdKGSGHA.4920@.tk2msftngp13.phx.gbl...
> I'm not sure I follow, you say "caching distorts the performance
> characteristics of the database", but a 1.5min compared to 12min import
> (almost 10 times slower) quite clearly indicates that there is a huge
> performance benefit using caching.
No, that's a design flaw in your application. See below.
> Are you saying that after a while, the performance will decrease when
> using caching because it messes up the log flushing?
No.
> Are you even saying caching invalidates the transaction log
> characteristics of a database?
Yes. And you generally will not be able to use such caching in production.
So you shouldn't in development.
If you see a 10x difference with write caching, you probably have an
application problem. The most common such problem is flushing the
transaction log too often. You can only flush the log so many times a
second. If you insist on flushing the log after each row of the import
(which is what happens when you don't use a transaction), then you will
severely limit the throughput of your application.
You might well miss this design flaw on a development system which has write
caching enabled.
David
|||Hi David, thanks for your reply!
> [...]
I am not an expert when it comes to writing transactional database code, but
I think I have a fairly good grip on the basic functionality.
I know my development machine (using MSDE) has write cache enabled, but our
test server didn't. My code starts a transaction at the beginning of the
import and commits it at the end (not using any nested transactions; should
I?), unless something went wrong.
I am not sure what you mean by flushing the log; I only start a transaction
and commit it.
The test was performed on the test server, first without and later with
write cache enabled. The code never changed, nor did the hardware itself,
only the write cache, and I got a 10x improvement when enabling write cache.
Perhaps my "single transaction of the entire import" is bad practice;
should I use one large transaction or many smaller ones during the import?
Are you saying write cache shouldn't give a 10x performance benefit on a
SQL Server?
Thanks,
Peter
|||"Peter Hartlén" <peter@.data.se> wrote in message
news:eOqfoXZTGHA.5500@.TK2MSFTNGP12.phx.gbl...
> Hi David, thanks for your reply!
>
> [...]
> Perhaps my "single transaction of the entire import" is bad practice;
> should I use one large transaction or many smaller ones during the import?
No. One single transaction is just right. The mistake most people make is
omitting the transaction and letting each statement commit by itself.
> Are you saying write cache shouldn't impose a 10x performance benefit on a
> SQL-server?
Yes. The only time write caching should make a huge difference is when the
application is flushing the log (committing transactions) too often.
What sort of performance numbers are you seeing? How are you doing the
import?
David
|||"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:%23EZWYAdTGHA.736@.TK2MSFTNGP12.phx.gbl...
> [...]
Let me explain a bit more.
When a user needs to make a large number of changes to a database, as
in an import, there are four different places the changes have to be made:
The database pages in memory, the database files, the log records in memory
and the log file. The changes are made immediately to the database pages in
memory and the log records in memory. Background processes will then write
(or "flush") the changes to the files on disk. When you commit a
transaction, you must wait for any changes made to the log records in memory
to be flushed to disk.
If you are committing after every statement, which is what happens if you
don't use an explicit transaction, then you must wait for the log file to be
written after each statement. This is the typical case where using write
caching on the disk will make a big difference.
If you do use a transaction, then the changes to the database and log in
memory are written out by background processes. These background processes
don't really benefit from write caching on the disks since they are writing
out large amounts of data. So if you have good transaction scoping, I am
surprised that you see a big performance difference with write caching
enabled.
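These four locations can be illustrated with a toy write-ahead-logging model (my own sketch, not SQL Server internals): a commit waits only for the in-memory log records to reach the log file, while dirty data pages are flushed later by a background process.

```python
# Toy write-ahead-logging model of the four locations: data pages in
# memory, the data file on disk, log records in memory, and the log
# file on disk. COMMIT pays only for flushing the in-memory log
# records to the log file; dirty pages are written later, off the
# commit path, by a background writer.

class ToyDatabase:
    def __init__(self):
        self.pages_in_memory = {}   # data pages (buffer pool)
        self.data_file = {}         # data file on disk
        self.log_in_memory = []     # log records not yet on disk
        self.log_file = []          # log file on disk
        self.log_flushes = 0        # synchronous waits a COMMIT pays for

    def write(self, key, value):
        # Changes land in memory immediately: a page and a log record.
        self.pages_in_memory[key] = value
        self.log_in_memory.append((key, value))

    def commit(self):
        # COMMIT waits for the log flush, the only synchronous disk I/O.
        if self.log_in_memory:
            self.log_file.extend(self.log_in_memory)
            self.log_in_memory.clear()
            self.log_flushes += 1

    def background_writer(self):
        # Runs later, in bulk: flushes dirty pages to the data file.
        self.data_file.update(self.pages_in_memory)

# Committing per row: one synchronous log flush per row.
db_per_row = ToyDatabase()
for i in range(1000):
    db_per_row.write(i, i * i)
    db_per_row.commit()

# One transaction around the whole import: a single log flush.
db_one_txn = ToyDatabase()
for i in range(1000):
    db_one_txn.write(i, i * i)
db_one_txn.commit()

print(db_per_row.log_flushes, db_one_txn.log_flushes)  # 1000 vs 1
```

Disk write caching hides exactly the per-commit wait in this model, which is why it flatters an application that commits too often.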
David
|||Do I feel stupid now or what! I was so sure I had the file-read operation
in a transaction, but it was only the second operation, updating the main
tables with the data from the temporary tables, that was wrapped.
Adding a transaction to the file-read operation improved the import time to
nearly 2 minutes, compared to 12 minutes before.
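So the pattern now looks roughly like this, sketched in Python with SQLite standing in for our SQL Server (table and column names are illustrative, not our real schema): each phase of the import runs in one explicit transaction, so the log is flushed once per phase rather than once per row.

```python
import csv
import io
import sqlite3

# Two-phase import: phase 1 reads the file into a staging table,
# phase 2 updates the main table from the staging table. Each phase
# is one explicit transaction -> one log flush per phase.

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE staging (id INTEGER, qty INTEGER)")
conn.execute("CREATE TABLE main (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.execute("INSERT INTO main VALUES (1, 0), (2, 0)")

# Stand-in for the import file.
data = io.StringIO("id,qty\n1,10\n2,20\n3,30\n")

# Phase 1: file -> staging table, in a single transaction.
conn.execute("BEGIN")
for row in csv.DictReader(data):
    conn.execute("INSERT INTO staging VALUES (?, ?)", (row["id"], row["qty"]))
conn.execute("COMMIT")

# Phase 2: staging -> main table, in a single transaction.
conn.execute("BEGIN")
conn.execute("""UPDATE main SET qty = (SELECT qty FROM staging
                                       WHERE staging.id = main.id)
                WHERE id IN (SELECT id FROM staging)""")
conn.execute("""INSERT INTO main SELECT id, qty FROM staging
                WHERE id NOT IN (SELECT id FROM main)""")
conn.execute("COMMIT")

result = conn.execute("SELECT id, qty FROM main ORDER BY id").fetchall()
print(result)  # existing rows updated, new row inserted
```

Forgetting the BEGIN/COMMIT around phase 1 is exactly the mistake I had made: each INSERT then committed by itself.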
Thanks a lot for your patience and explanations, David!
Regards,
Peter
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> skrev i
meddelandet news:e5NXJMdTGHA.5884@.TK2MSFTNGP14.phx.gbl...
> [...]