SQL Server and My Disaster Recovery Experience

How true it is when they say you learn best from experience.

Last month, I got a call from a client requesting an onsite visit to check why their SQL Server backups were failing. So I went onsite and immediately ran a full database backup. A dreaded error message greeted me. The following is the error:

10 percent processed.
20 percent processed.
30 percent processed.
Msg 3271, Level 16, State 1, Line 1
A nonrecoverable I/O error occurred on file “K:\DATA\xxxxx.mdf:” 1117(The request could not be performed because of an I/O device error.).
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

I felt a brief moment of terror, and the first step I took in solving the situation was to post the error message as a status on my Facebook to let my friends know how undesirable my day was shaping up to be. How genius of me. Right? 🙂

Then I checked the last known good backup and found out that the server had not been backing up successfully for the 10 days before they reported the problem. The client never thought the problem was critical since the applications had been running smoothly and uninterrupted for as long as they could remember.

As I investigated the problem, I concluded that one disk in one RAID 10 array was defective. Worse, that array was the one storing the MDF files, and the mirror in the RAID 10 array did not seem to kick in either. The other arrays, where the LOGs and TEMPDB are located, SEEMED to be working fine. Hardware isn't my expertise, so I never bothered to spend time on what went wrong with the hard disks and RAID controllers and decided to just focus on how to get the data out intact through other means since backing up was failing.

Luckily for me, the TRANSACTION LOGs were doing fine and could be copied. However, as I restored the last known good full backup and the transaction log backups taken right after it, I found out that TRANSACTION LOG backups had stopped too when SQL Server Agent stopped running a few days prior. When I attempted to back up the transaction log, I got a similar I/O error. I then remembered one of Kimberly Tripp's PASS Summit videos on disaster recovery, in which she demoed how SQL Server behaves when an entire disk volume/array goes dark. Then hope came to me. Though no one encounters this every day, and probably many SQL Server professionals never encounter it in their entire careers, I felt some degree of hope that the problem I was about to deal with was solvable. The mere fact that the applications depending on the database that couldn't be backed up were running perfectly fine told me something in SQL Server was allowing the database to stay online and usable.
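For context, the restore chain I attempted looked roughly like the sketch below. The database name, file names, and paths are hypothetical; the point is that every log backup is restored WITH NORECOVERY until the last one brings the database online.

-- A sketch of the restore chain (names and paths are hypothetical).
RESTORE DATABASE ClientDB
FROM DISK = N'D:\Backup\ClientDB_full.bak'
WITH NORECOVERY;

RESTORE LOG ClientDB
FROM DISK = N'D:\Backup\ClientDB_log_01.trn'
WITH NORECOVERY;

RESTORE LOG ClientDB
FROM DISK = N'D:\Backup\ClientDB_log_02.trn'
WITH RECOVERY;   -- the last log backup brings the database online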

My first instincts based on my limited actual experience on disaster and data recovery were to do the following:

  • Never turn off/restart the server nor restart the services, which I followed
  • Ask the client if they could go offline while I resolved the issue, which they allowed, so I turned off the web server running the applications
  • Find a way to get a full backup of the current state of the database, which I failed to do
  • Find a way to copy all the associated files (MDFs, LDFs, etc.) and have them restored on another SQL Server instance, which I also failed to do. The MDF file simply could not be copied using various tools.
  • Google for relevant info. There was a lot, but none was specific enough to assist me one step at a time
  • Ask fellow SQL Server practitioners about it, though I never actually got around to this during the whole exercise of data recovery
  • Watch Kimberly Tripp's video again, which I did, and it gave me some indirect insights into how SQL Server behaves when it is crippled.

What eventually allowed me to recover the data wasn’t accidental but a result of some sound practice in database deployment. There were two things that allowed for recovery:

  1. The LOG files saved the day for me as they happened to be stored on a separate container (disk array) since the day the server was first put up.
  2. SQL Server had ample memory (RAM), more than enough to hold over twice the size of the database concerned.

So what really happened? How did storing the TRANSACTION LOG files on a separate disk array allow the system to stay healthy and recoverable? How did memory help out?

  • SQL Server has the ability to keep data in memory for as long as it can. For read-only queries, SQL Server will not bother to read data from the data files on disk for as long as it can, provided the data isn't flushed out of memory (e.g. when another memory request comes in and there is not enough free memory for SQL Server to grant it).
  • UPDATEs are also reflected in memory, and the transactions are recorded in the TRANSACTION LOGs when a database is in the full recovery model. SQL Server does not need to update the data files for as long as it can avoid it, and the changes are preserved when the TRANSACTION LOGs are backed up in this mode.
  • Coincidentally (well, not really, as the client followed the advice to have ample memory based on earlier recommendations), the ample amount of memory the SQL Server had allowed the memory manager not to flush data out of memory to accommodate requests for other data.
  • What appeared to be a problem, though, was READing from the disks. It appears SQL Server could write to the LOG files but couldn't back them up. This allowed SQL Server to continue updating data in memory and recording the transactions in the log files, which kept the applications working. I could do a SELECT * FROM a table despite the disk read problems (since SQL Server was getting it from memory). I just couldn't back up the database, whether full or transaction log. A quick way to see how much of a database is still sitting in memory is sketched right after this list.
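As a minimal sketch (not necessarily what I ran on that server), you can check how much of each database is cached in the buffer pool using the sys.dm_os_buffer_descriptors DMV:

-- Rough check of how much of each database is currently cached in the buffer pool.
-- sys.dm_os_buffer_descriptors lists every 8KB page SQL Server holds in memory.
SELECT
    DB_NAME(database_id) AS database_name,
    COUNT(*) * 8 / 1024  AS cached_mb
FROM sys.dm_os_buffer_descriptors
WHERE database_id <> 32767           -- exclude the internal resource database
GROUP BY database_id
ORDER BY cached_mb DESC;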

What I eventually did to successfully recover 100% of the critical data are the following:

  • I restored the last known good backup sets (FULL and TRANSACTION LOGS) to another defect-free location
  • I compared the row counts and contents of tables (FROM MEMORY vs LAST KNOWN GOOD BACKUP). It turned out only 2 tables had differences, and these tables were the most critical and frequently used ones.
  • I simply copied the data from memory to another database located on a defect-free disk array (a sketch of this kind of statement follows this list).
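Here is a minimal sketch of that comparison and copy. The database and table names are hypothetical; LiveDB stands for the crippled but still-online database and RecoveredDB sits on a healthy disk array:

-- Compare row counts between the live (in-memory) copy and the restored backup.
SELECT COUNT(*) AS live_rows     FROM LiveDB.dbo.CriticalTable;
SELECT COUNT(*) AS restored_rows FROM RecoveredDB.dbo.CriticalTable;

-- Pull the current in-memory data straight into the recovery database.
SELECT *
INTO RecoveredDB.dbo.CriticalTable_Recovered
FROM LiveDB.dbo.CriticalTable;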

That was it. I recovered everything I needed.

After I made sure everything was fully recovered, had the database installed on a new server, and had the applications running for a few days, the defective server was decommissioned. Before laying it to rest, I played with it, now without the fear of losing data, even though it was still essentially an active 'production' server.

Here are some experiments I did:

  • Tried various file recovery tools to copy the almost 2GB MDF file, to no avail. The attempt always ended up with an I/O error.
  • I tried updating the database and inserting new records into the two critical tables, and it worked flawlessly as if nothing had gone wrong. I was able to INSERT 100k records with no red flags. I was able to edit 5k records and delete 5k records with no complaints whatsoever.
  • I attempted one last FULL BACKUP and the error was now different. It said: "The operating system returned error 21(The device is not ready.) to SQL Server during a write at offset 0x0000000023c000 in file 'K:\DATA\xxxxx.mdf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online. (Microsoft SQL Server, Error: 823)". Apparently, the array could not be accessed anymore. Maybe the added stress I subjected it to finally made it give out. But HEY, I could still access the database and do some updates.
  • Lastly, I rebooted the 'newly decommissioned' database server and it just wouldn't start anymore.

Hope I don’t get to experience this again until I retire from doing SQL Server 😀

**************************************************
Toto Gamboa is a consultant specializing on databases, Microsoft SQL Server and software development operating in the Philippines. He is currently a member and one of the leaders of Philippine SQL Server Users Group, a Professional Association for SQL Server (PASS) chapter and is one of Microsoft’s MVP for SQL Server in the Philippines. You may reach him by sending an email to totogamboa@gmail.com


Performance Tuning : Tiny Queries Help in Identifying Memory Pressure

Often, when given a database server experiencing a severe performance problem, our attention is focused on the obvious. Typically, we consider large, complex, and long-running queries the immediate suspects, and we consider simple, small, short-running queries the least of our worries. In this blog, I'll show you how these tiny queries can help determine whether your SQL Server box is under memory pressure.

First, however, you have to understand how SQL Server utilizes memory. In a nutshell, queries that need a memory allocation 'request' a certain amount of memory from SQL Server's memory broker. SQL Server then computes how much memory it can give to the query. Initially, 25% of the query memory is computed, and the request goes through some more belt-tightening schemes to determine the final amount of memory to grant to the query. When memory is available, the query is immediately granted that amount, executes, and returns the memory right after it has done its work. When memory isn't available, on the other hand, the query is put on hold in a queue until its next turn to go through the cycle of requesting memory, or until it has timed out (a few seconds up to 24 hours).

On a server experiencing severe memory pressure, where simple, small, and short-running queries are put on hold for execution due to a lack of memory, a few lines of query that need 20KB of memory and used to execute in under a second can spend minutes or hours waiting before the requested memory is granted. Only then does it continue to execute and finish its work. It can even time out if memory is never granted. This gives everyone the impression that something is very wrong with the query and/or the server.

If you have these tiny queries put on hold very often by the memory broker, your SQL Server badly needs additional memory (RAM) even if you have 256GB of memory in your machine. If you have these tiny queries timed out after 24 hours of waiting, your server is funny. Though you may have 256GB of RAM in your server, it is simply not enough.

For a server experiencing severe memory pressure, you would get a handful of these tiny queries, which I would personally define as queries needing 2 Megabytes or less of memory to execute, put on hold every 5 seconds. Your circumstances might be different from mine so you can vary this number. For a server with plenty of memory to spare, you would not see these tiny queries put on hold for days or weeks or months or ever.

SQL Server provides a way though to determine if tiny queries are put on hold often. You may capture the results of the following query in a regular interval for days to monitor. I’d usually do 7-day monitoring runs:

SELECT
    COUNT(*) AS small_queries_on_hold
FROM sys.dm_exec_query_memory_grants
WHERE
    requested_memory_kb <= 2048  -- 2 MB
    AND grant_time IS NULL

Do this every 15 seconds or less, depending on how your server is experiencing the pressure. For a quick and simple run, you may execute this query in SSMS when you are experiencing a severe slowdown and press F5 at regular intervals for a few minutes.
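Here is a minimal sketch of such a capture loop, assuming a logging table that you create yourself in a utility database (dbo.MemoryGrantLog is a hypothetical name):

-- Hypothetical logging table for the samples.
CREATE TABLE dbo.MemoryGrantLog
(
    sample_time           DATETIME NOT NULL DEFAULT GETDATE(),
    small_queries_on_hold INT      NOT NULL
);
GO

-- Capture loop: run from a dedicated SSMS window or wrap it in an Agent job.
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.MemoryGrantLog (small_queries_on_hold)
    SELECT COUNT(*)
    FROM sys.dm_exec_query_memory_grants
    WHERE requested_memory_kb <= 2048
      AND grant_time IS NULL;

    WAITFOR DELAY '00:00:15';   -- sample every 15 seconds
END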

If the result for small_queries_on_hold is always zero, you may increase requested_memory_kb up to 10MB. If you still get zero, you may be experiencing problems other than memory pressure.

A non-zero result for small_queries_on_hold suggests you may be experiencing severe memory pressure. It means even small queries have a hard time getting memory grants from SQL Server. This should prompt you right away to request the purchase of additional RAM for SQL Server.

You may also want to run the following query to see if there are queries, regardless of memory size requirements, that are put on hold.

SELECT
    COUNT(*) AS queries_on_hold
FROM sys.dm_exec_query_memory_grants
WHERE
    grant_time IS NULL

Once you have upgraded your server's memory and given more memory to SQL Server, you may want to run the above queries once again to see if there are still small queries being put on hold by SQL Server's memory broker. Chances are, you will see fewer occurrences of small_queries_on_hold, or you will have completely eliminated small queries being put on hold by SQL Server.

Increase your RAM until you no longer see these occurrences, or until you can tolerate the number of small_queries_on_hold within a certain time window.

That is all folks. Hope this helps!

**************************************************
Toto Gamboa is a consultant specializing on databases, Microsoft SQL Server and software development operating in the Philippines. He is currently a member and one of the leaders of Philippine SQL Server Users Group, a Professional Association for SQL Server (PASS) chapter and is one of Microsoft’s MVP for SQL Server in the Philippines. You may reach him by sending an email to totogamboa@gmail.com

Making a Case for SQL Server over SQL Database in Azure

When Windows Azure went online some years back, the buzz prompted me to check on it too. But my main reason for checking out Microsoft's cloud solution was Azure's SQL Database (formerly SQL Azure). Initially, Azure SQL Database was pretty lame if you started comparing it to its on-premises roots (Microsoft SQL Server). A SQL Server veteran's top 10 list of things to find in Azure SQL Database would probably be composed of things that SQL Database doesn't have:

  • no way of backing up databases
  • no automatic database sharding mechanisms
  • no fulltext indexing and searching
  • and yeah it doesn’t do this
  • and it doesn’t do that
  • etc …

Then Microsoft introduced FEDERATION as a way to make things elastic in terms of scaling database growth, and people have hailed the feature as something that will allow Azure SQL Database to make a name for itself, as the feature isn't even available in SQL Server.

Then one begs to ask whether Microsoft could have simply allowed SQL Server to run on Windows Azure instead. Today, Microsoft has given us that power: we can provision, install, and run SQL Server on an Azure Virtual Machine just like we do in an on-premises environment. To some, this could push Azure's SQL Database to the back of the long line.

Why SQL Server Over Azure SQL Database?

The answer is simple. It is so we can run existing applications on Azure that are using SQL Server features not found in Azure SQL Database.

However, if the goal is just to migrate existing SQL Server based applications to the cloud, then we might be missing some of the major reasons why we have the cloud in the first place, and we don't have to go in that direction for this article. People are aware that, aside from wanting to get away from dealing with the nuances of an on-premises infrastructure, going for the cloud to some simply means cloud-scale load. And for those of us who still want an SQL way of dealing with data in the cloud, Azure SQL Database can definitely handle any cloud-scale load thrown its way, as it has inherent mechanisms, like FEDERATION, to address issues related to handling load on a massive scale. But there are also a lot of things to be desired that are still not available in Azure SQL Database. Eventually, Azure SQL Database will mature and we will not have to deal with the missing pieces, but some of us can't wait any longer.

So the question now is what challenges one needs to face to use SQL Server in Azure instead of Azure SQL Database, specifically in a way that will allow cloud scaling. The same questions have been bugging my company as we transition our applications and services over to the cloud (Azure specifically) without throwing away so much of the SQL Server related assets that we currently have. The primary reason for this is that, aside from adopting the cloud, we are still maintaining our on-premises code base until there are no more clients depending on it, and I don't see our existing clients going in that direction so soon. The way I see it, the challenge will be our ability to maintain and run two code bases: one for on-premises, and the other for the cloud. Eventually, the code for the cloud will become the code for on-premises.

So to rationalize all this based on my own and my company's scenario, it seems running SQL Server on an Azure VM will be the way to go. The challenge will be to use SQL Server on Azure as the storage and data management engine of choice for a multi-client SaaS (software-as-a-service) type of system. I have identified a few major issues that I need to deal with from the get-go:

(1) how to handle horizontal scaling

How would I shard? I would think a single database per client would be easier. Given that SQL Server wasn't designed to be automatically elastic, I would probably need to develop a mechanism that creates a new database on some SQL Server instance on an Azure VM as soon as a new client registers. These VMs can be on the same or different Azure instances. The SQL Server instances on Azure VMs can be pre-determined, provisioned, and running on standby, waiting for new registrants.
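A minimal sketch of what such a mechanism could look like, assuming a hypothetical shard catalog table and naming pattern (these names are purely illustrative, not an existing schema):

-- Hypothetical shard catalog kept in a central management database.
CREATE TABLE dbo.ShardCatalog
(
    client_id     INT           IDENTITY(1,1) PRIMARY KEY,
    client_name   NVARCHAR(100) NOT NULL,
    server_name   NVARCHAR(128) NOT NULL,   -- which standby instance hosts the client
    database_name NVARCHAR(128) NOT NULL
);
GO

-- On the chosen standby instance, create the client's database upon registration.
DECLARE @db  SYSNAME       = N'Client_' + CONVERT(NVARCHAR(10), 1001);
DECLARE @sql NVARCHAR(MAX) = N'CREATE DATABASE ' + QUOTENAME(@db) + N';';
EXEC sp_executesql @sql;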

(2) full text

Fulltext indexing is currently not available in Azure SQL Database. To provide a similar service, one needs to look at 3rd party search solutions like Lucene.NET. The difference between the fulltext indexing in SQL Server and that of Lucene is that the former allows you to mix things up within your TSQL queries, in the same TSQL statements. By using SQL Server on an Azure VM, one can have fulltext indexing and searching. However, SQL Server fulltext can only be configured on a per-database basis. To do fulltext across multiple databases across multiple SQL Server instances on various Azure VM instances, one probably has to make SQL Server fulltext work like Lucene.NET and treat each client database as a document.
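For reference, this is roughly what per-database fulltext looks like in T-SQL; the table, column, and index names are hypothetical:

-- Minimal full-text setup on one client database (names are hypothetical).
CREATE FULLTEXT CATALOG ftCatalog AS DEFAULT;

CREATE FULLTEXT INDEX ON dbo.Documents (Body)
    KEY INDEX PK_Documents            -- requires a unique, single-column key index
    WITH CHANGE_TRACKING AUTO;

-- Full-text predicates mix freely with ordinary T-SQL in the same statement.
SELECT d.DocumentId, d.Title
FROM dbo.Documents AS d
WHERE CONTAINS(d.Body, N'"disaster recovery"')
  AND d.CreatedOn >= '20120101';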

(3) reporting services

The simplest implementation for a multi-client setup that I can think of right now is to include the login information in the report's parameters and hijack that info to pick the right database to query. But I am not sure yet if this is a good approach security-wise. The other option is to use the report RDL control and access the RDL files from the apps.

(4) High Availability & Backup

Going for SQL Server on an Azure VM means you have to deal with high availability issues just as you do with on-premises implementations. Though Microsoft makes sure that the Azure instances you use all have redundant instances, there can be downtimes beyond your control that you have to deal with. The good thing about using SQL Server on an Azure VM is that full database backups, like those you do in an on-premises setup, are available.

Given the above challenges, I would definitely go for SQL Server in an Azure VM over Azure SQL Database. It is probably much easier to fashion a custom sharding mechanism or horizontal scaler and a Lucene.NET-like implementation using SQL Server's fulltext than to deal with a lot of Azure SQL Database's missing pieces. By picking SQL Server as the storage of choice, I could probably port the entire system with negligible changes to another cloud infrastructure provider. Should a client decide to run everything off the cloud, that wouldn't be a problem either.

It seems to me now that Microsoft has provided me a potent option with SQL Server on an Azure VM. I don't have to wait until they get everything I need into Azure SQL Database. Surely, my personal and my company's needs make a case for SQL Server in Azure.

**************************************************

Maira-ira Beach
Photographed in Pagudpud, Ilocos Norte

**************************************************
Toto Gamboa is a consultant specializing on databases, Microsoft SQL Server and software development operating in the Philippines. He is currently a member and one of the leaders of Philippine SQL Server Users Group, a Professional Association for SQL Server (PASS) chapter and is one of Microsoft’s MVP for SQL Server in the Philippines. You may reach him by sending an email to totogamboa@gmail.com

Building, Deploying Databases Incrementally On Multiple Servers

SSDT + DACFx = Cool Database Version Control

 

For years, building and deploying a database to multiple clients was all pain and a lot of work. I relied mostly on hand-coded solutions to keep my sanity intact. The goal was simply to synchronize all databases deployed to various clients with a single schema that our company maintains. The goal was very simple, yet it was easier said than done. Often, one or more of these databases receives fewer and fewer updates over time, which makes things difficult to track. Some of this is of our own doing and some is due to the inherent nature of our business, as we don't necessarily and automatically update databases as we wish. Clients need to sign up for and approve updates and upgrades to the system and the databases.

In our case, there are tens of thousands of stored procedures to maintain and hundreds of tables, indexes, and other SQL objects to track. These objects should also be in sync with the applications, as version 1.0 of an app has a different set of SQL Server objects than application version 2.0. And we maintain up to 3 different versions, hoping the clients sign up to upgrade to the latest faster than we can solve all these complications. Some of these objects evolved over more than a decade of operation. In the end, databases get out of sync with the model that we have at hand as versions upon versions change across various clients. Today, the challenges are even greater. As we transition to faster and smaller roll-outs of updates (people nowadays refer to this as a more 'agile' way of evolving applications), new features, changes, fixes, deprecations, or whatever is on the plate increase the chances of our databases out there getting out of sync.

A Viable Solution : SSDT and DACFx

Now enter the realm of SSDT and DACFx. They have provided us a free and automated way of doing what yesterday was a major headache for most of us. Some semblance of these workings started with SQL Server 2005 SP4, but I waited for SSDT to come out before fully embracing the solution. Now there is that 'silver bullet' that allows us to slay a werewolf of database proportions. So what are these tools that let us build databases incrementally and have all the increments deployed in an automatic manner (yes, it only involves a single command with a few parameters to dabble with!)?

The Tools

All we need now are the following:

  • SQL Server Data Tools (SSDT). A free database development tool that allows us to design our databases and package them for deployment, from version 1.0 through a series of minor and major updates to the schema. It is better to check out what it is if you still have not heard of SSDT. Coupled with DACFx, you can build Data Tier Applications (DAC).
  • DACFx. The SQL Server Data-Tier Application Framework, a set of tools, APIs, and services intended to simplify our development, deployment, and management of SQL Server schemas. The framework specifically allows us to build Data Tier Applications (DAC).
  • SQLPackage.Exe, a DAC tool used to deal with DACPACs and a lot more.

How These Things Work

Here, I will simply present a typical scenario everyone can relate to and see how these tools work. Our goal is to deploy TESTDB v1.0 and update it with TESTDB v1.1. Using SSDT, I author my TESTDB v1.0 with, say, Table1 and StoredProcedure1 in it. Then I take a database snapshot.

(Screenshot: taking a snapshot of the database project in SSDT)

This forces SSDT to generate a DACPAC file containing metadata and instructions on how my TESTDB v1.0 is like when it is deployed. For this article I got the file TESTDB_20130301_11-32-20.dacpac under the snapshots folder of the project. The file naming gives me instant version management of some sort and that is neat. Using SQLPackage.Exe, I could go to any client’s SQL Server, fire up CMD and run TESTDB_20130301_11-32-20.dacpac.

(Screenshot: running SQLPackage.Exe from the command prompt)

SQLPackage.Exe will proceed to deploy TESTDB v1.0 on any instance of SQL Server that I want. I basically only need to know the proper connection string to use for the process. Of course there is more to it, but I'll leave all that outside of this article for now, as I wanted to show how easily all this can be achieved.

At this point in time, I got the following:

  • A simple, stand-alone dacpac file that I can use to deploy TESTDB v1.0 on more servers. I can have others or the clients do that for me. No more of those hand-coded scripts that could easily get tampered with or accidentally modified. I can keep this file knowing it sets up schema version 1.0 of my TESTDB, and I can use it weeks, months, or even years later without worrying about recalling what it needs to do.
  • A ‘testdb1’ database sitting on a new server
  • and a head-ache free afternoon

At some point, I will probably need to upgrade TESTDB with new tables and stored procs. We can refer to this as TESTDB v1.1. I can do that with SSDT by opening the TESTDB project, adding the new objects, and taking another snapshot of the latest version of TESTDB to get a new DACPAC associated with TESTDB's new version/updates. I then just have to run the new DACPAC file on new or existing servers. This is where these tools get interesting, as there are varying possible outcomes depending on what SQLPackage.Exe encounters:

  • if testdb1 exists in the target server, SQLPackage.Exe will update the schema as necessary. It will check existing objects, leave them as is or update them as necessary, and add new ones. It will do this without DATA LOSS. Yes, no DATA LOSS.
  • if testdb1 does not exist in the target server, it will simply create testdb1. Objects from previous versions (e.g. TESTDB v1.0) will be recreated up to the latest version (which is TESTDB v1.1)
  • A possibility of DATA LOSS is also detected during this process. For example, if a certain column from v1.0 was changed from VARCHAR to INT in v1.1, SQLPackage.Exe prompts the user and cancels the entire update. THIS prevents DATA LOSS.
  • A version drift is also detected during this process. A user is prompted if a modification of the database schema outside this formal and controlled process of deployment is present. Once detected, it also cancels the entire update. THIS prevents DATA LOSS and structural modifications on databases that were tweaked outside of this process for some valid reasons.

With all these free and simple yet very effective deployment tools, I’d have my weekends for more non-tech activities. Oh well, I should have gone fishing but these tools tell me to tell others so they can go fishing too. 🙂

**************************************************

𝗩𝘂đ—č𝗳𝘂𝗿-đ—Żđ—¶đ—čđ—čđ—Čđ—± đ—Ąđ˜‚đ˜đ—”đ—źđ˜đ—°đ—”, 𝘚đ˜Șđ˜”đ˜”đ˜ą đ˜°đ˜Šđ˜Żđ˜°đ˜€đ˜©đ˜­đ˜ąđ˜źđ˜ș𝘮
Photographed in Mountain Province

**************************************************
Toto Gamboa is a consultant specializing on databases, Microsoft SQL Server and software development operating in the Philippines. He is currently a member and one of the leaders of Philippine SQL Server Users Group, a Professional Association for SQL Server (PASS) chapter and is one of Microsoft’s MVP for SQL Server in the Philippines. You may reach him by sending an email to totogamboa@gmail.com

Database Server Provisioning: Partitions & Logical Drives

People overlook the importance of coming up with the right number and correct combination of storage partitions when setting up a SQL Server machine. More often, people just create partitions equal to the number of disks or arrays they have on the server. For example:

Array 1, Raid 1, Disks 1-2, Partition 1 – Drive C: (OS)
Array 2, Raid 1, Disks 3-4, Partition 2 – Drive D: (Data)
Array 3, Raid 1, Disks 5-6, Partition 3 – Drive E: (Log)
Array 4, Raid 1, Disks 7-8, Partition 4 – Drive F: (TempDB)

This works perfectly, however, only if you are lucky enough to have a server capable of hosting the right number of partitions for your intended files. But most of us live in a world where funds are scarce and we have to deal with a limited amount of resources, such as server hard drives.

So if you are one of those who weren't lucky enough to be given the right amount of funds for the right number of server hard drives, creating storage partitions as if you had the right number of hard drives will save you some future work when you finally get the server with the right number of hard drives.

For example, if you only have 2 drives configured as RAID 1, instead of just creating a single partition (drive C:) to house every file you have in the server, you may want to create the partitions in a manner that mimics your best/ideal configuration even though you have a limited amount of resources underneath. For example:

Array 1, Raid 1, Disks 1-2, Partition 1 – Drive C: (OS)
Array 1, Raid 1, Disks 1-2, Partition 2 – Drive D: (Data)
Array 1, Raid 1, Disks 1-2, Partition 3 – Drive E: (Log)
Array 1, Raid 1, Disks 1-2, Partition 4 – Drive F: (TempDB)

When the bosses grant your wish to have the server configuration you desire, rest assured that you will have fewer tweaks to do when migrating to the newer server. For example, on the newer server, you will have the following:

Array 1, Raid 1, Disks 1-2, Partition 1 – Drive C: (OS)
Array 2, Raid 1+0, Disks 3-8, Partition 2 – Drive D: (Data)
Array 3, Raid 1+0, Disks 9-12, Partition 3 – Drive E: (Log)
Array 4, Raid 1+0, Disks 13-16, Partition 4 – Drive F: (TempDB)

By pre-configuring your servers with the ideal number of partitions regardless of the underlying number of physical drives/arrays, you can simply restore a database backup on the new server, and this saves you from reconfiguring file locations to take advantage of what is in the new server.
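As a quick illustration of why this saves work, here is a sketch of a restore where the drive letters match on both servers, so no file has to be relocated. The database name and paths are hypothetical:

-- Because the drive letters (D: data, E: log) are the same on both servers,
-- a plain restore needs no WITH MOVE clauses.
RESTORE DATABASE SalesDB
FROM DISK = N'F:\Backup\SalesDB.bak'
WITH RECOVERY;

-- If the layouts differed, every file would have to be relocated explicitly:
-- RESTORE DATABASE SalesDB
-- FROM DISK = N'F:\Backup\SalesDB.bak'
-- WITH MOVE N'SalesDB_Data' TO N'D:\Data\SalesDB.mdf',
--      MOVE N'SalesDB_Log'  TO N'E:\Log\SalesDB_log.ldf',
--      RECOVERY;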

Caution: The examples above aren't suggestive of best practice in terms of server/database performance but are mere simple examples to drive a point. Your ideal partition configuration may be more exotic or elaborate than the samples presented, based on your unique circumstances.

**************************************************

Philippine Duck, Anas luzonica (endemic)
Photographed in Candaba, Pampanga

**************************************************
Toto Gamboa is a consultant specializing on databases, Microsoft SQL Server and software development operating in the Philippines. He is currently a member and one of the leaders of Philippine SQL Server Users Group, a Professional Association for SQL Server (PASS) chapter and is one of Microsoft’s MVP for SQL Server in the Philippines. You may reach him by sending an email to totogamboa@gmail.com

Database Server Provisioning: How Many Disks Do I Need For RAID 10?

I have a new server box intended as a dedicated SQL Server 2012 server. With the new box, I am now faced with the question of what the optimal number of physical disks is for each RAID 10 array that I will set up. Having the answer would greatly help me decide how many partitions I will create to support the separation of SQL Server's data, transaction log, and tempdb.

To come up with an answer, I conducted some simple SQLIO tests to determine which array configuration is best suited for the read-query-intensive application that I am about to set up. The tests should answer the following:

  • Is a 6-disk RAID 1+0 array better than a 4-disk RAID 1+0 array?
  • Is an 8-disk RAID 1+0 array better than a 6-disk RAID 1+0 array?

The Test

I carefully devised the following tests:

  • Run an SQLIO READ test using a 10GB file on a RAID 1+0 array with 4, 6, 8, and 10 disks.
  • Run an SQLIO WRITE test using the same configurations as the READ test.
  • Run each SQLIO test case 5 times and take the average of the results.
  • Prior to each test, destroy the RAID configuration using the server's RAID utilities, set it up with full initialization, then create and format the partition.

The Results

The following are numbers produced by SQLIO:

READ Test

Disks    IOs/sec     MBs/sec    Latency Min (ms)    Latency Ave (ms)    Latency Max (ms)
4        1,323.70    10.34      0                   23                  544
6        1,580.82    12.35      0                   19                  562
8        1,817.43    14.19      0                   17                  498
10       1,878.32    14.37      0                   16                  495


WRITE Test

Disks    IOs/sec     MBs/sec    Latency Min (ms)    Latency Ave (ms)    Latency Max (ms)
4        715.82      5.59       0                   44                  152
8        1,197.28    9.35       0                   26                  199

My Conclusion

  • I conducted the READ test first, which influenced my decision to run the full WRITE test only on the 4- and 8-disk arrays.
  • The READ test suggests that 4-disk and 6-disk arrays don't differ significantly in performance.
  • Going beyond 8 disks, performance wouldn't improve much.
  • With only 14 physical disks to utilize, I ended up with 2 RAID 1+0 arrays: one with 4 disks and the other with 8 disks.

Additional Information

  • I only ran each test once on the 6- and 10-disk arrays, so I have not included those figures in the results.
  • I tested the 10-disk array configuration just to confirm if there will be significant performance gain.
  • Disks used were ordinary 250GB SATA drives.

Caution

The results presented here do not, in any way, represent a recommendation. They merely reflect my specific configuration. Results may differ on other server configurations, and I have no way to know the possible results of other tests using different server configurations. It is therefore highly recommended that you conduct your own testing when you have a new server.

**************************************************
Mountain Province, Philippines

**************************************************

Toto Gamboa is a consultant specializing on databases, Microsoft SQL Server and software development operating in the Philippines. He is currently a member and one of the leaders of Philippine SQL Server Users Group, a Professional Association for SQL Server (PASS) chapter and is one of Microsoft’s MVP for SQL Server in the Philippines. You may reach him by sending an email to totogamboa@gmail.com

Database Server Provisioning: What’s Best Practice, What’s Real

Many months ago I was tasked to recommend an upgrade to the server hardware of a medium-sized, mission-critical, dedicated OLTP database server running SQL Server 2005. The client has gone through the regular 5-year maintenance/upgrading cycle and has added more applications that were not previously foreseen, putting added stress on the database server. The combined size of the databases is rather small, roughly around 50GB accumulated over almost a decade, around 15GB of which is highly active. Furthermore, the server caters to roughly 6000 active users, with about 10% of them doing complex, highly relational computation and aggregation. Most of the processing is complex querying on large data sets and is largely read-intensive.

Though the database server is holding up, it is now considered near end-of-life. Server utilization has increased to almost 60%-80% during regular working hours, and on occasional peaks, performance degrades noticeably.

The client's upgrading directive was for me to find the best possible configuration for the least amount of money. And in the segment that I am active in, it is almost impossible to get a good idea of what is best out there based on an actual production setup and experience. With the absence of such information and with the lack of experience (as I do NOT get to do upgrades on a monthly or regular basis, who does anyway?), the most logical thing to do is check on the best-practice articles/guides that are vastly available over the Internet.

What is Best Practice?

What's interesting about most of these articles and guides by Microsoft and various 3rd party experts is that they are anchored on the premise of having an ideal environment under the best possible circumstances. And commonly, you will see that these best practices are presented not with various limitations in mind, but rather so as to almost always take advantage of the best/top solution available out there. One would really have to work his way through these articles to get a balanced solution that is acceptable to a certain level if his circumstances are far from ideal. But there is nothing wrong with these, and this is exactly why these are called 'best practices'. No one would be able to foresee all the variables and factors that can affect the outcome of what your ideal setup will become.

What is Real?

Out there, however, reality bites. We all have to work with limits. Out there we are given constraints: a limited budget, hardware vendors that carry only certain server models (some needed devices have to be imported from another country), and often we don't have the luxury of playing around with the real stuff until one is paid for and delivered to us. And there are lots of other unique and strange variables that we have to hurdle.

It is more than necessary to do some heavy sleuthing to come up with options for the client without being extravagant or unreasonably idealistic, so that one of these options can be acceptable while still adhering to most, if not all, of the ideals that are staple in these best practices.

The Recommendations

So I came up with the following:

  • Recommendation #1: Upgrade 2005 to SQL Server 2012. Though SQL Server 2005 can provide what the client requires today, the client's thirst for technological solutions now seems to be picking up pace; they seem bent on evolving their systems rapidly. This kind of upgrade needs to be justified, though; for one, SQL Server 2012 for a 12-core server doesn't come cheap. So the justification here is that SQL Server 2005 SP2 already reached end-of-life in 2010, while SP3's life is on lease. Furthermore, the boatload of new features and performance enhancements in 2012 can surely justify the upgrade. Though initially the goal is to simply move to 2012, once done, the direction is to take advantage of the bevy of new features available in 2012. FILETABLE and the new FULLTEXT SEARCH capabilities come to mind rather quickly. I'd be able to make a case for upgrading just for these two features alone.
  • Recommendation #2: I decided to stick with Windows Server 2008 R2, mainly because the client is not ready to upgrade its Active Directory to 2012 across all servers. Two years ago, we upgraded the OS/Active Directory from Windows 2003 and every server moved to 2008 R2. Running on 2008 R2 has been flawless for the past years, so I believe Windows 2012 at this point is not necessary. 2008 R2 is very much potent and very stable. I would probably recommend they skip Windows 2012 entirely and just wait for the next/future releases of this server OS.
  • Recommendation #3: A mid-level DAS server that can handle more than 8 bays without special kits. IBM allowed me to have a server that can take up to 24. The old servers that were acquired only had 8 bays for DAS HDDs. SANs or servers allowing more bays were then beyond reach for this client, money-wise; that was until IBM introduced certain server models that are very much affordable. With these new servers, I will be able to configure more RAID arrays.
  • Recommendation #4: From a 2008-generation 2 x 4-core setup, the client is moving to a two-socket server with 6 cores per socket, for a total of 12 cores of current-generation technology with faster speed, hyper-threading, and double the cache. The 8-core-per-socket option is beyond reach as this would also increase the spending on SQL Server 2012, which can now only be licensed on a per-core basis. Besides, there aren't many CPU issues with the existing setup. Having this new processing power should provide ample room for growth in the coming years.
  • Recommendation #5: From 16GB of RAM, it would now be 32GB. A fairly good portion of the database will fit into this space. As RAM gets cheaper in the coming years, we could probably take this server to 64GB and make RAM even less of an issue. Besides, the current database is still small. But having double the RAM for now will tremendously help and should cover some unforeseen growth in the coming years. There is a big chance that newer apps will utilize this server.
  • Recommendation #6: From the existing 6 x 7krpm SATA spindles fashioned on a single RAID controller (configured as RAID 1 and RAID 5), I recommended 14 x 7krpm SATA drives on 2 RAID controllers. This is where I did a lot of thinking, weighing the client's directive to spend the least amount of money while still achieving some of the ideals of what a best practice should be. After all, this is the most critical and the slowest of all the system's components. Going for the considerably faster 15krpm SAS drives would have quadrupled the storage cost. My notion here is that, with Recommendation #3 in hand, I would still gain substantial disk I/O performance over the existing setup if I could spread at least three RAID 1+0 arrays across 2 RAID controllers sharing the load. This way, I would be able to separate DATA, LOG, and TEMPDB. I understand the best practice out there is to have at least 3 arrays so SQL Server's data, log, and tempdb can each be placed on a separate array; that is what people are evangelizing, or so it seems.

What Came Out of My Recommendations

Now that the client has approved all my recommendations and the components for upgrading were delivered by various vendors, I would like to show what came out of it.

All recommendations went as expected other than Recommendation #6, which turned out to be interesting and showed another example of when a best practice gets tweaked down to an acceptable level, and when harsh reality bites. The 2 RAID controller setup was not possible (the vendor claimed). The first RAID controller (built-in) gets disabled by the second RAID controller (add-on). I was fed wrong information from the start by another vendor and only got to confirm that it would not work after the servers were delivered and configured. Unfortunately, the client ordered the server from a different vendor, so they cannot hold that vendor responsible for the wrong information. The client cannot afford to wait for another 30-day delivery, so it has to deal with a single RAID controller setup.

So why 14? Why not 24?

First, given an extra push, the client could have gone for 24. However, what concerned me at the time was that I couldn't be certain of the performance gain I would get with the 24, NOT until the client had paid and the server was delivered. This is one of the realities I have to face: server hardware vendors around here aren't familiar with these realities, and they can't give you figures on how their machines would fare given the load you expect. So I recommended a safe setup. I came up with 14, thinking that if the performance gain over the existing setup comes up short in one of the unforeseen rounds of expansion, I would still be able to tell the client to get the faster 15krpm SAS drives to fill the remaining bays without decommissioning a few hard disks. And now, after playing with the actual machine, my caution seems to have hit the mark.

Upon testing using SQLIO, I seem to always get optimal performance with an array of 8 spindles. Increasing the spindles does not improve things much further. I am not sure if this is true with a different server brand/specs/configuration. So now I ended up with the following:

Array 1, RAID 1, 2xDisks, Single Partition (OS, MSSQL)
Array 2, RAID 1+0, 8xDisks, Single Partition (DATA)
Array 3, RAID 1+0, 4xDisks, Two Partitions (LOG @ Partition 1, TEMPDB @ Partition 2)

The above setup, far from being ideal, seems to have gained me more than double the I/O performance of the existing setup just by running SQLIO. Testing further and being able to spread SQL Server's data, log, and tempdb across arrays improved the system's performance way beyond what I expected. I'll probably have another blog discussing at length how I came to this decision. Some SQLIO numbers would definitely shed light on this.
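For completeness, relocating tempdb onto its dedicated partition is a one-time step after the arrays are in place. This is a minimal sketch; the drive letter and folder are hypothetical, and the new paths take effect only after the SQL Server service restarts:

-- Move tempdb's data and log files to the dedicated partition (path is hypothetical).
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'F:\TempDB\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'F:\TempDB\templog.ldf');
-- Restart the SQL Server service for the new file locations to take effect.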

In the end, I believe I was able to satisfy what was required of me without being unreasonably idealistic.

**************************************************

Bataan, Philippines

**************************************************
Toto Gamboa is a consultant specializing on databases, Microsoft SQL Server and software development operating in the Philippines. He is currently a member and one of the leaders of Philippine SQL Server Users Group, a Professional Association for SQL Server (PASS) chapter and is one of Microsoft’s MVP for SQL Server in the Philippines. You may reach him by sending an email to totogamboa@gmail.com

Database Server Provisioning: Items Of Concern Using Commodity Servers

There was a series of requests for info directed at me lately from clients and acquaintances on how best to come up with the right server specification for their SQL Server based applications.

After some substantial research, I came to the conclusion that preparing a new environment for SQL Server isn't an exact science. It is almost always followed by lots of questions whose answers are hard to come by. The complexity of coming up with a sound specification is heightened by the lack or low frequency of our exposure to the task at hand. I mean, how many of us are tasked to come up with a new server for some new SQL Server based application on a daily or even weekly basis?

So I thought of compiling what I have stumbled upon on the Web, in online fora, and from interactions with various hardware vendors. This compilation of information should serve as a starting point for coming up with a sound configuration for a machine that will run SQL Server, and it will also serve as my guide and reminder of the things that I need to be concerned about.

Today, we have so many options to choose from. But I only need to focus on those that will affect SQL Server’s performance using commodity servers. So what is in the market today that needs our attention?

CAUTION: I'll assume that the server will be dedicated solely to SQL Server. I won't cover virtualized scenarios here.

Processing Unit

  • 64bit. Going 64bit is the way to go for most of us. This allows us to go beyond the memory addressing limits of a 32bit system, and SQL Server need not task itself with the extra memory handling imposed by 32bit.
  • Speed. Faster core speed generally gives us better performance. So I'd say buy the fastest per-core speed your money can buy.
  • Cache. A larger cache is better. Cache allows the CPU to keep frequently used data nearer and faster to the processing unit. The more it can accommodate, the better.
  • Thread. Today's processing units offer flavors that have multi-threading in the cores. Extra threads allow us to have more 'processors' (logical) than what we actually have (physical) to work on a given task. SQL Server takes advantage of this. In fact, the maker of SQL Server states that the compute capacity of the 2 logical processors in a hyperthreaded core is greater than the compute capacity of the same core with hyperthreading disabled. So go for the processors with threading.
  • Cores. Today's commodity servers typically have 4, 6, or 8 cores per socket. Go for the highest number your money can buy. Be aware though that starting with SQL Server 2012, the number of cores that you have comes with licensing implications, so you need to check on these concerns too.
  • Sockets. Today's commodity servers have 1, 2, or 4 sockets. Though your application may need only 1 processor, it is sensible to factor in growth and future expansion options. You may opt for a server with spare sockets for growth. However, do take note that you may have difficulties acquiring additional processors for your server if you decide to expand after several years, so it also makes sense, if you have enough money, to fill the spare sockets with the processors you need now. Be aware though that starting with SQL Server 2012, the number of sockets that you have comes with licensing implications, so you need to check on these concerns too. (A quick query to see how many sockets and logical processors SQL Server actually sees is sketched right after this list.)
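As a hedged sketch, the following query shows what SQL Server reports about the processors and memory it is running on (physical_memory_kb exists under this name on SQL Server 2012 and later; older builds expose a slightly different column):

-- What SQL Server sees of the hardware underneath it.
SELECT
    cpu_count,                                -- logical processors visible to SQL Server
    hyperthread_ratio,                        -- logical processors per physical socket
    cpu_count / hyperthread_ratio AS sockets, -- derived number of physical sockets
    physical_memory_kb / 1024     AS physical_memory_mb
FROM sys.dm_os_sys_info;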

Memory

  • Size. More is better. Period! 🙂
  • Density. Go for denser memory modules so you don't quickly fill the limited number of memory slots meant for expansion. E.g., if your server has 4 memory slots and can handle up to 32GB max and you need 16GB of memory, it is better to get two 8GB modules than four 4GB modules so you still have two slots for expansion. Expanding to 32GB would otherwise render your 4GB modules unusable.

Storage and Related Components

Probably the slowest components in your server, choosing the right combination of storage and related components can lead you to a better, performant SQL Server database server.

  • Capacity. In SQL Server terms, storage capacity isn't disk size. SQL Server has various ways of utilizing storage. You can have multiple filegroups across several arrays or disks in multiple enclosures. Size is really dictated by what your application needs. Determine your size estimates first and keep enough buffer for growth. If you can expand capacity in the future without having to shut down and reconfigure your system, that, for me, is the most sensible path to take.
  • Disk Size. Individual disk size may or may not affect SQL Server, but do note that size can affect seek speed: heads can travel farther in larger-capacity drives. So go easy with larger disks. If you are building an array of multiple drives, you may prefer building it with smaller-capacity drives.
  • SATA, SAS. To make things simple, SAS drives are faster but more expensive than SATA drives.
  • SSD. Solid State Drives are the future, but they are here now. Though I have not personally tried them yet in a production server, vendor and customer testimonies tout SSDs as superbly faster in a lot of cases and say they significantly increase the server's performance. I have read about lingering reliability issues, but things are getting better each day. I'd probably want to utilize SSDs for my tempdb.
  • RAID. There should be more than enough information on RAID and SQL Server on the web. It is one component an SQL Server DBA needs to be concerned with, and familiarity is key. There are plenty of storage/server options that allow you to have multiple RAID controllers and build various array types; it is just impossible to cover them all in this article, so it is always better to check with your vendor. However, I believe that for SQL Server, the more RAID arrays you have spread across several controllers, using the fastest, most reliable, and less dense disk drives, the better. This way, you can freely spread load, such as separating transaction logs, tempdb, and data files onto their own arrays. You may check other blogs on this matter. (A quick query to see how your database files are currently spread across drives is sketched right after this list.)
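As a small illustrative sketch, this query summarizes where the data and log files of all databases currently sit, grouped by drive letter, which is handy when checking whether the load is actually spread across arrays:

-- Where every database file currently lives, grouped by drive letter.
SELECT
    LEFT(physical_name, 2)  AS drive,
    type_desc,              -- ROWS (data) or LOG
    COUNT(*)                AS file_count,
    SUM(size) * 8 / 1024    AS total_mb   -- size is reported in 8KB pages
FROM sys.master_files
GROUP BY LEFT(physical_name, 2), type_desc
ORDER BY drive, type_desc;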

Ethernet Ports

  • Bandwidth. Servers with 1Gbps ports are now the norm, but I have seen some server models with 10Gbps ports. However, you can only take advantage of these high-volume ports if your network (cabling, switches, etc.) supports them. I would assume that within the datacenter, or at least within your servers' rack/enclosure, you have the fastest links. SQL Server will take advantage of whatever bandwidth you have in your box.

Blades, Storage Area Networks, NAS and External Storage options

These types of options present a slightly different set of concerns to an SQL Server DBA compared to standard/commodity servers. But the nuances of the common components I have mentioned above can still be applied to these options. I would love to expand this article to cover them. This is all for now!

**************************************************

Whiskered Tree-Swift, Hemiprocne comata (female)
Photographed in Mt. Palay Palay, Cavite

**************************************************
Toto Gamboa is a consultant specializing on databases, Microsoft SQL Server and software development operating in the Philippines. He is currently a member and one of the leaders of Philippine SQL Server Users Group, a Professional Association for SQL Server (PASS) chapter and is one of Microsoft’s MVP for SQL Server in the Philippines. You may reach him by sending an email to totogamboa@gmail.com

Cleaning Up Old Database Dirt with SQL Server 2012

I am about to venture into transitioning a 10-year-old database that started with SQL Server 2000 and has surely accumulated some (a lot of?) 'dirt' in it way back since 2001. In the past (and I am sure it still is the case with the 2012 edition), SQL Server has been very forgiving of all the sins of its handlers (DBAs, developers, users, and pretenders). It has allowed a lot of minor issues to creep into the database without constantly complaining and without compromising its ability to run in production. In a perfect world, those issues would have been eradicated a long time ago.

The 'dirt' has accumulated because SQL Server hasn't been pestering people that there is some cleaning up to do. In the real world, that is acceptable. We just can't stop the show after someone spills water on the stage floor. After 10 years, with better tools and ways to clean up dirt, it would be a worthwhile exercise so we can navigate easily through the challenges of modern times.

Enter SQL Server 2012: it comes packed with efficient and reliable tools to accomplish this task of cleaning things up. To start the cleaning process, though, I need to see the mostly unseen 'dirt'. SSDT comes in handy for this job.

Using SQL Server Data Tools (SSDT) As Dirt Probe

Here is how to begin identifying what needs to be cleaned.

Then allow SSDT to scan your database for ‘dirt’. I am actually referring to ‘errors’ and ‘warnings’!

After scanning, go to the Error List tab and VOILA!!! now you see the ‘dirt’.
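To give an idea of the kind of 'dirt' that shows up, here is a hypothetical example. Because of deferred name resolution, SQL Server happily creates a procedure that references a table that no longer exists; plain SSMS only complains when the procedure is executed, while SSDT flags the unresolved reference at build time. All names below are made up for illustration:

-- Deferred name resolution lets this be created even if dbo.CustomerArchive
-- was dropped years ago; SSDT reports the unresolved reference at build time.
CREATE PROCEDURE dbo.GetArchivedCustomers
AS
BEGIN
    SELECT CustomerId, CustomerName
    FROM dbo.CustomerArchive;   -- table no longer exists, procedure left behind
END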

Armed with a long, ugly list, you can now hire a cleaner to scrub off each and every piece of dirt you have identified.

That is all for now for the cleaning up!

**************************************************
Toto Gamboa is a consultant specializing on databases, Microsoft SQL Server and software development operating in the Philippines. He is currently a member and one of the leaders of Philippine SQL Server Users Group, a Professional Association for SQL Server (PASS) chapter and is one of Microsoft’s MVP for SQL Server in the Philippines. You may reach him by sending an email to totogamboa@gmail.com

SQL Server Data Tools (SSDT) Lengthens Life of SQL Server 2005!

The arrival of SQL Server 2012 last week sparked a semblance of positivity among people who have been using version 2005, who held off on upgrading and skipped the two SQL Server 2008 releases. For a lot of folks like me, though, SQL Server 2012 gave a lot of boost and now lengthens the lifespan of 2005 databases. Not that 2005 is dying, but the tool lends more than a helping hand to a lot of 2005's inherent goodies, so much so that some might even consider sticking with the version a little bit longer. Enter SQL Server Data Tools (SSDT), formerly code-named "Juneau", the newest tool Microsoft has provided for its legion of SQL Server professionals.

Developing for SQL Server 2005

Though SQL Server 2012 has a lot to offer to anyone, SQL Server 2005 is very much a formidable data platform that would not go away anytime soon. My company still supports a lot of clients running SQL Server 2005 that might not be considering an upgrade anytime soon. And as a matter of strategy, we still very much believe that we have to pick the most common denominator among our target space. So here, just in time, SSDT gets to be initiated with a new project that would at least support SQL Server 2005.

Working Right Away

The last time I used SSDT was with SQL Server 2012 CTP3, for some presentation/demo work. I downloaded it earlier today, wasted no time, and had it installed smoothly. No hiccups. It gets me excited to play around with a new tool on something that won't just be laid aside and forgotten after some tinkering. This time, SSDT gets to do real work right away with a small database.

Here are some of the goodies SQL Server 2005 users would get from a production release of SQL Server Data Tools:

  • Stand-Alone SSDT (No SQL Server 2012, No Visual Studio 2010 needed). I was glad Microsoft had a stand-alone distro for this tool. The last time I tried it, it was blended with SQL Server 2012 CTP3. Now you don't need an installation of SQL Server 2012 or Visual Studio 2010 to use SSDT. Without the two, the SSDT web installer installs the necessary SQL Server 2012 and Visual Studio 2010 / SP1 components so it can run standalone. I installed it on a bare Windows 7 machine with no prior SQL Server or Visual Studio components and it went in with no hassles.
  • SSDT Supports SQL Server 2005 as Target Platform. As I have tried before, SSDT fully supports development on SQL Server 2005 (lowest version allowed). You get all the bells and whistles of SSDT even if your target deployment platform is only SQL Server 2005. SSDT tunes itself to support 2005 so you are assured that every object you have in your SSDT project is fully supported by 2005’s engine and syntax.
  • TSQL Code Analysis On Build. The most common coding malpractices are easily detected. There are around 14 coding rules that can be checked for violations. It is all up to you to weed out the problems as they appear as warnings (they can be set to appear as errors if you want to tidy up your TSQL code).
  • Catches TSQL Design/Compile Time Errors. For users who were content to use just SQL Server Management Studio, catching SQL compile-time errors isn't routine. Often, code errors are resolved only when executed against a database. For example, the following statement, "SELECT col1, col2 FROM table1 ORDER BY col1, col3", when parsed or included in a stored procedure, produces no error upon the creation of the stored procedure. With SSDT, such errors can be resolved at design/compile time.
  • Import SQL Server 2005 databases. This comes in very handy as I attempt to do some cleaning up of old 2005 databases and probably copy some old stuff for the new project I am doing. I was able to import 2005 databases into SSDT projects with the goal of running some code analysis. As expected, I was able to detect a lot of areas that aren't flagged by the old platform as potential causes of problems (e.g. unresolved references, data typing inconsistencies, possible points of data loss, etc.). I was also able to easily copy old structures and stored procedures into a new SSDT project I am working on.
  • Offline Development. Unless I need to get some data in and out of a database while developing my SQL Server objects, I can completely do my development without needing to connect to a development or production database.
  • It is FREE! Thank you Microsoft!

My initial feedback on SSDT is that it is a very helpful and very promising tool. For a first release, there are so many goodies that can alleviate tasks that used to be very tedious. In a few hours of tinkering, I was able to fashion out a database with ease.

There is a lot more to discover in SSDT. So far, only a few hours of playing time have been spent, and I feel my productivity has been tremendously boosted.

Indeed, SQL Server 2012 is now the flagship product of Microsoft when it comes to handling data … but with SSDT’s help, SQL Server 2005 is here to stay for a bit more.

For more information about SSDT, check out the links below:

 

The Philippine Eagle-Owl, the largest of all owls endemic to the Philippines. It stands almost 20 inches tall and preys on small animals like rodents and snakes.