Database Server Provisioning: How Many Disks Do I Need For RAID 10?

I have a new server box intended as a dedicated SQL Server 2012 server. With the new box, I am now faced with the question of what the optimal number of physical disks is for each RAID 10 array that I will set up. Having the answer would greatly help me decide how many partitions to create to support the separation of SQL Server's data, transaction log and tempdb.

To come up with an answer, I conducted some simple SQLIO tests to determine which array configuration is best suited for the read-query-intensive application I am about to set up. The tests should answer the following:

  • Is a 6-disk RAID 1+0 array better than a 4-disk RAID 1+0 array?
  • Is an 8-disk RAID 1+0 array better than a 6-disk RAID 1+0 array?

The Test

I carefully devised the following tests:

  • Run an SQLIO READ test using a 10GB file on a RAID 1+0 array with 4, 6, 8 and 10 disks.
  • Run an SQLIO WRITE test using the same configurations as the READ test.
  • Run each SQLIO test case 5 times and average the results.
  • Prior to each test, destroy the RAID configuration using the server's RAID utilities, rebuild it with full initialization, then create and format the partition.

The Results

The following are numbers produced by SQLIO:

READ Test

Disks   IOs/sec    MBs/sec   Min Latency (ms)   Ave Latency (ms)   Max Latency (ms)
4       1,323.70   10.34     0                  23                 544
6       1,580.82   12.35     0                  19                 562
8       1,817.43   14.19     0                  17                 498
10      1,878.32   14.37     0                  16                 495


WRITE Test

Disks   IOs/sec    MBs/sec   Min Latency (ms)   Ave Latency (ms)   Max Latency (ms)
4       715.82     5.59      0                  44                 152
8       1,197.28   9.35      0                  26                 199

My Conclusion

  • I conducted the READ test first, and its results influenced my decision to run the full WRITE test only on the 4- and 8-disk arrays.
  • The READ test suggests there is no significant performance difference between a 4-disk and a 6-disk array.
  • Going beyond 8 disks, performance would not improve much.
  • With only 14 physical disks to utilize, I ended up with 2 RAID 1+0 arrays: one with 4 disks and the other with 8 disks.

Additional Information

  • I only ran each test once on the 6- and 10-disk arrays, so I have not included those figures in the results.
  • I tested the 10-disk array configuration just to confirm whether there would be a significant performance gain.
  • The disks used were ordinary 250GB SATA drives.

Caution

The results presented here do not, in any way, represent a recommendation. They merely reflect my specific configuration. Results may differ on other server configurations, and I have no way of knowing how the same tests would turn out on different hardware. It is therefore highly recommended that you conduct your own testing when you get a new server.

**************************************************
Mountain Province, Philippines

**************************************************

Toto Gamboa is a consultant specializing in databases, Microsoft SQL Server and software development, operating in the Philippines. He is currently a member and one of the leaders of the Philippine SQL Server Users Group, a Professional Association for SQL Server (PASS) chapter, and is one of Microsoft's MVPs for SQL Server in the Philippines. You may reach him by sending an email to totogamboa@gmail.com


Database Server Provisioning: What’s Best Practice, What’s Real

Many months ago I was tasked to recommend an upgrade to the server hardware of a medium-sized, mission-critical, dedicated OLTP database server running SQL Server 2005. The client has gone through its regular 5-year maintenance/upgrade cycle and has added more applications that were not previously foreseen, putting added stress on the database server. The combined size of the databases is rather small, roughly around 50GB accumulated over almost a decade, around 15GB of which is highly active. Furthermore, the server caters to roughly 6,000 active users, about 10% of whom do complex, highly relational computation and aggregation. Most of the processing is complex querying over large data sets and is largely read-intensive.

Though the database server is holding up, it is now considered near end-of-life. Server utilization has increased to almost 60%-80% during regular working hours, and on occasional peaks, performance degrades noticeably.

The client's upgrading directive was for me to find the best possible configuration for the least amount of money. In the segment I am active in, it is almost impossible to get a good idea of what is best out there based on actual production setups and experience. With the absence of such information and with the lack of experience (I do NOT get to do upgrades on a monthly or regular basis; who does anyway?), the most logical thing to do was to check the best-practices articles and guides that are vastly available over the Internet.

What is Best Practice?

What's interesting about most of these articles and guides by Microsoft and various 3rd-party experts is that they are anchored on the premise of having an ideal environment under the best possible circumstances. Commonly, these best practices are presented not in terms of working around various limitations, but in terms of taking advantage of the best/top solution available out there. One really has to work his way through these articles to get a balanced solution that is acceptable to a certain level if his circumstances are far from ideal. But there is nothing wrong with that ... this is exactly why they are called 'best practices'. No one could possibly foresee all the variables and factors that can affect what your eventual setup will become.

What is Real?

Out there, however, reality bites. We all have to work within limits. We are given constraints like limited budgets, hardware vendors that carry only certain server models (some needed devices have to be imported from another country), and often we don't have the luxury of playing around with the real stuff until it has been paid for and delivered. And there are lots of other unique and strange variables that we have to hurdle.

It takes some heavy sleuthing to come up with options for the client without being extravagant or unreasonably idealistic, so that one of these options can be acceptable while still adhering to most, if not all, of the ideals that are staple in these best practices.

The Recommendations

So I came up with the following:

  • Recommendation #1: Upgrade from SQL Server 2005 to SQL Server 2012. Though SQL Server 2005 can provide for what the client requires today, the client's thirst for technological solutions now seems to be picking up pace; they seem bent on evolving their systems rapidly. This kind of upgrade needs to be justified, though; for one, SQL Server 2012 for a 12-core server doesn't come cheap. The justification here is that SQL Server 2005 SP2 already reached end of support in 2010, while SP3's life is on lease. Furthermore, the boatload of new features and performance enhancements in 2012 can surely justify the upgrade. Though the initial goal is simply to move to 2012, once that is done the direction is to take advantage of the bevy of new features available in 2012. FILETABLE and the new FULLTEXT SEARCH capabilities come to mind rather quickly. I'd be able to make a case for upgrading on these two features alone.
  • Recommendation #2: I decided to stick with Windows Server 2008 R2, mainly because the client is not ready to upgrade its Active Directory to 2012 across all servers. Two years ago we upgraded the OS/Active Directory from Windows 2003 and every server moved to 2008 R2. Running on 2008 R2 has been flawless for the past years, so I believe Windows Server 2012 is not necessary at this point; 2008 R2 is very much potent and very stable. I would probably recommend that they skip Windows Server 2012 entirely and just wait for a future release of the server OS.
  • Recommendation #3: A mid-level DAS server that can handle more than 8 drive bays without special kits. IBM allowed me to have a server with up to 24 bays. The old servers that were acquired only had 8 bays for DAS HDDs, and SANs or servers allowing more bays were then beyond this client's reach money-wise; that was until IBM introduced certain server models that are very much affordable. These new servers will allow me to configure more RAID arrays.
  • Recommendation #4: From a 2008-generation 2-socket x 4-core setup, the client is moving to a 2-socket, 6-core-per-socket server, a total of 12 cores of current-generation technology with faster clocks, hyper-threading and double the cache. The 8-core-per-socket option is beyond reach as it would also increase the spending on SQL Server 2012, which can now only be licensed on a per-core basis. Besides, there aren't many CPU issues with the existing setup. This new processing power should provide ample room for growth in the coming years.
  • Recommendation #5: From 16GB of RAM to 32GB. A fairly good portion of the database would fit into this space. As RAM gets cheaper in the coming years, we could probably take this server to 64GB and make RAM even less of an issue. Besides, the current database is still small, but doubling the RAM for now will help tremendously and should cover some unforeseen growth in the coming years. There is a big chance that newer apps will utilize this server. (A small configuration sketch follows this list.)
  • Recommendation #6: From the existing 6 x 7krpm SATA spindles on a single RAID controller (configured as RAID 1 and RAID 5), I recommended 14 x 7krpm SATA drives on 2 RAID controllers. This is where I did a lot of thinking, weighing the client's directive to spend the least amount of money against still achieving some of the ideals of what a best practice should be. After all, storage is the most critical and the slowest of all the system's components. Going for the considerably faster 15krpm SAS drives would have quadrupled the storage cost. My notion here is that, with Recommendation #3 in hand, I would still gain substantial disk I/O performance over the existing setup if I could spread at least three RAID 1+0 arrays across the 2 RAID controllers sharing the load. This way, I would be able to separate DATA, LOG and TEMPDB. I understand the best practice out there is to have at least 3 arrays so SQL Server's data, log and tempdb can each be placed on a separate array; that is what people are evangelizing, or so it seems.
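As a side note to Recommendation #5, the snippet below is a minimal, hypothetical sketch (the 28GB cap is an illustrative figure, not the client's actual setting) of how SQL Server's memory would typically be capped on the new 32GB box so the OS keeps some headroom:

    -- Illustrative values only: cap the buffer pool at roughly 28GB on a 32GB box,
    -- leaving the rest for the OS and other processes.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 28672;
    RECONFIGURE;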

What Came Out of My Recommendations

Now that the client has approved all my recommendations and the components for the upgrade have been delivered by various vendors, I would like to show what came out of it.

All recommendations turned out as expected, other than Recommendation #6, which proved interesting and is another example of best practice being tweaked down to an acceptable level when harsh reality bites. The 2-RAID-controller setup is not possible (so the vendor claimed): the first RAID controller (built-in) gets disabled by the second RAID controller (add-on). I was fed wrong information from the start by another vendor and only got to confirm that it would not work after the servers were delivered and configured. Unfortunately, the client ordered the server from a different vendor, so they cannot hold the vendor that gave the wrong information responsible. The client cannot afford to wait another 30 days for a new delivery, so it has to deal with a single-RAID-controller setup.

So why 14? Why not 24?

First, given an extra push, the client could have gone for 24. However, what concerned me at the time was that I could not be certain of the performance gain I would get from having the 24; well, not until the client had paid and the server was delivered. This is one of the realities I have to face: server hardware vendors around here aren't familiar with these realities and can't give you figures on how their machines would fare under the load you expect. So I recommended a safe setup. I came up with 14, thinking that if the performance gain over the existing setup came up short in one of the unforeseen rounds of expansion, I would still be able to tell the client to get faster 15krpm SAS drives to fill the remaining bays without decommissioning a few hard disks. And now, after playing with the actual machine, my caution seems to have hit the mark.

Upon testing with SQLIO, I seem to always get optimal performance with an array of 8 spindles. Increasing the spindles does not improve things much further. I am not sure if this holds true for a different server brand/specs/configuration. So I ended up with the following:

Array 1, RAID 1, 2xDisks, Single Partition (OS, MSSQL)
Array 2, RAID 1+0, 8xDisks, Single Partition (DATA)
Array 3, RAID 1+0, 4xDisks, Two Partitions (LOG @ Partition 1, TEMPDB @ Partition 2)

The above setup, far from being ideal, seems to have gained me more than double the I/O performance of the existing setup, just by running SQLIO. Testing further, being able to spread SQL Server's data, log and tempdb across arrays improved the system's performance way beyond what I expected. I'll probably write another blog post discussing at length how I arrived at this decision. Some SQLIO numbers would definitely shed light on this.
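To make the separation concrete, here is a minimal T-SQL sketch of how the arrays above would typically be used. The drive letters, database name and paths are hypothetical, not the actual production ones:

    -- Hypothetical mapping: D: = Array 2 (DATA), L: = Array 3, Partition 1 (LOG),
    -- T: = Array 3, Partition 2 (TEMPDB). Names and paths are illustrative only.
    CREATE DATABASE SampleDB
    ON PRIMARY ( NAME = SampleDB_data, FILENAME = 'D:\MSSQL\Data\SampleDB.mdf' )
    LOG ON     ( NAME = SampleDB_log,  FILENAME = 'L:\MSSQL\Log\SampleDB.ldf' );
    GO

    -- Move tempdb to its own partition; the change takes effect after the
    -- SQL Server service is restarted.
    ALTER DATABASE tempdb MODIFY FILE ( NAME = tempdev, FILENAME = 'T:\MSSQL\TempDB\tempdb.mdf' );
    ALTER DATABASE tempdb MODIFY FILE ( NAME = templog, FILENAME = 'T:\MSSQL\TempDB\templog.ldf' );
    GO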

In the end, I believe I was able to satisfy what was required of me without being unreasonably idealistic.

**************************************************

Bataan, Philippines

**************************************************
Toto Gamboa is a consultant specializing in databases, Microsoft SQL Server and software development, operating in the Philippines. He is currently a member and one of the leaders of the Philippine SQL Server Users Group, a Professional Association for SQL Server (PASS) chapter, and is one of Microsoft's MVPs for SQL Server in the Philippines. You may reach him by sending an email to totogamboa@gmail.com

Cleaning Up Old Database Dirt with SQL Server 2012

I am about to venture into transitioning a 10-year-old database that started with SQL Server 2000 and has surely accumulated some (a lot of?) 'dirt' since way back in 2001. In the past (and I am sure it still is the case with the 2012 edition), SQL Server has been very forgiving of all the sins of its handlers (DBAs, developers, users and pretenders). It has allowed a lot of minor issues to creep into the database without constantly complaining or compromising its ability to run in production. In a perfect world, those issues would have been eradicated a long time ago.

The 'dirt' has accumulated because SQL Server hasn't been pestering people that there is some cleaning up to do. In the real world, that is acceptable; we just can't stop the show after someone spills water on the stage floor. After 10 years, with better tools and ways to clean up dirt, it would be a worthwhile exercise so we can navigate more easily through the challenges of modern times.

Enter SQL Server 2012, which comes packed with efficient and reliable tools to accomplish this task of cleaning things up. To start the cleaning process, though, I need to see the mostly unseen 'dirt'. SSDT comes in handy for this job.

Using SQL Server Data Tools (SSDT) As Dirt Probe

Here is how to begin identifying what needs to be cleaned: import your existing database into an SSDT database project.

Then allow SSDT to scan your database project for 'dirt'. I am actually referring to 'errors' and 'warnings'!

After scanning, go to the Error List tab and VOILA!!! Now you see the 'dirt'.
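To give an idea of what such 'dirt' can look like, here is a purely hypothetical example (the object and column names are made up): an old view that still references a column dropped years ago. The live database happily keeps the view around, but an SSDT build reports the unresolved reference in the Error List.

    -- Hypothetical 'dirt': dbo.Members once had an OldStatusCode column that has
    -- since been dropped, but this view was never updated. Production tolerates
    -- the stale definition; SSDT flags OldStatusCode as an unresolved reference.
    CREATE VIEW dbo.vw_ActiveMembers
    AS
        SELECT MemberID, MemberName, OldStatusCode
        FROM dbo.Members
        WHERE IsActive = 1;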

Armed with a long ugly list, you can now hire a cleaner to scrub off each and every bit of dirt you have identified.

That is all for now for the cleaning up!

**************************************************
Toto Gamboa is a consultant specializing in databases, Microsoft SQL Server and software development, operating in the Philippines. He is currently a member and one of the leaders of the Philippine SQL Server Users Group, a Professional Association for SQL Server (PASS) chapter, and is one of Microsoft's MVPs for SQL Server in the Philippines. You may reach him by sending an email to totogamboa@gmail.com

SQL Server Data Tools (SSDT) Lengthens Life of SQL Server 2005!

The arrival of SQL Server 2012 last week sparked a semblance of positivity among people who have been using version 2005, held off on upgrading, and skipped the two SQL Server 2008 releases. For a lot of folks like me, though, SQL Server 2012 also gives a boost to 2005 databases and lengthens their lifespan. Not that 2005 is dying, but the tooling lends more than a helping hand to many of 2005's inherent goodies, enough that some might even consider sticking with the version a little bit longer. Enter SQL Server Data Tools (SSDT), formerly code-named "Juneau", the newest tool Microsoft has provided for its legion of SQL Server professionals.

Developing for SQL Server 2005

Though SQL Server 2012 has a lot to offer to anyone, SQL Server 2005 is very much a formidable data platform that will not go away anytime soon. My company still supports a lot of clients running SQL Server 2005 that might not be considering an upgrade anytime soon. And as a matter of strategy, we still very much believe that we have to pick the most common denominator among our target space. So here, just in time, SSDT gets initiated with a new project that has to support at least SQL Server 2005.

Working Right Away

The last time I used SSDT was in SQL Server 2012 CTP3, where I utilized it for some presentations/demos. I downloaded it earlier today, wasted no time, and had it installed smoothly. No hiccups. It gets me so excited to play around with a new tool on something that won't just be set aside and forgotten after some tinkering. This time, SSDT gets to do real work right away with a small database.

Here are some of the goodies SQL Server 2005 users would get from a production release of SQL Server Data Tools:

  • Stand-alone SSDT (no SQL Server 2012, no Visual Studio 2010 needed). I was glad Microsoft had a stand-alone distribution for this tool. The last time I tried it, it was bundled with SQL Server 2012 CTP3. Now you don't need an installation of SQL Server 2012 or Visual Studio 2010 to use SSDT. Without the two, the SSDT web installer installs the necessary SQL Server 2012 and Visual Studio 2010 SP1 components so it can run standalone. I installed it on a bare Windows 7 machine with no prior SQL Server or Visual Studio components and it went in with no hassles.
  • SSDT supports SQL Server 2005 as a target platform. As I have tried before, SSDT fully supports development for SQL Server 2005 (the lowest version allowed). You get all the bells and whistles of SSDT even if your target deployment platform is only SQL Server 2005. SSDT tunes itself to support 2005, so you are assured that every object in your SSDT project is fully supported by 2005's engine and syntax.
  • T-SQL code analysis on build. The most common coding malpractices are easily detected. There are around 14 coding rules that can be checked for violations. It is up to you to weed out the problems as they appear as warnings (they can be set to appear as errors if you want to tidy up your T-SQL code).
  • Catches T-SQL design/compile-time errors. For users who were content to use just SQL Server Management Studio, catching SQL compile-time errors isn't routine. Often, code errors are resolved only when executed against a database. For example, the statement "SELECT col1, col2 FROM table1 ORDER BY col1, col3", when parsed or included in a stored procedure, produces no error upon creation of the stored procedure. With SSDT, such errors can be resolved at design/compile time (see the sketch after this list).
  • Import SQL Server 2005 databases. This comes in very handy as I attempt to do some cleaning up of old 2005 databases and probably copy some old stuff for the new project I am doing. I was able to import 2005 databases into SSDT projects with the goal of running some code analysis. As expected, I was able to detect a lot of areas that aren't flagged by the old platform as potential causes of problems (e.g. unresolved references, data typing inconsistencies, possible points of data loss, etc.). I was also able to easily copy old structures and stored procedures into a new SSDT project I am working on.
  • Offline development. Unless I need to get some data in and out of a database while developing my SQL Server objects, I can do my development completely without needing to connect to a development or production database.
  • It is FREE! Thank you Microsoft!
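To illustrate the compile-time point above with a hypothetical example (the table and procedure names are made up): as long as dbo.table1 has not been created yet, SQL Server's deferred name resolution lets the procedure below be created without complaint, and the problem only surfaces at execution time. In an SSDT project targeting 2005, the same code fails the build with unresolved references before it ever reaches a database.

    -- Hypothetical: dbo.table1 does not exist yet. SSMS happily creates this
    -- procedure thanks to deferred name resolution; it only fails when executed.
    CREATE PROCEDURE dbo.usp_SampleReport
    AS
        SELECT col1, col2
        FROM dbo.table1
        ORDER BY col1, col3;
    GO
    -- In SSDT, building the project reports dbo.table1 and its columns as
    -- unresolved references at design/compile time instead.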

My initial feedback on SSDT is that it is a very helpful and very promising tool. For a first release, there are so many goodies that can alleviate tasks that used to be very tedious. In just a few hours of tinkering, I was able to fashion a database with ease.

There is a lot more to discover in SSDT. So far I have only spent a few hours of playing time, and I feel my productivity has been tremendously boosted.

Indeed, SQL Server 2012 is now the flagship product of Microsoft when it comes to handling data … but with SSDT’s help, SQL Server 2005 is here to stay for a bit more.

For more information about SSDT, check out the links below:

 

The Philippine Eagle-Owl. The largest of all owls endemic to the Philippines. It stands almost 20 inches tall and preys on small animals like rodents and snakes.

Some Major Changes to SQL Server Licensing in 2012 Version

Last month, Microsoft published changes to SQL Server's licensing model for when the 2012 version hits gold. A number of major changes will likely catch everyone's attention. Here are those that caught my eye:

  • Editions now are streamlined to just three (3) flavors:
    • Enterprise
    • Business Intelligence (new)
    • Standard
  • Enterprise Edition will now be licensed per CPU core instead of the usual per CPU Socket (processor).
  • Server + CAL Licensing will only be available in the Standard and Business Intelligence flavors.
  • Core licenses will be sold in Packs of 2 cores.
    • Cost of a per-core license will be 1/4 the price of SQL Server 2008 R2's current per-processor license
    • Cost of Server + CAL licensing has increased by more or less 25%
  • To license your server based on cores, Microsoft requires you to buy a minimum of 4 core licenses per CPU socket.

These changes will definitely impact your upgrade plans. To carefully plot your licensing strategy, you can visit the SQL Server 2012 Licensing Overview.

Where I See These Changes Have An Impact

  • Scenario 1: Upgrading from servers with dual cores or less. If you are upgrading a very old server with 2 single-core CPUs, you need to buy 4 core licenses for each CPU. That means you have to buy 8 core licenses. It would be rather logical, when doing this, to upgrade your hardware as well. Besides, your hardware upgrade seems to be long overdue.
  • Scenario 2: Upgrading from servers with more than 4 cores. In the past, if one needed more CPU power for SQL Server, one only had to buy a new server with as many cores per processor as money could get, reinstall SQL Server, and you were good to go ... all without minding licensing issues and cost. With SQL Server 2012, you will be spending more as you go beyond 4 cores per socket. Going to 6 cores, you need 2 more core licenses; going to 8 and 10 cores, you need to buy 4 and 6 additional core licenses respectively (see the sketch after this list).
  • Scenario 3: Upgrading from servers with Server + CAL licensing. There is a 25% increase in the cost of CALs.
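To get a rough feel for the arithmetic in the scenarios above, the snippet below is my own back-of-the-envelope sketch, not an official licensing calculator: count core licenses as the greater of 4 or the actual cores per socket, sum over sockets, and buy them in 2-core packs.

    -- Back-of-the-envelope sketch only; consult the official licensing guide for real figures.
    -- Example: a 2-socket, 6-cores-per-socket server (12 cores in total).
    DECLARE @sockets        int = 2;
    DECLARE @coresPerSocket int = 6;

    SELECT
        @sockets * CASE WHEN @coresPerSocket < 4 THEN 4 ELSE @coresPerSocket END                 AS CoreLicensesNeeded, -- 12
        CEILING(@sockets * CASE WHEN @coresPerSocket < 4 THEN 4 ELSE @coresPerSocket END / 2.0)  AS TwoCorePacks;       -- 6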

For those of you who are still on versions 2000 and 2005, are planning to upgrade, but find these 2012 changes unpalatable, you may want to rush an upgrade to SQL Server 2008 R2 instead and lock in that version's perks and licensing model before your resellers remove the older SKUs with the older licensing models from their sales portfolios and force you to buy SQL Server 2012.

Of course, SQL Server 2012 Express will still be FREE!

Juneau That You Are On Steroids With Denali?

Supporting multiple versions of SQL Server for a single application typically means headaches, and that is what I have been dealing with since I started making commercial applications (banking mostly on the benefits provided by T-SQL). We have not abstracted our databases with non-T-SQL objects in the mid-tier, for very valid and interesting reasons. In my current situation, I am dealing with SQL Server 2005, 2008 and 2008 R2, supporting bits of features from each version. Thank heavens all our clients have already upgraded from version 2000. My world though is still 80% SQL Server 2005, and it being still a very formidable version, I don't think I will be able to part ways with it anytime soon. Case in point: clients cannot just easily upgrade to higher versions every time one comes out.

As a result, I am mired in managing our SQL Server development assets using various tools, anything that can help track and manage everything. I have always wished for a tool that would address most, if not all, of my SQL Server development concerns. All these years I have relied heavily on SSMS, supported by various other tools. When Database Projects arrived with Visual Studio, it got me interested, but it tanked every time I loaded it up with what I have. The last attempt was with Visual Studio 2010. With the sheer number of objects that we have, capacity and performance were always the issue, and each time I had to fall back to SSMS without exploring those tools further.

When Juneau, now officially called SQL Server Developer Tools or SSDT, was announced as one of the touted tools in the next version of SQL Server, codenamed "Denali", I was pretty excited about the possibilities it would bring to my situation. After getting my hands on it in Denali CTP3, I can say it really got me excited. Though I haven't tried steroids, based on the effects I have seen, SSDT (sounds like drug testing, haha) gets my nod despite some issues I encountered. I am aware that what I got is a preview build and that Microsoft will eventually iron those out. The benefits far outweigh the quirks I have seen. It would be worth upgrading for SSDT alone.

So what is in SQL Server Developer Tools?

I have barely scraped the surface, but here is what I have found that will have a big impact on my situation:

  • It is a unified environment/tool for SQL Server, Business Intelligence and SSIS. Everything I need is provided within the SSDT shell. I don't have to switch back and forth between Visual Studio, SSMS and BIDS. Creation of new reports and SQL Server objects such as tables, views, stored procedures, filegroups, full-text catalogs and indexes ... all can be done within SSDT. Existing ones can be added too. When I had the chance to explore MySQL Workbench, I found it cool to have the objects I create scripted first before they are created physically. This capability is now available in SSDT.
  • "Intellisense support for T-SQL" for SQL Server 2005. SSDT has not actually upgraded SQL Server 2005 and given it T-SQL Intellisense; rather, T-SQL Intellisense is available within SSDT itself regardless of your targeted SQL Server platform. With SSDT, you model your database schema/objects without the tool actually being connected to a targeted live 2005 database. This way, you work on your T-SQL with Intellisense provided by SSDT, and you can publish your project to a SQL Server 2005 database by choosing 2005 as your target platform. Build/error checks are done prior to publishing the database project.
  • SQL Server platform targeting. SSDT supports versions 2005, 2008, 2008 R2, Denali and SQL Azure. You do your thing and just select your target platform when you are done. During the build, you will be alerted to platform violations.
  • Refactoring (rename, wildcard expansion, fully qualified naming). While in T-SQL writing mode, you can rename a field and SSDT does the rest for all affected objects (tables, stored procedures, etc.). Neat. No more 3rd-party stuff to achieve this. 🙂
  • Supports incremental project publishing. You can keep a development version of your database and upgrade your live version incrementally, either via a script or while connected to the database. It can even prompt you when your changes/upgrades would cause data loss. Very cool.
  • SSDT detects changes done outside of SSDT. It prompts you during publishing of a database project when it detects objects created outside of the tool. This is necessary so you know if somebody is messing with your databases. I do wish, though, that SSDT had the capability to reverse engineer these detected objects when they turn out to be legitimate, as sometimes you get to tweak the database directly using other tools like SSMS.
  • Importing an existing database schema (from 2005 to Denali) into a database development project. One need not start from an empty project; existing databases can be dealt with.
  • Scalability. I am glad SSDT successfully imported the more than 20,000 objects in our current database. My previous attempts using Visual Studio's Database Projects (both 2008 and 2010) just hung midstream after hours of importing all the objects. It took SSDT more than 20 minutes to import our database on a lowly Denali test machine.

This is all for now, folks! Regardless of some unwanted quirks I encountered when using SSDT, I can still say it is worth looking at. I am just waiting for a decent Denali machine so I can load my projects into SSDT and take it to the next level from there.

**************************************************
Toto Gamboa is a consultant specializing in databases, Microsoft SQL Server and software development, operating in the Philippines. He is currently a member and one of the leaders of the Philippine SQL Server Users Group (PHISSUG), a Professional Association for SQL Server (PASS) chapter, and is one of Microsoft's MVPs for SQL Server in the Philippines. You may reach him by sending an email to totogamboa@gmail.com