Wild Bird Photography in the Philippines – Part 3

This is, so far, a three-part series on what wild bird photography is to me. I will probably evolve this series over time to keep it as current as possible.
Wild Bird Photography in the Philippines – Part 1
Wild Bird Photography in the Philippines – Part 2

Previously, in Part 1, I discussed how I came to photograph birds and detailed what you need to get a good start. In Part 2, I gave some ideas on where one can usually find birds. In this article, I'll give an idea of how a bird photographer heads out into the wilderness to find and capture beautiful images of birds.

A Birdnut’s Sortie

Often, when bird photographers (or birdnuts) go out, the trip is simply referred to as a sortie. A sortie is basically a mission to go out and photograph birds. Sorties vary from a few hours of birding away from home to weeks-long missions. They can be done solo or in groups. Some are nearby; others require hours of travel, and a few even cross from one country to another. Most sorties are done on foot, but in some, birdnuts shoot from the comfort of their cars or boats. Some bird photography sorties involve an elaborate plan: preparing the things needed for the trip, plotting destinations, identifying the risks and dangers involved, and a lot more.


Reserva, Baler Sortie with Wild Bird Club of the Philippines

But How Do I Go About My Own Bird Photography Sorties?

I am a weekend warrior. Though there are times when I go out on a whim, I mostly do my sorties on weekends. I usually plan days ahead of the trip. I would typically go with a birding buddy or with a group for safety reasons, though I have gone out alone.

I usually prepare all the logistics and my gear a day before the trip. I'd check the vehicle's condition and make sure it is ready to go. I'd have all the batteries recharged and the GPS properly loaded with routes and waypoints. Then I'd go to the nearby grocery or convenience store for my supply of food and drinks. Here is the usual stuff that I buy and prepare:

  • bottled water (Absolute) and some flavored drinks (Gatorade’s Propel) for rehydration
  • some biscuits (Skyflakes or some other brand) just to fill my tummy when out in the field or when on a long drive
  • some sweets (chocolate bars) to give me some caloric/energy boost when things go tough in the field
  • first aid kit
  • clothing (hat, extra shirts)
  • birding gear

I'd make it a point to get enough sleep, but often I get so excited that I can't sleep properly in the hours before the trip. Typically, I would wake up early in the morning (e.g. 4:00am for Candaba, 3:00am for Subic, 5:30am if it is just nearby), take a quick shower, then off I go. I usually make a quick stop at the nearby 7-11 to buy some packed hot meals for breakfast (I love their liempo on plain rice) and some bags of ice to fill my cooler, before I hit the road again or pick up my birding buddy.

Once on site, it is usually breakfast time for me. 🙂 After that, it is mostly birding, birding, birding! Sometimes I don't get to rest; sometimes I don't get to eat in the field. What my day is like depends on how many and what kinds of birds I encounter. Then I go home!

My Most Memorable Birding Sorties

For more than a couple of years now, I have been to several birding places all over the country. My most memorable sortie was getting stranded in Subic in 2009 while Metro Manila was being flooded by the record-breaking storm Ondoy (Typhoon Ketsana). My birding buddy Dennis and I were clueless about what was to happen that day. The next thing we knew, Metro Manila was having its most catastrophic flooding while we were stuck somewhere in Subic, with no choice but to wait until the storm passed. Another memorable birding trip was when I went with fellow birdnuts to Mindoro. We had the chance to photograph the rare Scarlet-collared Flowerpecker (Dicaeum retrocinctum).


Mindoro Sortie with Bulbuleros 400

The Risks

  • The hardest part is going home. Often, you are too tired to move, let alone drive yourself home. When I am with a birding buddy, we usually take turns at the wheel to lessen the risk of dozing off on the drive home. If the sleepiness is unbearable, we stop and take a nap.
  • Insect bites. I remember a birding buddy of mine (Dennis) got hospitalized due to insect bites after we went to the forest of Subic.
  • Wild bee stings
  • Snake bites
  • Getting attacked by wild animals
  • Getting stranded
  • Encounters with some not-so-nice people (thieves, illegal loggers, etc). I guess the worst that can happen is getting kidnapped by known terrorist groups and having to fight for your life, as happened to a fellow bird photographer. I happened to cross paths and shoot birds with Ivan one weekend in La Mesa Ecopark. Such a great guy, by the way.
  • Weird accidents
  • Being flown off by a giant jungle woodpecker and be brought to its hole/nest on top of a 500 meter tree

The Lows

  • Going home empty handed
  • Going home with a ruined camera or a broken lens
  • Getting to the birding site only to find out you left your birding lens

The Highs

  • Getting stranded
  • Getting a rare photo lifer. In bird photography lingo, a lifer means you photographed a bird species for the first time
  • Getting a close up of a very beautiful bird
  • Getting a good photo of a target bird
  • Bumping into fellow birdnuts in the field
  • Lastly, getting attacked by a thousand sunbirds

Wild Bird Photography in the Philippines – Part 1
Wild Bird Photography in the Philippines – Part 2

Check out my album of Philippine birds!

Thoughts on Designing Databases for SQL Azure

The cloud has been the new darling of the computing world, and Microsoft has once again provided developers with a compelling platform on which to build their next-generation applications. Enter SQL Azure, a cloud-based edition of Microsoft's leading database product, Microsoft SQL Server. One would ask how to design an RDBMS database for the cloud, and specifically for the Azure platform. It is easy enough to create a simple database in SQL Azure, but database creation and database design are two different life forms. So we ask: how should databases be designed for the cloud and for SQL Azure? In this blog, I hope to provide an answer.

Scale Out

First and foremost, the cloud has become one of the primary solutions to consider when an application faces massive scalability issues. To put it another way, the cloud was both a catalyst for the emergence of extremely large applications and a result of the webification of so many things. Design considerations need to factor in previously unheard-of scale: bandwidth is global, disk space is in the petabytes, and users are in the millions. When you talk about building an application to handle this scale, you can no longer do the usual things any one of us is used to doing. One now needs to think massively in terms of design; no single piece of hardware can handle the kind of scale I just mentioned. So how does one design a simple database intended to scale to that degree?

The Problem

Suppose we have a great idea for the next big thing, and we name it FreeDiskSpace. FreeDiskSpace lets users upload massive amounts of files, where a file can be anything one can think of. The scale of this next big thing is thought to be in the range of 500 million users scattered around the globe, with each user allowed to upload an unlimited amount of files.

The Challenge

At such an order of magnitude, the many challenges we face will force us to question every bit of the knowledge and development capability we relied on when we were used to doing gigs with a few GBytes of database and a few thousand users. If each user uploads files in the range of 5GB, one would need to store 2,500 petabytes worth of files. How does one store 2,500 petabytes worth of files and manage the uploading/downloading of 500 million users?
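The arithmetic behind that figure can be checked quickly. This is just a back-of-the-envelope sketch using the article's assumed numbers (500 million users, roughly 5GB each):

```python
# Back-of-the-envelope storage estimate for FreeDiskSpace.
users = 500_000_000        # 500 million users
gb_per_user = 5            # assume ~5GB of uploads per user

total_gb = users * gb_per_user
total_pb = total_gb / 1_000_000  # 1 petabyte = 1,000,000 gigabytes (decimal units)

print(f"{total_pb:,.0f} petabytes")  # 2,500 petabytes
```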

In the cloud, you are less likely to see a bunch of supercomputers handling this kind of scale. Rather, you are more likely to see hundreds of thousands of commodity servers woven together into a cohesive unit. Making this mesh behave as if it were operating as a single unit requires design patterns more appropriate for such environments.

Database Sharding in SQL Azure

First, what is database sharding? To put it simply, database sharding is a design pattern that splits data into shards, or fragments, where each shard can run on its own separate instance on a separate server located somewhere. Each shard has the same structure as the others but contains a different subset of the data.
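As a minimal illustration of the idea (a sketch, not anything SQL Azure specific; the CRC32-modulo routing is an arbitrary illustrative choice), the dict entries below stand in for separate databases that share a structure but hold different rows:

```python
import zlib

# Minimal sharding sketch: every shard has the same table structure,
# but each holds a different subset of the rows. The three dict entries
# stand in for three separate databases on separate servers.
NUM_SHARDS = 3
shards = {i: [] for i in range(NUM_SHARDS)}

def shard_for(key: str) -> int:
    # Route a row to a shard by a stable hash of its shard key
    # (CRC32 here, purely for illustration).
    return zlib.crc32(key.encode()) % NUM_SHARDS

def insert(user: str, filename: str) -> None:
    shards[shard_for(user)].append({"user": user, "file": filename})

insert("john", "file1")
insert("allan", "file1")
insert("allan", "file2")

# All of a user's rows land in the same shard, so a per-user query
# needs to touch only one database.
print(shards[shard_for("allan")])
```

Because the routing function is deterministic, every row for a given key always lands in the same shard, which is what lets a per-tenant query avoid touching the other databases.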

The current version of SQL Azure offers no feature for automatic scaling. This will surely change in the near future, as everybody building applications that handle these massive loads will need an automatic way to cope with this amount of load and data. One approach guaranteed to be commonly used is database sharding. For now, however, sharding in SQL Azure isn't automatic; if one needs to shard, one needs to do it by hand. As I have said, Microsoft will eventually be compelled to give us an easy way to practice sharding. Hopefully, that future is near.

Now we would ask: why do we need to shard databases in SQL Azure? The cloud, and specifically SQL Azure with its limitations, forces us to use this design pattern. First, SQL Azure only allows a maximum database size of 50GB on a virtual instance running on an off-the-shelf commodity server, so we surely need a way to work around this constraint if we expect our data to grow to, say, 500 terabytes. Second, if we foresee a huge number of users hitting our application, serving them from a single piece of hardware would spawn concurrency nightmares that would choke our database to death.

Though there are other approaches to our scaling challenges, there are further motivating factors that make sharding a very compelling option. First, a database server will likely process a small database faster than a large one. Second, by utilizing multiple instances of computing power, we can spread the processing load across those instances so that more work can be executed at once.

How To Shard

Let us go back to our FreeDiskSpace killer app (assuming we provide each user up to 1GB of database space) and practice some very simple sharding to get a partial understanding of what sharding is all about. If we design the database on a single instance, the tables would look like this:

Server: ABC
Database: DB1
Table: userfiles
userfiles_pk user uploaded file country_code
1 john file1 grmy
2 john file2 grmy
3 allan file1 can
4 allan file2 can
5 allan file3 can
6 edu file1 grmy
7 roman file1 ity
8 roman file2 ity
9 jon file1 can
10 jon file2 can
11 jon file3 can
Server: ABC
Database: DB1
Table: country
country_code country
grmy germany
can canada
ity italy

If we are to shard, we need to decide how to split our database into logical, manageable fragments. We then decide what data will occupy each fragment (we can call the occupier our "tenant"). Based on the tables above, there are two fields we can potentially use as sharding keys: user and country. These fields should answer the question: how do we sensibly split up our database? Do we split it by user, or by country? Both actually qualify as a basis for splitting up our database, so let us investigate each.
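To see what each candidate key does to the sample data, here is a quick sketch that groups the userfiles rows above by either key. The rows mirror the sample table; the grouping helper is just illustrative:

```python
from collections import defaultdict

# The sample userfiles rows from the table above: (pk, user, file, country_code).
userfiles = [
    (1, "john", "file1", "grmy"), (2, "john", "file2", "grmy"),
    (3, "allan", "file1", "can"), (4, "allan", "file2", "can"),
    (5, "allan", "file3", "can"), (6, "edu", "file1", "grmy"),
    (7, "roman", "file1", "ity"), (8, "roman", "file2", "ity"),
    (9, "jon", "file1", "can"), (10, "jon", "file2", "can"),
    (11, "jon", "file3", "can"),
]

def split_by(rows, key_index):
    # Each distinct value of the chosen key becomes a tenant
    # occupying its own shard (its own database, conceptually).
    shards = defaultdict(list)
    for row in rows:
        shards[row[key_index]].append(row)
    return dict(shards)

by_user = split_by(userfiles, key_index=1)     # 5 shards: one per user
by_country = split_by(userfiles, key_index=3)  # 3 shards: grmy, can, ity

print(len(by_user), len(by_country))  # 5 3
```

Sharding by user produces many small shards, while sharding by country produces fewer, larger ones; the sections below walk through the consequences of each choice.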

Sharding by field: User

Server: ABC
Database: DB1
Table: userfiles
userfiles_pk user uploaded file country_code
1 john file1 grmy
2 john file2 grmy
Server: ABC
Database: DB1
Table: country
country_code country
grmy germany
can canada
ity italy
Server: CDE
Database: DB1
Table: userfiles
userfiles_pk user uploaded file country_code
3 allan file1 can
4 allan file2 can
5 allan file3 can
Server: CDE
Database: DB1
Table: country
country_code country
grmy germany
can canada
ity italy

By making each user a tenant in our database, the following statements would be true:

  • Each shard would contain only data related to a single user.
  • Various shards can be in the same server. Shards don’t need to be in separate servers.
  • It would render the country table a bit useless and awkward. But we can always decide to get rid of it and just maintain a column indicating a user's country of origin.
  • With a one-user-one-database structure, 1 million users mean 1 million separate databases. In SQL Azure's terms, assuming a 1GB database costs us $10 a month, sharding by user means we have to spend $10 million monthly. Now that is no joke!

Sharding by field: Country

Server: ABC
Database: DB1
Table: userfiles
userfiles_pk user uploaded file country_code
1 john file1 grmy
2 john file2 grmy
6 edu file1 grmy
Server: ABC
Database: DB1
Table: country
country_code country
grmy germany
can canada
ity italy
Server: CDE
Database: DB1
Table: userfiles
userfiles_pk user uploaded file country_code
3 allan file1 can
4 allan file2 can
5 allan file3 can
9 jon file1 can
10 jon file2 can
11 jon file3 can
Server: CDE
Database: DB1
Table: country
country_code country
grmy germany
can canada
ity italy

By making each country a tenant of its own database, the following statements would be true:

  • Each shard would contain only data related to a single country.
  • Various shards can be in the same server. Shards don’t need to be in separate servers.
  • It would render the country table unnecessary, as each database represents a country. But we can always decide to get rid of this table.
  • Answers.com says there are only 196 countries in the world. With a one-country-one-database structure, if we get the larger 10GB, $100-a-month SQL Azure databases, we only need to spend $19,600 a month.
  • This design decision is pretty useful until we run out of space in our 10GB databases to contain all the users of a country. There are solutions to this problem, which I will try to discuss in future blogs.

It is pretty clear that picking the sharding key has serious ramifications, so it is very important to determine which key is appropriate for our needs. Picking country is the most cost-effective in this scenario, while picking user would shield you much longer from the 10GB limit problem.
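The cost trade-off can be checked with the prices assumed above ($10 a month for a 1GB database, $100 a month for a 10GB database, 1 million users, 196 countries):

```python
# Monthly cost estimate for each shard-key choice, using the prices
# assumed earlier in the article.
users = 1_000_000
countries = 196

cost_by_user = users * 10          # one 1GB database ($10/month) per user
cost_by_country = countries * 100  # one 10GB database ($100/month) per country

print(f"shard by user:    ${cost_by_user:,}/month")     # $10,000,000/month
print(f"shard by country: ${cost_by_country:,}/month")  # $19,600/month
```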

This article presents only part of the concept of sharding, but it nevertheless gives a basic idea of how database sharding can be applied and how it translates when you choose SQL Azure as a platform.

Thoughts on Designing Databases for SQL Azure – Part 2

**************************************************
Toto Gamboa is a consultant specializing on databases, Microsoft SQL Server and software development operating in the Philippines. He is currently a member and one of the leaders of Philippine SQL Server Users Group, a Professional Association for SQL Server (PASS) chapter and is one of Microsoft’s MVP for SQL Server in the Philippines. You may reach him by sending an email to totogamboa@gmail.com

Toto Pictures : Surfin Urbiztundo

I shot these photos on November 29, 2008 during the La Union Grommet Championship 2008 in Urbiztundo, San Juan, La Union. I am not actually a sports photographer, but when I got to shoot these surfers, I found they make excellent subjects. I enjoyed shooting them all day long while I was there.

It has been some time now and I really want to photograph them again. Meanwhile, here are a few shots of young surfers aged 16 and below (groms) who participated in the tournament. Attached are the settings I used for these shots.


Nikon D300, Sigma 150-500mm, Ballhead + Tripod
Shot at 500mm, f7.1, 1/500 secs, ISO-200


Nikon D300, Sigma 150-500mm, Ballhead + Tripod
Shot at 500mm, f7.1, 1/500 secs, ISO-200


Nikon D300, Sigma 150-500mm, Ballhead + Tripod
Shot at 500mm, f8, 1/1250 secs, ISO-200

I am hoping to go back one of these days and shoot these surfers again. This time, as a Canon user 🙂

Going Down The Decision Making Lane: Azure vs Hosted vs On-premise

Cloud computing has brought new possibilities and challenges to ISVs like my company. It has also brought along a lot of seemingly difficult questions that have no direct answers. However, for anyone looking at the possibility of investing their future in new technologies, a lot of those questions need answers.

In this blog, I have come up with a list of concerns, cases, and questions that are usually essential in reaching the right decision. To make things clearer, I will use a real-world example based on my own circumstances to weigh competing platforms that are all worth exploring.

My Case

In my last cloud computing related blog (http://totogamboa.com/2010/12/11/my-first-shot-at-sharding-databases/), I mentioned one of my company's products that is being considered for a move beyond the usual realm of on-premise applications. In this blog, I continue to explore new possibilities for this application as we enter a new era of cloud computing.

The obvious question for our company now is where to go from here as far as our library system is concerned.

My Options

I have several platforms to choose from. All are begging to be explored further; all want to be leveraged, used, and abused. Since our library system is heavily glued to Microsoft technologies, my shortlist naturally covers the platforms where Microsoft technologies claim market and mindshare dominance.

  1. On-Premise. Microsoft boasts a powerful array of on-premise technologies that anyone can take advantage of and have their products comfortably sit on. I would be looking at a typical setup of Windows Server, Internet Information Services/ASP.NET, and SQL Server sitting in our client's datacenter.
  2. Hosted. Microsoft has also proven that their technologies can serve as hosts over the Internet, as shown by the large number of hosting providers that offer Microsoft technologies. Hosting providers typically offer the same set of technologies one can set up on-premise. A typical setup would be Windows Server, Internet Information Services/ASP.NET, and SQL Server hosted in the provider's datacenters.
  3. Azure. The relatively new kid on the block is the stack of Microsoft technologies designed mainly as services over the Internet. The stack is composed of Azure technologies, and the whole is touted as the platform of the future. The setup includes Windows Azure, a web role type supporting IIS/ASP.NET, and SQL Azure. All of these are managed by Microsoft and situated in their humongous datacenters scattered around the globe.

My Possibilities

Picking a choice isn't always easy, especially when it concerns the right software solution for a particular case. But I would like to attempt to see where I would go from here as far as our library system goes. With today's options, and the complexity that comes with each, sound technology decisions are harder to come by than before.

In this exercise, I prepared important points that need to be clarified to come up with the right, if not the best, solution. Do note that I am not generalizing here; I am focusing only on the technologies Microsoft provides, and I am working from answers based only on my circumstances and on published specifications pulled from the web.

Here is my personal list of concerns for a decision-making process that will shape the future of our library system.

Concern: Azure (as of Dec 2010) | Hosted | On-Premise

  • What hardware do I have to provide? front side only | front side only | front and end sides
  • Is redundancy possible? Yes | Yes | Yes
  • Is scaling up possible? Up to 8 CPUs (8 x 1.6GHz), 16GB RAM only | Yes, by special arrangement | Best money can buy
  • Is scaling out possible? Yes | Yes | Yes
  • Is scaling down possible? Yes | No | No (see note 1)
  • Is Full Text Search available? No | Yes | Yes
  • Is database backup & restore for SQL Server available? No, only possible via SSIS | Yes | Yes
  • Need to rewrite library system? Yes | No | No
  • How do I scale out? SQL Azure / sharding | SQL Server / sharding | SQL Server / sharding / horizontal partitioning
  • Administration/operation of infrastructure: Done by Microsoft | Done by provider | Done by client/us
  • Visual Studio support: Yes | Yes | Yes
  • ASP.NET Webforms or MVC support: Yes | Yes | Yes
  • SQL Server support: SQL Azure | SQL Server | SQL Server
  • SQL Server degree of capability: Subset of SQL Server only | Full feature | Full feature
  • Platform portability: If I write for Azure, it will run only on Azure, unless a small-scale on-premise Azure can be deployed in the future | If I write for a hosted environment, it can also run on-premise | If I write for on-premise, it can also run hosted

1 There is wastage of hardware resources if the system is not designed to scale down. The hardware stays when capacity is scaled down; it just remains underutilized.

Summing It Up

At this point in time, the best option for my company is to remain in hosted environments, as the table of concerns above shows. All of the features we need from SQL Server are available there. There are simply still missing pieces in SQL Azure that prevent us from fully adopting it. The company should also start distancing itself from on-premise implementations. This doesn't mean that the Azure and on-premise options are no good. Azure is shaping up to be a formidable platform of choice, especially for projects that are yet to be developed. I am quite sure full-text search capability will soon be supported in SQL Azure, and when that happens, I will go through this same exercise all over again.

SQL Azure

In the end, as a database person, it has always been a question of what the available RDBMSes on the various platforms can provide for me. For now, it is still the non-cloud version of SQL Server that seals the deal. SQL Azure isn't far behind; it is just a full-text search away! 🙂


My First Shot At Sharding Databases

Sometime in 2004, I was faced with the question of whether our library system's design, which I started way back in 1992 using Clipper on Novell NetWare and later ported to ASP/MSSQL, should be torn apart in favor of a more advanced, scalable and flexible design. The problem I encountered most often is that academic organizations can have varying structures. Public schools have regional structures where libraries are shared by various schools in the region. In some organizations, a school has sister schools with several campuses, each with one or more libraries, all managed by a single entity. In another setup, a single campus can host several schools, each having one or more libraries. These variations pose a lot of challenges in programming and deployment: a possible design nightmare. Each school or library would often emphasize its independence and uniqueness against other schools and libraries, for example wanting its own library policies and control over its collections, users and customers, and yet desire to share resources and interoperate with the others. Even within a campus, one library can operate on a different time schedule from another library just a hallway apart. All of that presented a lot of challenge in coming up with a sound database design.

The old design, from 1992 to 2004, was a single database where most tables referenced common tables called "libraries" and "organizations". That was an attempt to partition content by library or organization (a school). Scaling wasn't a concern at the time, as even the largest library in the country would probably not consume more than a few gigs of hard disk space. The challenge was that every query inside the system had to filter everything by library or school. As our library system's features grew in number and became more advanced and complex, it was apparent that the old design, though very simple when contained in a single database, would soon burst into a problem. Coincidentally, I had been contemplating advancing the product's feature set. Flexibility was my number one motivator; second was the possibility of doing it all over the web.

Then came the ultimate question: should I retain the design and improve on it, or should I be more daring and ambitious? I scoured the Internet for guidance on sound design, and after a thorough assessment of current and possible future challenges, including scaling, I decided to break things apart and abandon the single-database mindset. The old design went into my garbage bin. Consequently, that was the beginning of my love of sharding databases to address issues of library organization, manageability and control, and to some extent, scalability.

The immediate question was how I was going to do the sharding. Picking the old schema out of the garbage bin, it was pretty obvious that breaking it apart by library was the most logical approach. I hadn't heard of the concept of a "tenant" then, but I didn't have to, as the logic behind choosing it is as ancient as can be. There were other potential candidate keys to shard the database on, like "school" or "organization", but the most logical was the "library". It is the only entity that can stand drubbing and scrubbing. I went on to design our library system with each database containing only one tenant: the library. As of this writing, our library system runs in various configurations: one school has several libraries inside its campus; another has several campuses scattered all over Metro Manila, some with one or more libraries, but everything sits on a VPN accessing a single server.

Our design, though, is yet to become fully sharded at all levels, as another system acts as a common control for all the databases. This violates the concept of a truly sharded design, where there should be no shared entity among shards. A few tweaks here and there would make it fully compliant. Our current design is 100% sharded at the library level.

So Why Sharding?

The advent of computing in the cloud presents new opportunities, especially for ISVs. With things like Azure, we are forced to rethink our design patterns. The most challenging part, perhaps, is designing not only for scalability but to make our applications tenant-aware and tenant-ready. This challenge is not unique to the cloud; a lot of on-premise applications can be designed this way, which could help in everyone's eventual transition to the cloud. But cloud or not, we benefit a lot from sharding. In our case, we can support pretty much any configuration out there. We also get to mimic the real-world operation of libraries. And it eases up a lot of things like security and control.

Developing Sharded Applications

Aside from the databases, the application needs to be fully aware that it is no longer accessing a single database where it can easily query everything without minding that other data exists somewhere, possibly in a different database sitting on another server. Though the application becomes a bit more complex in design, it is often easier to comprehend and develop when an app instance minds only a single tenant, as opposed to an app instance filtering out other tenants just to get the information of one.
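One common way to keep an application tenant-aware is a shard map that resolves a tenant to its own connection string, so the rest of the code only ever sees that one tenant's database. A minimal sketch follows; the tenant names and connection strings are hypothetical, not our actual configuration:

```python
# Hypothetical shard map for a tenant-aware app: each tenant (a library)
# resolves to the connection string of its own database.
SHARD_MAP = {
    "main_library": "Server=srvA;Database=LibMain;",
    "annex_library": "Server=srvB;Database=LibAnnex;",
}

class TenantContext:
    """Binds one app instance (or request) to exactly one tenant's database."""

    def __init__(self, tenant_id: str):
        if tenant_id not in SHARD_MAP:
            raise KeyError(f"unknown tenant: {tenant_id}")
        self.tenant_id = tenant_id
        self.connection_string = SHARD_MAP[tenant_id]

    def run(self, sql: str) -> str:
        # A real app would open a connection here and execute the statement;
        # this stub just shows that every query is scoped to one shard.
        return f"[{self.connection_string}] {sql}"

ctx = TenantContext("main_library")
print(ctx.run("SELECT COUNT(*) FROM books"))
```

With this shape, queries never need a "filter by library" clause: the context already guarantees that only one tenant's data is reachable.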

Our Library System

Currently, our library system runs in sharded mode both on-premise and in cloud-like hosted environments. You might want to try its online search:

Looking at SQL Azure

Sharding isn't automatic on any Microsoft SQL Server platform, including SQL Azure. One needs to do it by hand, from the ground up. This might change in the future; I am quite sure Microsoft will see this as a compelling feature. SQL Azure is the only Azure-based product that currently has no natural, inherent support for scaling out. If I were Microsoft, I would offer a full SQL Server service alongside SQL Azure, like shared Windows hosting sites do, to ease adoption. Our system's database design readiness (being already sharded) would allow us to embrace such a service easily. I understand it could affect, possibly dampen, their SQL Azure efforts if they did so, but I would still consider it rather than offer a very anemic product.

As of now, though we may have wanted to take our library system to Azure with a few minor tweaks, we just can't in this version of SQL Azure, for the reasons stated below:

  • SQL Server Full Text Search. SQL Azure does not support this in its current version.
  • Database Backup and Restore. SQL Azure does not support this in its current version.
  • Reporting Services. SQL Azure does not support this in its current version.
  • $109.95 a month on Azure/SQL Azure versus $10 a month shared host with a full-featured IIS7/SQL Server 2008 DB

My Design Paid Off

Now I am quite happy that the potential of a multi-tenant, sharded database design, though ancient, is beginning to get attention with the advent of cloud computing. My 2004 database design is definitely laying the groundwork for cloud computing adoption. Meanwhile, I have to look for solutions to address what's lacking in SQL Azure. There could be some easy workarounds.

I'll find time to go into the technical aspects of sharding databases in future blogs. I am also thinking that PHISSUG should hold one of these sharding tech-sessions.


Speed Matters : Subquery vs Table Variable vs Temporary Table

This has perhaps been written about quite a number of times by TSQL gurus out there, but earlier I was presented with TSQL code that prompted me to recheck my notions on subqueries, table variables and temporary tables. The code presented to me is built dynamically and depends heavily on user inputs. The issue presented was performance: the code used to run acceptably fast until recently, when things became annoyingly slow.

Upon seeing the code, I said to myself: problem solved. I thought I had instantly spotted what was causing the performance issue. The code was written using TSQL's temporary tables. Based on experience, I would automatically shoot down any attempt to use temporary tables in a TSQL batch. And though Microsoft has introduced table variables as a better alternative to temporary tables, better in terms of technicality and programmability, delving into either option would always be my last resort. In most cases, I'd rather tackle difficult scenarios requiring temporal data using subqueries. In fact, I immediately called for a rewrite to minimize the use of temporary tables, or at least replace them with table variables, which were supposed to be a lot better. Upon further investigation, though, I called for the rewrite for other reasons not related to the topic at hand. Meanwhile, I tried to figure out how things could be converted to subqueries, which I would primarily prefer.

In my investigation though, something interesting came up, and I thought I'd better time the executions in my experiment while I made templates of how things are done using subqueries. The results now intrigue me a bit, and I will probably take a second look at the code I was shown earlier; I might be looking in the wrong place. Nevertheless, the results of my experiment might interest others out there. So here it is:

The Experiment 

I always love to use one of our company's products, the library system, to test my experiments with anything to do with T-SQL, as its database is diverse in design and test data is available in great abundance. I picked one small sample database for my experiment, and this is how it went:

  • Test 1 is intended to check how the different approaches fare on small sets of data. I had to query all library materials written in ‘Filipino’ (974 rows out of the 895701 rows in a single table), then search the resulting materials’ titles for the string pattern “bala” using the LIKE operator, which eventually results in 7 rows.
  • Test 2 is basically the same query against the same environment with different query conditions, to see how the three approaches fare on large sets of data. This time I queried all library materials written in ‘English’ (43896 rows out of the 895701 rows), then searched the titles for the string pattern “life” using the LIKE operator, which eventually results in 440 rows.

I wrote three queries, each using a specific approach to produce the desired output of this experiment.

First query using SUB QUERIES:
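(The query itself didn't survive in this copy of the post; below is a minimal sketch of what a subquery-based version may have looked like, using hypothetical table and column names, not the actual library schema.)

```sql
-- Filter the temporal data inline with a derived table (no intermediate storage).
-- Table and column names (Materials, Language, Title) are illustrative only.
SELECT f.MaterialID, f.Title
FROM (
    SELECT MaterialID, Title
    FROM Materials
    WHERE Language = 'Filipino'   -- 974 of 895701 rows
) AS f
WHERE f.Title LIKE '%bala%';      -- final result: 7 rows
```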

Second query uses TABLE VARIABLES:
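(Again the original code is missing here; a sketch of how the same query may have looked using a table variable, with the same hypothetical schema as above.)

```sql
-- Stage the temporal data in a table variable, then query it.
-- Schema is illustrative, not the actual library database.
DECLARE @Filtered TABLE (MaterialID INT, Title NVARCHAR(400));

INSERT INTO @Filtered (MaterialID, Title)
SELECT MaterialID, Title
FROM Materials
WHERE Language = 'Filipino';

SELECT MaterialID, Title
FROM @Filtered
WHERE Title LIKE '%bala%';
```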

The third and last query uses TEMPORARY TABLES:
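(As with the other two, the original listing is gone; a sketch of the temporary-table variant, same hypothetical schema.)

```sql
-- Stage the temporal data in a temporary table in tempdb, then query it.
-- Schema is illustrative, not the actual library database.
CREATE TABLE #Filtered (MaterialID INT, Title NVARCHAR(400));

INSERT INTO #Filtered (MaterialID, Title)
SELECT MaterialID, Title
FROM Materials
WHERE Language = 'Filipino';

SELECT MaterialID, Title
FROM #Filtered
WHERE Title LIKE '%bala%';

DROP TABLE #Filtered;
```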

I ran all three queries (3x each) after a service restart to clean up tempdb, and I got the following results. Figures are presented in number of seconds:

          Total Rows    Temporal Data Count   Final Result   Sub Queries   Table Variables   Temporary Tables
  Test 1  895701 rows   974 rows              7 rows         2 secs        4 secs            2 secs
  Test 2  895701 rows   43896 rows            440 rows       3 secs        23 secs           4 secs

The results proved pretty interesting. Based on the queries I wrote, I had thought temporary tables would be the slowest of the three approaches. The execution plans for table variables and temporary tables are pretty much the same. I was a bit surprised that the execution time of the table variable is almost twice that of a temporary table on small sets of data, and that temporary tables trump table variables by a large margin on large sets of data. I was expecting only a slight difference since, though the two differ a bit in scoping, technicality and, to some extent, purpose, both use tempdb when handling data (contrary to the notion that table variables are purely in-memory). What is notable though is that disk I/O is costlier with table variables. I am not sure at this point what causes this cost. I will try to dig deeper when I have more time.

Nevertheless, this particular experiment does not support a general conclusion that any one of the three approaches is better than the others. I still subscribe to the general notions:

  • That subqueries are neat and would perform better in most, if not all, cases
  • That temporary tables would be slower in environments with heavy concurrency, as SQL Server handles more locks, especially when they are used within a transaction. I have not tested that scenario here though.
  • That temporary tables may be the faster option if constraints/indexes are needed between queries, especially when a large set of data is expected: statistics aren’t created on table variable columns, and table variables can’t have indexes. Surprisingly though, this experiment shows table variables performing the worst of the three, slowest at handling a large data set.
  • That table variables are a good option technically/programmatically

You be the judge. It would be prudent to conduct your own tests. I can’t say my experiment here is conclusive, but from here on, I will avoid using table variables whenever I can. Even for very simple queries like those in the tests, they performed poorly. I can’t imagine how things would be with larger datasets and more complex queries.

I am on to my next experiment!


Photography : Getting Started

Some of us have already gotten our DSLRs this early from Santa and are probably wondering how all the switches, buttons, levers and settings work to get that perfect photo. Now that we want to be in control of our cameras, I have come up with a simple guide to the stuff that matters in capturing a photo. Whether you do it yourself or have the camera’s computer do it for you, it is essential to know some fundamentals of photography. Simply put, all we need to know for now are the basic elements that matter most in coming up with a photo.

Let’s get started!

Exposure is the process of controlling the amount of light that goes through our lens to our DSLR’s sensor. With too much light, we get an over-exposed photo: typically a photo that is just too bright, or in other cases one that comes out washed-out white. On the other hand, with too little light, we get an under-exposed photo, which usually ends up dark to almost black. With the right amount of light, we get photos just as we see the scene in our viewfinder or LCD screen.

Controlling the Light 

We might now be wondering: how are we going to control the amount of light that goes through the lens to the camera’s sensor? We do this by controlling two important elements:

  • Aperture. The aperture is basically a hole or opening in our lens that light passes through. When we talk about these holes, we usually talk about how big or small they are, and by design, the size of the hole in our lens can be varied as we desire. In lens jargon, these holes are measured as shown in the following:

    Aperture   Size of hole
    f3.5       large
    f4         medium
    f5.6       small
    f10        smaller

We might be wondering why the larger the number associated with “f”, the smaller the size of the hole. (The f-number is actually a ratio: the lens’ focal length divided by the diameter of the opening, which is why a bigger number means a smaller hole.) It is a confusing measurement convention, but that is how things are in photography and we just have to accept it. Humans always have a knack for confusing others.

  • Shutter / Shutter Speed. A shutter is a simple mechanism that opens to allow light coming through the lens to reach the sensor, and closes to block it. When our camera is about to take a photo, the shutter starts out closed, opens up to let light in, then closes again. The time the shutter stays open can be long or short and is measured in terms of speed, so we have a factor known as shutter speed.

    Shutter Speed (in seconds)      Speed
    1/200s or a 200th of a second   faster
    1/60s or a 60th of a second     fast
    1/30s or a 30th of a second     slow
    1/5s or a 5th of a second       slower

By knowing how to control these two elements, (1) the aperture and (2) the shutter speed, we now have the means to control the amount of light that goes through the lens to the camera’s sensor. We just have to remember the following:

  • The larger the hole, the more light will come in.
  • And the longer the shutter stays open before it closes, the more light will come in.
  • If the scene we are photographing is dark, we either set our aperture to its largest setting so more light comes in, or slow down our shutter speed so the shutter stays open longer and allows more light in. We can also do both.
  • If the scene we are photographing is too bright, we can do the opposite.

By correctly mixing the two, we will achieve correct exposure. 
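The trade-off between these two controls can even be put into numbers. As a quick illustration (a little Python sketch of my own, not from any camera maker’s software), the standard exposure value formula EV = log2(N²/t) shows that opening the aperture one stop while halving the shutter time leaves the exposure unchanged:

```python
import math

# A back-of-the-envelope sketch of the standard exposure value formula:
# EV = log2(N^2 / t), where N is the f-number and t is the shutter time
# in seconds. Higher EV means less light reaches the sensor.
def exposure_value(f_number: float, shutter_seconds: float) -> float:
    return math.log2(f_number ** 2 / shutter_seconds)

# One stop wider aperture (f-number divided by sqrt(2)) doubles the light;
# halving the shutter time halves it, so the exposure stays the same.
ev_a = exposure_value(4.0, 1 / 60)                  # f4 at 1/60s
ev_b = exposure_value(4.0 / math.sqrt(2), 1 / 120)  # ~f2.8 at 1/120s
print(ev_a, ev_b)
```

Run it and both values come out equal, which is exactly the trade-off described above.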

To do this ourselves on our DSLR, we set the exposure control to Manual and try adjusting the aperture and shutter speed settings while shooting, until we become familiar with the right settings for a given intensity of light on a scene. Of course, if this feels like too much work, you can let your camera’s computer figure out the right aperture and shutter speed for you, using Program or Auto mode. There are other means to help us get correct exposure, but for now this is probably all we need to know about how our camera works at the core.

Hope this helps. More to come!

Happy shooting!

Rationalizing Cloud Computing

For the past couple of years, people have been hot on the cloud computing bandwagon. And as with the birth of any new trend, people tend to fall for misconceptions that eventually get them burned. Some just plainly embrace something without bothering to find out what is in it for them.

As a co-owner of a very small ISV trying to grapple with trends in computing while making things work for clients and stakeholders, I always submit to the pressure of looking into what a new trend can offer, cloud computing included. During these times, I have looked closely at what cloud computing brings to the table and how my company can take advantage of it. I have examined Azure and non-Azure offerings and attempted to assemble a design for our company’s products that can take advantage of the good stuff in the cloud.

Though my company is definitely a cloud computing enthusiast, with some efforts actually underway to take advantage of the cloud, as far as I am concerned it won’t be for everybody. Recently, though, there has been a rise in people investigating their possibilities with the cloud. This may be due to Microsoft betting the farm on Azure, or perhaps there are just too many misconceptions peddled here and there. In most cases, I see some wrong assumption tossed into every cloud discussion I have had, and clients too ask me about the possibility of being in the cloud based on wrong assumptions fed to them.

But before I get into those misconceptions, I would highlight the traits that made me and my company embrace cloud computing:

  • Cloud is the great equalizer. For a very small ISV like my company, the cloud levels the playing field against the big guys, at least by removing the need to spend upfront on system infrastructure to offer something to my market. For under $100 a month, a small ISV can start big.
  • Reach. My company would be able to increase reach without much overhead.
  • Scalability. Because I would be able to scale as the need arises, cloud computing is definitely for companies like mine.
  • Administration. For a very small ISV, system administration tasks such as deployment, support and maintenance take too much of a toll on our internal resources. An easily accessible, uniform environment like the cloud would allow my company to increase the number of clients we attend to without adding too much stress on those resources.

There are other niceties that made me embrace cloud computing, but the items above are the major factors that I believe can only be achieved through the cloud.

As convinced as I am that cloud computing is one area my company should invest in, I am also convinced that it is definitely not for everybody, and definitely not for every situation out there, at least in its current incarnation. I’ll list the 5 misconceptions I most often encounter. My rationalization of these misconceptions is based on what I have learned from the efforts my company is making in the cloud, mostly on Azure as well as a few non-Azure platforms.

Top 5 Misconceptions I Often Encounter

  1. When I move my existing application from on-premise to the cloud, my app will become scalable. Well, NOT without rewriting your app to scale out and be cloud-ready. In the cloud, scaling is not all automatic. If you have an application that is not designed to scale out, it will not scale out in the cloud. To scale out, you have to know how to partition your storage across multiple storage instances, and instances can be spread geographically across continents. One has to start from scale-out design patterns, probably from scratch. You are lucky if your current application is already designed to scale out.
  2. I can easily move my current on-premise application to the cloud. Contrary to the peddled notion that you can easily move a current on-premise (legacy) application to the cloud, it really depends on what you expect to gain from the cloud. Azure, for example, supports moving legacy applications to the cloud by hosting them in virtual machines (e.g. hosting a WinForms-based app). But this approach, for me, is inefficient, unless all you want is to get rid of system administration issues while ignoring other implications like performance, latency and security. You would have to figure out new issues as simple as backing up your data in the cloud. And if your legacy app relies on features available in on-premise environments, you might encounter problems running it in the cloud, as current-generation clouds do not support all of the features that are staples on-premise.
  3. I have been designing on-premise apps for years; that is enough skill to get me to the cloud. It is, if you were designing on-premise apps using technologies and patterns similar to those available in the cloud. If not, you have to look at the cloud differently. If you want to write apps for the cloud, you have to think cloud. Scaling is probably the one issue that will force you to think cloud. If we are used to designing our apps with a single-instance mindset, it is about time to think differently and think multiple. Of course, no one is stopping us from writing apps for the cloud that way, but that is not what real cloud apps are supposed to be; a single-instance design won’t scale even if you run it over a current-generation cloud. In SQL Azure, we have a 10GB database size limit. If we exceed that ceiling, what would our app do? This seemingly easy-to-answer question could probably discourage anyone from embracing the cloud.
  4. I just have to design apps like I used to and deploy them in the cloud, and the technology behind it takes care of the rest. This can’t be. If you were into scale-out designs by now, chances are you wouldn’t think of just writing an app and deploying it over the cloud; chances are you have deployed a cloud app already, or you didn’t realize that your design approaches are very much like those implemented over the cloud. For most, I doubt that is the case. Most on-premise apps are designed not to scale out but to scale up. For example, most SQL Server implementations just take advantage of new hardware to scale as load is added. Intrinsically, Microsoft SQL Server is not designed to automatically load-balance, for example, a single query statement. One has to learn how to partition tables into a single view, with the underlying partitions spread across a federation of servers, so that SQL Server load-balances a query; this is not supported in SQL Azure though. And even if it were supported in SQL Azure in the future, this is not the scaling wonder the cloud provides, as your application still sees only a single instance of a view. With today’s multi-million users accessing just one web application at a single point in time, you can only scale up so much. With cloud computing, your scaling possibilities are like crazy: you have no limits in terms of servers, CPUs or storage. Having resources like this should force you to rethink the way you design your applications. The cloud isn’t about the technologies behind it; for me, it is one big design consideration. It is about designing your applications to take advantage of the unlimited resources found in the cloud.
  5. Any app can or should be placed in the cloud. No one will stop you from placing anything in the cloud, regardless of how it ends up. However, some applications are better off out of the cloud. If you have an app with one massive synchronous process that you think has no way of being broken down into several pieces, you might as well stick to scaling up on-premise (unless your cloud offers hybrid services, like allowing you to scale up on a very powerful server). For apps requiring heavy information security, if you aren’t comfortable with the notion that someone from your cloud provider might get at your highly secured data, you might as well keep things on-premise. There could be a lot of other applications that are simply not practical to place in the cloud.
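
To make misconception #1 a little more concrete, here is a minimal sketch (in Python, with made-up shard names; nothing here is an actual cloud API) of the kind of hash-based storage partitioning a scale-out design requires. Nothing like this happens automatically just because an app is moved to the cloud; the application itself has to decide which partition each piece of data lives on:

```python
import hashlib

# A minimal sketch of hash-based partitioning ("sharding"). The shard
# names below are made up for illustration, not real storage endpoints.
SHARDS = ["storage-us", "storage-eu", "storage-asia"]

def shard_for(key: str) -> str:
    # A stable hash guarantees the same key always maps to the same shard,
    # no matter which application instance computes it.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("member-0042"))  # always the same shard for this key
```

An app designed with a single-instance mindset has no notion of `shard_for` anywhere in it, which is precisely why it won’t scale out just by being redeployed.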

It is very important to understand the cloud clearly before getting excited. While I am embracing it, I am welcoming it with a cautious attitude. Being relatively new, it carries just too many considerations to think of, and too many questions that don’t yet have clear answers.


Wild Bird Photography in the Philippines – Part 2

This is so far a 3 part series of what wild bird photography is to me. I would probably evolve this series over time to make it current as much as possible.
Wild Bird Photography in the Philippines – Part 1
Wild Bird Photography in the Philippines – Part 3

Previously, on Wild Bird Photography in the Philippines – Part I, I discussed how I got into it and the things one needs to have to start photographing wild birds.

In this article, I’ll list down where you would often find birds. And you guessed it right, you have probably been thinking that it is always in a zoo. 🙂 You may have noticed, though, that I prefixed bird photography with the word ‘wild’ in my previous article, and this is because bird photography can also include photographing birds in captivity. It may be cute to photograph birds inside a cage, but wild bird photography is for the real bird photographers.

WHERE THE BIRDS ARE

The question I am most often asked next about this interest is where I get to photograph these birds. In the Philippines, there are over 600 species of birds that you can shoot, and the number has been increasing as new sightings are recorded of species that don’t usually range in the country. Some are commonly seen, some seldom seen, and some have never been photographed. With the country’s 7100+ islands, one can imagine how dispersed our avian friends are across this archipelago. Some birds can only be found on certain islands. For example, the Tiger Shrike (Lanius tigrinus), which occurs in some East Asian countries, has been recorded in the Philippines only once, in Jolo, Sulu, sometime in 1887, according to A Guide to the Birds of the Philippines. So if one is really bent on taking chances and going on a Tiger Shrike expedition, one should go to Jolo, Sulu.

So far, I have been to only a few places to do some serious bird photography. The farthest north I have gone was Pagudpud, Ilocos Norte, and the farthest south was Negros Occidental. I am hoping to visit more birding sites as I progress.

So where do I find birds?

There are lots of common and unusual places where birds can be found; you just have to know and find the reasons why they are there. First, there are different types of birds, and each type has its specific habitats. For example, there are what we call shorebirds, which are usually found along our shorelines. Second, we have to know why birds get attracted to certain places. It could be that there are lots of food and water sources in the area, or they may feel secure in one particular area.

On the other hand, knowing when birds aren’t in an area helps a lot too. If the area doesn’t have any fruit trees, you would know there is less chance of seeing a fruit-eating bird. If the area allows rampant hunting, some birds will probably leave once they feel threatened. In most cases, though, they just get shot, so you won’t find birds in there.

So, back to our question, where do we find birds?  I’ll try to list down the usual places where one can find birds.

  1. Where There Are Trees/Vegetation. It is almost guaranteed that where there are significant clusters of trees and vegetation, there could be birds. Trees can be found almost everywhere, but seeing them in significant clusters is becoming an issue nowadays due to unnecessary tree-cutting. Forests still represent an ecosystem where trees abound, and one can go to various types of forests to photograph birds. There are lowland forests, forests found at higher altitudes, and forests that dot our coastlines. Some are natural and some are man-made. All these types of forests provide ecosystems where different kinds of birds can thrive.
  2. Near Bodies of Water. Another potential ground for finding birds is the area near bodies of water such as streams, rivers, lakes, shores, etc. Like any other living thing on this planet, birds need water. Even small potholes of water offer birds some comfort. The likelihood of seeing a good number of bird species increases when these bodies of water are near trees and vegetation, or food sources.
  3. Near Food Sources. Another good site where birds congregate is anywhere there is enough food for them to thrive. Some birds feed on nectar, so where there are flowers, they are there too. Some birds eat insects, so where there are lots of insects, birds could be there. Some birds love fish, and one knows where to find these marine beings. You just have to know a bird’s diet and find the places where they can fatten their bellies.
  4. Where They Can Build Their Nests. Now this is a more difficult place to find, as most birds hide their nests from view. Some build their nests on rocks, some on the ground, some in the sand, some on the branch of a tree, and some even build on man-made structures such as tall buildings. Some birds don’t even build their nests in the country. 🙂 One needs to study birds more to be able to locate their nests.
  5. Where Humans Are. Some birds have adapted well to people. And where people go, they go there too. These birds usually scavenge human leftovers and wastes.

One can find birds virtually everywhere, but one needs to know certain bird characteristics to be effective at finding specific species.

Wild Bird Photography in the Philippines – Part 1
Wild Bird Photography in the Philippines – Part 3

Check out my album of Philippine birds!

Why Am I Still Stuck with T-SQL?

Not a day passes without something new coming out of software companies like Microsoft, and it has been a challenge for application developers like me to keep up. I happened to start my all-Microsoft stack effort during the heyday of Visual Basic 3 and Access. Prior to that, I was a Borland kind of kid, mesmerized at how neat my code looked when printed on reams of continuous paper.

I was really fast then at absorbing new software technologies for application development and related products, but I am no longer the kid I used to be. I am now a slow, slumbering oldie when it comes to absorbing software development technologies. I actually envy the guys who are doing and using technologies that just came out of the baking oven; I feel they are so smart to figure out new things so fast. Isn’t that great?

And every time I mingle with these developers, they wonder why I am still stuck with Transact-SQL (T-SQL). Every time .. it never fails. It makes me wonder why I am stuck with T-SQL when there are a lot of new alternatives, and it begs the question of whether I am still relevant with the times. Well, let’s see.

Am I in? Or Out?

My first serious brush with T-SQL was when I was contracted to develop a school library system. I thought of using VB3 + Access, but being a fast kid then, I opted for the latest and ended up using VB4 and SQL Server 6.5. I was mostly writing VB code, using DAO/ADO to connect to my databases, and still using client-side cursors when going through my tables here and there. With my Clipper-heavy and Access background, I had trouble adapting to SQL’s set-based processing and mindset. But in no time, I absorbed T-SQL and began moving some of my data manipulation code to stored procedures.

When VB5 came out, I upgraded the school library application with an improved database structure and all data manipulation in stored procedures. This time, no more non-T-SQL code for my data, and I was able to reuse some T-SQL code from the app’s previous version.

When VB6 came out, Microsoft touted a better data access component in RDO. Around that time, I was getting more libraries to use my system, so I virtually kept up with anything new from Microsoft. I upgraded the front-end portion of my library system while reusing all my stored procedures.

Shortly after, the Web’s irresistible force dawned on me, and I took up ASP and VBScript and migrated a portion of my library application to face the web. During this time, I also upgraded to SQL Server 7.0. I had some inline SQL code that was prone to SQL injection, but I was able to retain all my stored procedures.

When .NET came out, I had my library system upgraded to an entirely new schema, platform and language (ASP.NET, Windows Forms, ADO.NET, SQL Server 2000/2005). This time, I hired somebody else to code it for me.

Never Obsolete

With all the changes I made to the library system, the only technology that remained constant was T-SQL. In most cases, I was able to reuse code. In all cases, I was able to reap the benefits of reusing my T-SQL experience .. all this while bulking up my knowledge of T-SQL.

VB4 is gone; my T-SQL is still here. VB5 is gone; my T-SQL is still here. VB6 is gone; my T-SQL is still here. ASP is gone; still my T-SQL is here. DAO, ADO, and RDO are gone but my T-SQL remained. I moved from VB to C#, yet I am still using T-SQL.

Today we have ADO.NET and we have LINQ. Soon they too will be gone (I have heard LINQ to SQL is now deprecated). Tomorrow I can probably move from ASP.NET to Silverlight or something else new, or from ASP.NET to PHP, but I have the feeling I will still be using T-SQL. Microsoft is even going back to something as basic as tables and queues with Azure, but it can’t ignore T-SQL; thus we have SQL Azure.

Betting My Future

I am inclined to think that my investment in T-SQL has already paid for itself immensely, and I am still reaping its benefits. With the advent of the relatively new cloud computing, and various players offering various cloud technologies and services, I can’t help the urge to identify which part of the cloud computing stack will survive the onslaught of constant change and manage to stay relatively stable. I am afraid some of our current investments in other technologies won’t be as useful in the cloud, but I am betting my future once again with SQL.

What about you?
