Monday, June 22, 2009

A Beginner’s Guide to Virtualizing Exchange Server, Part 2

It isn't easy to measure the consumption of physical resources by servers running in virtual machines, since each partition has its own virtualized view of system resources. Not only that, but there is a subtle difference between virtual processors and physical processors. Brien Posey explains the special performance counters that can be used to get an accurate assessment, and goes on to describe how to test your physical servers to see if they are good candidates for virtualization.

If you read my first article in this series, then you got something of a crash course in Hyper-V’s architecture. Of everything that I covered in that article though, there are two main points that you need to keep in mind:

  1. When it comes to virtualizing Exchange Server, it is critical that you monitor resource consumption so that you can ensure that there are sufficient hardware resources available to effectively service Exchange and any other virtual machines that may be running on the server.
  2. Each of the various resource monitoring mechanisms that I showed you tells a completely different story regarding how much of the server’s physical resources are actually being consumed.

In other words, it is important to find out how much of the server’s resources are being consumed, but you are not going to be able to do so in the usual way.

Thursday, June 18, 2009

Use Backup/Restore to Minimize Upgrade Downtimes

As a SQL Server professional, at some point in your career you will need to upgrade between versions of SQL Server, or move a database from an older server onto a newer one. There are quite a few ways to go about this, the most common being Detach/Copy/Attach and Backup/Restore. When downtime is acceptable, either method can get the job done. The only caveat is that if you are performing an upgrade to a newer version of SQL Server and decide to use Detach/Copy/Attach, you should still take a backup of the database before moving it so that you have a point to fall back to. Once you attach the database files to the newer version, they will be upgraded internally and can no longer be used on the older version.
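The safety net described above is just a backup taken before the detach. As a rough sketch (database and file names here are illustrative, not from the article):

```sql
-- Safety backup before the move, so there is a fallback point
BACKUP DATABASE SalesDB
    TO DISK = N'D:\Backups\SalesDB_PreMove.bak'
    WITH CHECKSUM, INIT;

-- Detach on the old server
EXEC sp_detach_db @dbname = N'SalesDB';

-- ...copy the .mdf/.ldf files to the new server, then attach there.
-- Attaching to a newer version upgrades the files in place.
CREATE DATABASE SalesDB
    ON (FILENAME = N'E:\Data\SalesDB.mdf'),
       (FILENAME = N'E:\Logs\SalesDB_log.ldf')
    FOR ATTACH;
```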

If time is of the essence during the migration, and downtime must be minimized, the best approach will depend on the size of the database being upgraded. For a database under 4GB, it may still be acceptable to do a Detach/Copy/Attach move, but for a database of 40GB, the time it takes to copy the files to the newer server could exceed the allowable downtime for the system. If the database is 400GB, moving it by Detach/Copy/Attach will almost certainly take too long. In that case, the best path to migration/upgrade is Backup/Restore.

So how do you go about doing this?
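One common pattern for this (not necessarily the exact sequence the full article uses) is to restore a full backup on the new server ahead of time WITH NORECOVERY, then during the brief outage take and apply a final log backup. Database, file, and logical names below are assumptions for illustration:

```sql
-- Well before the outage window, on the old server:
BACKUP DATABASE BigDB
    TO DISK = N'\\NewServer\Staging\BigDB_Full.bak'
    WITH CHECKSUM;

-- On the new server, restore but leave the database in a restoring state:
RESTORE DATABASE BigDB
    FROM DISK = N'\\NewServer\Staging\BigDB_Full.bak'
    WITH NORECOVERY,
         MOVE N'BigDB'     TO N'E:\Data\BigDB.mdf',      -- logical names assumed
         MOVE N'BigDB_log' TO N'F:\Logs\BigDB_log.ldf';

-- During the short outage: back up the tail of the log on the old server
-- (NORECOVERY leaves the source database restoring, so no new writes land there)
BACKUP LOG BigDB
    TO DISK = N'\\NewServer\Staging\BigDB_Tail.trn'
    WITH NORECOVERY;

-- Apply the tail and bring the database online on the new server:
RESTORE LOG BigDB
    FROM DISK = N'\\NewServer\Staging\BigDB_Tail.trn'
    WITH RECOVERY;
```

The full backup and restore happen while the old system is still serving users, so the downtime window only has to cover the small tail-log backup, the copy of that file, and the final restore.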

Thursday, June 4, 2009

Calling a Web Service from within SQL Server

More and more shops are implementing web services. Doing this provides an architecture that allows applications to consume services to retrieve data. These services could be within your own organization or from a business partner. One of the problems you might run into when building applications that consume web services is how to use web service data within a SQL Server instance. One reason you might want to do this is so you can join a record set returned from a web service with one of your SQL Server tables. This can easily be done within an application, but how do you do it within a stored procedure that runs only in the context of SQL Server? In this article I will discuss one approach for doing this.
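The article does not name its approach here, but one common option is a SQLCLR table-valued function that calls the web service and returns its result set, which T-SQL can then join like any table. The function name and columns below are hypothetical:

```sql
-- Assumes a SQLCLR table-valued function dbo.fn_GetRatesFromService()
-- (hypothetical name) has been deployed; it calls the external web
-- service and returns rows of (CurrencyCode, Rate).
SELECT o.OrderID,
       o.Amount,
       r.Rate,
       o.Amount * r.Rate AS AmountUSD
FROM   dbo.Orders AS o                         -- local SQL Server table
JOIN   dbo.fn_GetRatesFromService() AS r       -- rows sourced from the web service
       ON r.CurrencyCode = o.CurrencyCode;
```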

Backup Monitoring and Reporting

Database recovery is a core task of database administration and is defined as executing the tasks necessary to back up, save, retrieve, and restore databases after hardware, software, and user error. Your database recovery procedures should be documented and regularly tested by restoring either a full or random sample of backups. A key part of database backups is monitoring whether backups are in fact occurring. A common mistake in backup monitoring is checking only for backup success or failure, when there are in fact three possible outcomes: success, failure, and nothing. What is nothing? Nothing is what happens when a database (or an entire SQL instance, for that matter) is not configured for backups, or when the SQL Agent job or Windows services that are often part of third-party tools are in a hung state. To achieve effective backup monitoring, you need to look for the non-event of a missing backup rather than for failed backup messages. Regardless of how a database is backed up, whether through SQL Server native backups or any one of a number of commercial backup products, a row is written to the msdb.dbo.backupset table. Knowing this, you can create daily monitoring and backup reports.
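Because every backup, native or third-party, leaves a row in msdb.dbo.backupset, the "nothing" case can be detected by outer-joining that table to the list of databases. A minimal sketch (the 24-hour threshold is an assumed policy, not from the article):

```sql
-- Report databases with no full backup in the last 24 hours,
-- including databases that have NEVER been backed up at all
-- (the "nothing" outcome: no row exists, so no failure message either).
SELECT d.name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM   sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'D'                 -- 'D' = full database backup
WHERE  d.name <> N'tempdb'             -- tempdb is never backed up
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());
```

Scheduled daily, a query like this surfaces the non-event directly: a database that silently dropped out of the backup rotation shows up on the report even though no job ever reported a failure.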