
SQL Server 2005 Book Reviews

Now that my SQL Server 2005 presentation is over, I have time to write reviews of the books I used to gather my material! My two primary resources were:

As you have probably noticed, both books have a developer focus. The reason for this is twofold. First, the presentation was for the Fort Wayne .NET user group, a group consisting primarily of developers. The other reason is that they were all I could find on the subject!

Both of these books were fabulous, particularly considering the unpredictable nature of beta software. They provide excellent preliminary documentation for the new features of SQL Server 2005, including the .NET runtime host extensions and Service Broker.

In addition to being well written and full of examples, these books are a great complement to each other. The Addison-Wesley book shines at defining the syntax of the new T-SQL extensions and providing examples. The MS Press book does not lay out the syntax but provides better examples and often includes a diagram to accompany a discussion.

As is expected with any literature based on a beta product, there are a few issues with the books. Foremost, the books make reference to the System.Data.SqlServer namespace. As of December 2004, this namespace has been removed and its functionality has been moved back into System.Data.SqlClient. Additionally, the MS Press book makes use of a “:” syntax for creating an object based on a member of an assembly. For consistency, the “:” syntax has been replaced with a “.” syntax.

If I had to suggest one of these books, my vote would immediately go to the MS Press book. The Addison-Wesley book tends to be wordy and drawn out. The MS Press book is half the length of the Addison-Wesley book and covers the same information in almost the same level of detail. It is also an easy read, and its use of diagrams makes scanning for information simple. For anyone serious about getting involved with SQL Server 2005 development, though, I would recommend reading these books together to get a better view of the overall picture.

At a Loss

As I indirectly mentioned in my previous post, I will be giving a presentation on SQL Server 2005 at the March .NUFW meeting. I have been spending a great deal of time preparing for this presentation and have most recently been studying the new XML capabilities including extensions to the FOR XML clause, the inclusion of the XML data type, and the native support for XQuery.

All of these enhancements seem exciting but I am at a loss as to why much of this is being natively supported in an RDBMS, especially one that allows for embedded CLR assemblies. Don’t get me wrong, I think that XML is a great technology and have used it many times but like everything else, it is a tool and needs to be used appropriately.

I fully understand wanting to manipulate XML as close to the database as possible to reduce the amount of network traffic typically required to build a dynamic XML document. As I mentioned above, SQL Server 2005 supports embedded CLR assemblies. One of the new features included in version 2.0 of the .NET Framework is the SqlServer namespace, which provides classes for querying and manipulating data over the context connection. This provides an easy and familiar mechanism for building and parsing XML documents inside of the database because the classes defined in the System.Xml namespace are accessible!
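To make this concrete, here is a rough sketch of an in-process CLR procedure that builds an XML document over the context connection. Because the data access namespaces shifted between betas, this follows the later Microsoft.SqlServer.Server shape rather than the Beta 1 System.Data.SqlServer namespace the books describe, and the Book table and its Title column are invented for illustration.

```csharp
using System.Data.SqlClient;
using System.Xml;
using Microsoft.SqlServer.Server;

public class XmlBuilder
{
    // Runs inside SQL Server; deployed with CREATE ASSEMBLY / CREATE PROCEDURE.
    [SqlProcedure]
    public static void GetBooksXml()
    {
        // The context connection reuses the caller's connection: no network hop.
        using (SqlConnection cn = new SqlConnection("context connection=true"))
        {
            cn.Open();
            SqlCommand cmd = new SqlCommand("SELECT Title FROM Book", cn);

            // System.Xml is fully available in-process.
            XmlDocument doc = new XmlDocument();
            XmlElement root = doc.CreateElement("books");
            doc.AppendChild(root);

            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    XmlElement book = doc.CreateElement("book");
                    book.SetAttribute("title", rdr.GetString(0));
                    root.AppendChild(book);
                }
            }

            // Send the finished document back to the client.
            SqlContext.Pipe.Send(doc.OuterXml);
        }
    }
}
```

Again, this is only a sketch of the idea; the XML is assembled next to the data and only the finished document crosses the wire.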

I even understand the need for an XML data type so that XML documents may be passed into or returned by stored procedures or user defined functions for parsing or building, respectively. What I do not understand is storing XML in a relational database!

All of the arguments I have read or heard for storing XML in a relational database fall into a couple of categories:

  • XML is universal because it uses UNICODE.
  • XML allows for native mapping of properties to objects, thus, it provides a simple mechanism for persisting and recovering the data related to an object.

These arguments have not convinced me that storing XML in a relational database is a good practice. The UNICODE argument falls apart immediately: if UNICODE is needed, values can [typically] be stored natively in the database through the use of an NCHAR or NVARCHAR data type (both SQL Server and Oracle support this). If natively storing the value as UNICODE is not an option, it can be converted later.

As for the second argument, I will agree that XML does allow for an easy mechanism for persisting and recovering object related data. This is also known as laziness! I would even argue that this approach is lazy even when the data is “guaranteed” (is anything ever certain?) not to change.

I have never seen an XML document that wasn’t able to be easily represented in a relational format. It is true that many documents require many tables but the point is that it can still be done. In many aspects, it is also more efficient to store data values individually spread across multiple tables than it is to store a single XML document (or fragment) in a single column.

Parsing the data values into individual columns across an appropriate number of tables provides many advantages over storing an entire XML document in a single column.

  • When storing raw XML, a great deal of space is wasted just to store tags and attribute names. Depending on the structure of the XML, this can quickly add up since the overhead is repeated across all of the rows in the table storing the XML.
  • What if a single value needs to be changed? How about adding a new item? If the XML document is large, this can be an expensive operation since the entire column value will need to be replaced. (It should be noted here that SQL Server 2005 does support a proprietary XML Data Manipulation Language (DML), but the XQuery specification does not currently cover this functionality, so it is likely to change in future versions.)
  • What if a different view of the data is needed? Suppose the XML represents an article and only the title is needed. The entire document would need to be read and parsed to extract a single value. The title itself could be broken out into its own column, but then other problems arise:
    1. Multiple copies of the same data exist if the title is left in the XML. This wastes space and adds complexity because the values need to be kept in sync.
    2. The title could be removed from the XML and stored in its own column. This defeats the purpose of storing the XML in the database in the first place because the title will either need to be added back into the XML for delivery to the client, or two queries will need to be run!
  • Another gripe about storing raw XML in a database is that it is difficult to query against. For example, if an XML column stores documents describing books and a user wants to find all of the books written by “Bob,” either every row needs to be returned and searched on the client using XPath, or the query must be written against the XML in the database using XQuery (SQL Server 2005 provides native support for XQuery, but the syntax is an awkward combination of T-SQL and FLWOR).
  • The XML specification does not require values to be strongly typed. Strongly typed data ensures consistency. Storing the values in individual columns that are appropriately typed will add an extra layer of protection to the data.
  • Finally, a “typical” XML document does not represent a scalar value but rather, a complex type.
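The tag-overhead point in the first bullet above is easy to quantify. A tiny, self-contained sketch (the ingredient markup is invented for illustration):

```csharp
using System;

public class TagOverhead
{
    public static void Main()
    {
        // The same three values stored as an XML fragment and as plain delimited text.
        string xml = "<ingredient><id>42</id><name>Flour</name><unit>g</unit></ingredient>";
        string raw = "42,Flour,g";

        Console.WriteLine(xml.Length); // 68 characters
        Console.WriteLine(raw.Length); // 10 characters
    }
}
```

Multiply that ratio across every row in a table and the cost of storing markup alongside the data becomes clear.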

It is for these reasons that I do not support storing XML in a relational database, even for document-centric applications. As I mentioned earlier, I have no issue with XML itself. I believe that it is a great tool and deserves a place in every developer’s toolkit. Like all tools though, it needs to be used appropriately in order to be effective. If my logic is flawed or I have missed something, please feel free to try to change my mind.

.NUFW Plug

The .NET User Group of Fort Wayne (.NUFW) meets on the second Tuesday of every month. Meetings are free and open to the public so if you are in the area and wish to attend, please stop by.

Upcoming Events
  • 2/8/2005: Message Oriented Architecture, presented by Mike Amundsen [INETA]
  • 3/8/2005: SQL Server 2005 (Yukon) Overview, presented by Dave Fancher

Please refer to the .NUFW Web site for meeting time and location.

A Matter of Great Importance

It has been a while since my last post. Work and the holidays have kept me pretty busy. During the past few weeks, I have had some time to do some of the reading that I have been meaning to do. Based on the content of the various books and journals I have been reading, it seems as though there is a great, ongoing debate as to which part of an application is more important: the user interface (UI) or the back-end code (for the purposes of this entry, I am including any data storage mechanisms and logic code including databases and stored procedures).

To me, this debate seems pointless. The answer to the question of “which is more important?” is really quite simple…both parts are equally important!

The equality of importance is apparent when one examines the true purpose of a program: to make some task easier and more efficient for someone. It is extremely important to never lose sight of this purpose since doing so will negatively impact the quality of the end product.

Both parts of a program directly impact the overall user experience. Obviously, the UI is what the end user sees and interacts with. If the UI is cumbersome and/or difficult to learn, the task has not been simplified and may in fact have actually become more difficult. User interfaces must be intuitive.

On the other side of the coin, if the back-end performs poorly (including stability issues), the user becomes upset because although the task may have been simplified, it may not be more efficient. Even the perception of poorly written code will ultimately result in lower user satisfaction. Therefore, it is important to minimize the level of poorly written code in order to minimize areas of poor performance and the perception of poor performance.

The most elegant user interface ever designed cannot save an application with a poorly written back-end. The most efficient logic code ever written will not be used or appreciated if the user cannot figure out how to exploit it because of a cumbersome UI.

It is for these [brief] reasons that I believe that equal time should be spent on the design and implementation of both the user interface and back-end portions of any program. Only when all components have been well designed and developed will user satisfaction reach its maximum potential.

Caution: Education Ahead!

Ok, so I admit that I’m a little behind the curve on some of the technology that is available right now but all that is about to change. For my birthday, my wife presented me with a new Seagate 80GB hard drive which, I am proud to say, installed without any problem. As I was installing the drive, I even decided that with the CD burner and a 256MB flash drive, I no longer needed the floppy drive, so I’m finally running without a 3.5″ drive on all of my computers.

The reason I asked my wife for a new hard drive is so I could set up my system for dual boot with a developer license of Windows Server 2003 Enterprise Edition and load other software such as SQL Server 2005 Beta 1 and Visual Studio 2005 Beta 2 so I could get myself up to speed on the upcoming technology. I don’t have everything installed yet since I still have to download the MSDN portion of Visual Studio but what I have installed so far impresses me beyond my wildest dreams.

Even though I have been reading about SQL Server 2005 for a while, tonight was my first hands-on exposure to it and, therefore, my first experience with SQL Server Enterprise Studio. I must say that so far I like what MS has done. The Visual Studio-like interface seems extremely intuitive and easy to use from what I have seen. I know that I have barely scratched the surface of the new tools, and I am left wondering how many DBAs will like such an extreme change.

In all honesty, tonight was also my first exposure to Windows Server 2003 and I am impressed with it as well. I know that the product has been available for some time but I have not had access to any resources to dig in and fiddle around. I am looking forward to playing around with IIS 6 and many of the other enhancements over Windows 2000, though I’m primarily interested in SQL Server 2005.

As I was opening the box in preparation to install the new drive, I asked my wife if she realized that the present meant I was going to disappear into my home office more than I usually do, since she was opening the door for me to install a shload of new “toys.” She said she did, but we’ll see! As I learn more about SQL Server 2005 and Visual Studio 2005, I’ll be sure to post it here.

Recent Events

It has been a few days since my last post and a lot has been going on. Over the past week I have:

  • Found a host and launched a web site for Springfield Publications
  • Attended an MSDN event in Greenwood, Indiana
  • Attended the year-end/holiday event for my organization
  • Started my holiday shopping

The event most relevant to this blog is the MSDN event, of course! This event focused on object oriented (OO) concepts, the Microsoft MapPoint Web Services, optimizing ASP.NET applications, and ASP.NET 2.0 Membership and Personalization. Overall, I really didn’t get much out of the event. To me, the most valuable session was the one about optimizing ASP.NET applications.

The OO session focused on the concepts as they relate to VB.NET and was presented in a simplistic manner. In all honesty, I found it a little hard to believe that in a theatre full of .NET developers, there were plenty of people who were unfamiliar with the pillars of OO development. It also seemed as though much of the emphasis was placed on abstraction and encapsulation as opposed to inheritance and polymorphism, the main principles on which the .NET Framework is based!

The MapPoint Web Services session seemed kind of pointless and felt more like a sales pitch than anything else. Are there applications for the Web Services? Sure, but the many applications that need some mapping functionality will be better served by linking to MapQuest. The bottom line from this session is that MapPoint is expensive, has inflexible licensing, and, although powerful, is probably overkill for many applications.

As I mentioned earlier, the best session (and actually the reason I attended this event) was the session regarding ASP.NET application optimization. At first, I was disappointed with this session since much of the first 45 minutes focused on the tools and how to use them. I am a programmer and, naturally, a tinkerer. I want to figure out the tools on my own (and trust me, they’re not that complicated to use), not have every feature demonstrated. Some nice tips were covered, such as running the tools on a separate box from the one hosting the site, but I was looking for tips on how to actually improve performance.

Fortunately, some tips were eventually discussed. Unfortunately, most of them were ones that could be found in any ASP.NET book. Optimizations discussed included:

  • Using caching
  • Using connection pooling
  • Open late/close early technique for database connections

All of the suggestions were nice but again, most of them could be found in any ASP.NET book. As always, the thing to keep in mind with these is that they are only as useful as your application allows them to be. If your application uses frequently changing data for instance, caching won’t be of much use to you for improving the performance of your application.
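As an aside, the open late/close early technique is simple to illustrate. A minimal sketch, assuming a connectionString value and a Product table that exist in your own application:

```csharp
using System;
using System.Data.SqlClient;

public class OpenLateCloseEarly
{
    // connectionString and the Product table are assumptions for illustration.
    public static int CountProducts(string connectionString)
    {
        using (SqlConnection cn = new SqlConnection(connectionString))
        {
            SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Product", cn);

            // Open as late as possible: just before the query runs.
            cn.Open();
            int count = (int)cmd.ExecuteScalar();

            // Close as early as possible: the using block returns the
            // connection to the pool the moment the work is done.
            return count;
        }
    }
}
```

The same pattern applies to the OleDb classes; the point is that the connection is held for the shortest possible time while pooling absorbs the open/close cost.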

A positive note about this session is that the presenter (Jacob Cynamon) continually emphasized the importance of designing performance into the application from the start. Poor performance should not be acceptable for any application. Applications should be tweaked to ensure the most efficient execution path is taken rather than the one that is simply easiest to maintain. I am not arguing that maintainability is not important, but I am arguing that usability is, and performance directly impacts usability. Making users wait on your application because of poorly written, inefficient code is unacceptable. Of course, the amount of acceptable wait time varies by task, but making users wait longer than necessary is a sign of low quality.

Moving on from my side rant, I did not stay for the final session. I originally intended to stick around but decided that I will look into ASP.NET 2.0 on my test workstation when I get around to setting it up rather than sit through a presentation on beta software. I’m sure that many of the new features in ASP.NET 2.0 will make my life easier but for now, I am not interested. When the next version is closer to release, I’ll look into this in more detail.

That really seems to wrap it up for the overview of the MSDN Event. Until next time, happy coding!

Any Command Line Lovers Out There?

To be honest, much of the buzz about Longhorn hasn’t really gotten me very excited. All that I’ve been really interested in is WinFS but I haven’t spent much time looking into it yet. Tonight I ran across something called “Monad” which managed to get my attention.

Announced at the Professional Developers Conference, Monad is a replacement for cmd.exe. Personally, I love using the command line. I’ve always found that most administrative tasks are much easier to perform through a command line interface than by digging through layer upon layer of windows and dialog boxes just to click one check box. Sure, some things are clumsy in the command line, but at least it’s fast, there’s no question about what is actually happening, and most importantly, there’s NO DIGGING! I even extend this to database administration, where I prefer Query Analyzer to Enterprise Manager for most SQL Server administrative tasks.

You can imagine my dismay when I read that my favorite tool in Windows administration was being replaced. “What could they possibly do to it?” I wondered.

Long story short, Microsoft has finally decided to catch up with the rest of the operating system world and switch to a pipeline based model. The nicest thing about Monad is that rather than being an ASCII-based pipeline, it’s a .NET object-based pipeline. What does all this mean? It means that administrative tasks are going to get a whole lot easier!

Perhaps one of the nicest examples I’ve seen regarding how Monad simplifies administrative tasks comes from an article on TechWorld. In the example, the task of enumerating processes to determine how much memory is being used by each one in Windows 2003 (using some third-party tools) is compared to the same task under Monad. Basically, rather than writing a custom script in Python, a new “get-process” command returns a collection of System.Diagnostics.Process objects which can be piped to Excel, an HTML file, or a .NET grid control. The results can even be piped to further commands.

Maybe Longhorn isn’t going to be so bad after all!

For more information about Monad, Google “Monad.”

Stored Procedures in Access

A common subject on the various newsgroups is that of stored procedures in Access. Until tonight, I was under the impression that stored procedures could not be used in Access except through an Access Project or a pass-through query. I was right, to an extent. Some Data Definition Language (DDL) commands (including CREATE PROCEDURE) are valid when executed against an Access database through the Jet 4.0 OleDb provider.

Despite claims that stored procedures could be created and used directly by Access I had never seen a functional example of an Access stored procedure. To those that I have debated with regarding this, I concede that CREATE PROCEDURE works with Access; Access does indeed support an extremely limited type of stored procedures starting with Access 2000.

There is a great deal of documentation floating around the internet pertaining to this subject. Nearly all of the documentation states that Access does not support stored procedures. Of the documentation that claims that Access does support stored procedures, most of it provides no example of how to create and execute them.

When I saw yet another post on this subject tonight, I began typing the typical reply of “You can’t do it, Access doesn’t support it,” but also continued looking for information to back up my claim. Everything I found in previous searches on this subject fit into the categories I described above. Tonight I stumbled across a fairly recent newsgroup discussion and a Microsoft Knowledge Base article both dealing with this subject directly.

The CREATE PROCEDURE command in Access cannot be used directly from the IDE. It must be executed over a separate connection using the Jet 4.0 OleDb provider.

The examples provided in the links above are specific to the VB Modules in Access. I decided to carry this a bit further and try this from within the .NET Framework. Below is a simple Console application that I wrote to test this. The code isn’t anything complicated; in fact, it should be pretty simple for anyone familiar with the OleDb namespace classes.

using System;

using System.Data;
using System.Data.OleDb;
using System.Text.RegularExpressions;

namespace AccessSPTest
{
public abstract class AccessTests
{
public static void Main()
{
using (OleDbConnection cn = new OleDbConnection(
@"Provider=Microsoft.Jet.OLEDB.4.0;" +
"Data Source=c:\inetpub\wwwroot\recipemanager\recipe.mdb"))
{
byte option = GetOption();
while(option!=7)
{
switch(option)
{
case 1:
CreateSelectProcedure(cn);
break;
case 2:
DropSelectProcedure(cn);
break;
case 3:
ExecuteSelectProcedure(cn);
break;
case 4:
CreateInsertProcedure(cn);
break;
case 5:
DropInsertProcedure(cn);
break;
case 6:
ExecuteInsertProcedure(cn);
break;
case 7:
break;
default:
Console.WriteLine("Invalid Option.\n");
break;
}

option = GetOption();
}
}
}

private static byte GetOption()
{
Regex re = new Regex(@"^\d{1}$");
byte option = 0;
while(option==0)
{
Console.WriteLine("Select an option:");
Console.WriteLine("\t1.) Create Select Procedure");
Console.WriteLine("\t2.) Drop Select Procedure");
Console.WriteLine("\t3.) Execute Select Procedure");
Console.WriteLine("\t4.) Create Insert Procedure");
Console.WriteLine("\t5.) Drop Insert Procedure");
Console.WriteLine("\t6.) Execute Insert Procedure");
Console.WriteLine("\t7.) Exit");
Console.Write("Enter a number (1-7): ");
string input = Console.ReadLine();

if(re.IsMatch(input))
option = Convert.ToByte(input);
}
return option;
}

private static void DropSelectProcedure(OleDbConnection conn)
{
conn.Open();
try
{
OleDbCommand cmdDrop = new OleDbCommand(
"DROP PROCEDURE sp_Ingredient2",
conn);
cmdDrop.CommandType = CommandType.Text;
cmdDrop.ExecuteNonQuery();
}
catch(Exception ex)
{
Console.WriteLine("Procedure could not be dropped.");
Console.WriteLine(ex.Message);
}
finally
{
conn.Close();
Console.WriteLine(String.Empty);
}
}

private static void CreateSelectProcedure(OleDbConnection conn)
{
conn.Open();
try
{
OleDbCommand cmdCreate = new OleDbCommand(
"CREATE PROCEDURE sp_Ingredient2(prmIngredientID int)" +
" AS SELECT * FROM Ingredient WHERE IngredientID = prmIngredientID",
conn);
cmdCreate.CommandType = CommandType.Text;
cmdCreate.ExecuteNonQuery();
}
catch(Exception ex)
{
Console.WriteLine("Procedure could not be created.");
Console.WriteLine(ex.Message);
}
finally
{
conn.Close();
Console.WriteLine(String.Empty);
}
}

private static void ExecuteSelectProcedure(OleDbConnection conn)
{
int id = 0;
Regex re = new Regex(@"^\d+$");
do
{
Console.Write("Ingredient ID: ");
string input = Console.ReadLine();
if(re.IsMatch(input))
id = Convert.ToInt32(input);
} while(id == 0);

OleDbCommand cmd = new OleDbCommand("sp_Ingredient2", conn);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add(new OleDbParameter("prmIngredientID", OleDbType.Integer));
cmd.Parameters["prmIngredientID"].Value = id;

conn.Open();
try
{
using(OleDbDataReader reader = cmd.ExecuteReader())
{
while(reader.Read())
{
// Write the ID and Ingredient Name
Console.WriteLine("{0}\t{1}", reader.GetInt32(0), reader.GetString(1));
}
}
}
catch(Exception ex)
{
Console.WriteLine("Could not execute select procedure.");
Console.WriteLine(ex.Message);
}
finally
{
conn.Close();
Console.WriteLine(String.Empty);
}
}

private static void DropInsertProcedure(OleDbConnection conn)
{
conn.Open();
try
{
OleDbCommand cmdDrop = new OleDbCommand(
"DROP PROCEDURE sp_InsertIngredient", conn);
cmdDrop.CommandType = CommandType.Text;
cmdDrop.ExecuteNonQuery();
}
catch(Exception ex)
{
Console.WriteLine("Procedure could not be dropped.");
Console.WriteLine(ex.Message);
}
finally
{
conn.Close();
Console.WriteLine(String.Empty);
}
}

private static void CreateInsertProcedure(OleDbConnection conn)
{
conn.Open();
try
{
OleDbCommand cmdCreate = new OleDbCommand(
"CREATE PROCEDURE sp_InsertIngredient(prmIngredientName varchar(25))" +
" AS INSERT INTO Ingredient (Description) VALUES (prmIngredientName)"
, conn);
cmdCreate.CommandType = CommandType.Text;
cmdCreate.ExecuteNonQuery();
}
catch(Exception ex)
{
Console.WriteLine("Procedure could not be created.");
Console.WriteLine(ex.Message);
}
finally
{
conn.Close();
Console.WriteLine(String.Empty);
}
}

private static void ExecuteInsertProcedure(OleDbConnection conn)
{
string value = String.Empty;
do
{
Console.Write("Value to insert: ");
value = Console.ReadLine();
} while(value == String.Empty);

OleDbCommand cmd = new OleDbCommand("sp_InsertIngredient", conn);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add(new OleDbParameter("prmIngredientName",
OleDbType.VarChar, 25));
cmd.Parameters["prmIngredientName"].Value = value;
conn.Open();
try
{
// An INSERT returns no rows, so use ExecuteNonQuery
// and report the number of rows affected instead of
// trying to read a result set
int rows = cmd.ExecuteNonQuery();
Console.WriteLine("{0} row(s) inserted.", rows);
}
catch(Exception ex)
{
Console.WriteLine("Could not execute insert procedure.");
Console.WriteLine(ex.Message);
}
finally
{
conn.Close();
Console.WriteLine(String.Empty);
}
}
}
}

Note that this code does not check if an object exists before the drop statement is called. Access keeps track of objects in a system table called MSysObjects. Checking if an object exists requires some up front configuration.

The first step toward verifying the presence of an object is to make sure that system tables are visible in Access. If they are not visible, go to Tools / Options…, check the box for system tables in the Show group on the View tab, and apply the change.

The second step is to ensure that the user account has read access to the design and data of the MSysObjects table. By default, this access has not been granted. To enable the access, go to Tools / Security / User and Group Permissions…. Select the appropriate user account and ensure that “Object Type” is set to “Table,” then click the MSysObjects table in the “Object Name” list. Check “Read Data”; “Read Design” should be checked automatically. Apply the change.

At this point, the permissions to access MSysObjects should be set properly. To check if an object exists, execute the following command:

SELECT COUNT(*) FROM MSysObjects WHERE Name=?

Executing the above statement through an OleDbCommand using ExecuteScalar() will return an integer of 0 or 1. If the value is 1, the object exists.

OleDbCommand cmd = new OleDbCommand(
"SELECT COUNT(*) FROM MSysObjects WHERE Name=?", conn);
cmd.Parameters.Add(new OleDbParameter("param1", OleDbType.VarChar, 255));
cmd.Parameters["param1"].Value = object_name;
conn.Open();
int matches = (int)cmd.ExecuteScalar();
if(matches==1)
Console.WriteLine("Object Found.");
else
Console.WriteLine("Object Not Found.");
conn.Close();

The DDL seems to follow the SQL standards pretty well in terms of syntax, but that is where the commonality stops. Recall that I said these stored procedures are extremely limited. To see how limited these so-called “stored procedures” truly are, a look under the hood is necessary.

Like most modern database management systems, Access divides its database objects into a number of categories, including Tables, Users, and so on. For the purposes of this discussion, I will limit the focus to Access’ Query type.

Whenever an object is created in Access, a row is added to the MSysObjects system table. The MSysObjects table (which we made visible earlier) maintains configuration and reference data about each object including its type. A value of “5” in the Type column indicates that the object represented by the given row is a stored query.

Any object created with the CREATE PROCEDURE command has a type of 5. This tells us that a procedure was not created; a QueryDef was created instead. In fact, the online Office reference makes no attempt to hide that fact, as the following excerpt indicates. I’ve provided the link for reference.

procedure

Similar to a view except that the SQL syntax can allow for SELECT statements (row returning queries) to pass parameters and allow for DML (non-row returning queries) UPDATE, INSERT, DELETE, and SELECT INTO. Literally the same as a querydef, except that it can be created from SQL syntax without the use of the DAO object model.

In Access 10 (I haven’t checked later versions), these objects are listed in the Query list and are displayed with the corresponding “Action Query” icon. We can also look at the SQL definition of one of the objects for additional confirmation.

PARAMETERS prmIngredientName Text ( 255 );

INSERT INTO Ingredient ( Description )
SELECT prmIngredientName AS Expr1;

What does all of this looking under the hood mean? It means that the objects that are created with the CREATE PROCEDURE command are not stored procedures as enterprise developers know them. They provide no support for any type of flow control statements or other logic; they are simply single statement queries that are capable of accepting parameters. This is nothing more than a roundabout way to create objects that are accessible from the Query menu.

This being said, it would be nice to be able to execute a statement such as:

IF EXISTS(SELECT * FROM MSysObjects WHERE Name="objName") DROP PROCEDURE objName

but statements like that just won’t work due to the limited functionality of Access’ SQL processor.
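One workaround is to perform the existence check and the drop as two statements from the client. DropProcedureIfExists below is a hypothetical helper assembled from the pieces shown above, assuming the MSysObjects permissions have already been granted:

```csharp
using System;
using System.Data;
using System.Data.OleDb;

public class JetHelpers
{
    // Hypothetical helper: emulates IF EXISTS ... DROP PROCEDURE,
    // which Jet SQL cannot express in a single statement.
    public static void DropProcedureIfExists(OleDbConnection conn, string name)
    {
        conn.Open();
        try
        {
            // Check MSysObjects for the object by name.
            OleDbCommand check = new OleDbCommand(
                "SELECT COUNT(*) FROM MSysObjects WHERE Name=?", conn);
            check.Parameters.Add(new OleDbParameter("param1", OleDbType.VarChar, 255));
            check.Parameters["param1"].Value = name;

            if ((int)check.ExecuteScalar() == 1)
            {
                // DDL does not accept parameters, so the name is concatenated;
                // only pass trusted, hard-coded names to this helper.
                OleDbCommand drop = new OleDbCommand(
                    "DROP PROCEDURE " + name, conn);
                drop.ExecuteNonQuery();
            }
        }
        finally
        {
            conn.Close();
        }
    }
}
```

It is two round trips instead of one, but it gives the create/drop test code shown earlier a clean way to avoid the “object does not exist” errors.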

More often than not, Access is not the proper tool for the job. Don’t get me wrong, Access is great for small, workgroup-based solutions, but a more powerful DBMS (such as SQL Server or Oracle) is needed when the problem being addressed extends much further than that. Misleading statements such as “Access supports stored procedures” only serve to compound scalability and performance issues when inexperienced developers are faced with the decision of which DBMS to use.

Inheritance, Polymorphism, and Casting

In browsing various newsgroups, I found a number of questions pertaining to inheritance, polymorphism, and casting (typically all in the same post). It seemed like quite a few people out there are confused about these concepts and how they are applied so hopefully this entry can clear things up.

Inheritance allows us to extend the functionality of a class (base class) to provide more specialized functionality in a derived (child) class. This is accomplished by: providing new functionality and/or overriding (or hiding) existing functionality.

Polymorphism, on the other hand, can almost be thought of as reverse inheritance; rather than adding functionality by becoming more specific, functionality is removed by becoming less specific. This allows multiple, related classes that provide similar functionality to be used in a generic sense. For example, a class may be substituted with its base class in order to write code that works with all of the specialized classes without being tied to any one of them.

Casting instructs the runtime to "convert" a variable of one type into another type. What is actually happening is that the program is saying "for the object located at X, use the memory structure for type A rather than type B." A common example of casting comes from ADO.NET: when using the ExecuteScalar() method of a Command object to retrieve a count of rows, it is necessary to cast the result from Object to a numeric type: int cnt = (int)cmd.ExecuteScalar();

Casting operations can be performed implicitly or explicitly. Implicit casts (such as int to long) are performed automatically because no information can be lost; the destination type can represent every value of the source type. Explicit casts (such as long to int) can lose information, so they must be spelled out in the code.
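To make the distinction concrete, here is a minimal sketch (the variable names are invented for this illustration):

```csharp
using System;

class CastingDemo
{
    static void Main()
    {
        int small = 42;
        long big = small;        // implicit cast: every int fits in a long,
                                 // so no cast operator is required

        long large = 1234;
        int back = (int)large;   // explicit cast: a long may not fit in an int,
                                 // so the compiler requires the (int) operator

        Console.WriteLine(big);  // prints 42
        Console.WriteLine(back); // prints 1234
    }
}
```

Removing the (int) operator from the explicit cast produces a compile-time error, which is exactly the point: the compiler forces you to acknowledge the possible loss of information.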

Note that casting requires compatible types, meaning that a conversion must be defined between them. Casting from a derived class to its base class may be performed implicitly, while casting from a base class to a derived class must be performed explicitly.

Consider the following code, which demonstrates these principles. To run this code, create a new Console project in Visual Studio and copy/paste it into the automatically created file. (For command-line compilation, copy/paste the code into Notepad, save it as example.cs, and execute csc example.cs at the command line.)

/* Note that this code does not necessarily use best practices.
   It is included for illustration purposes only. */

using System;

namespace InheritanceAndPolymorphism
{
    public interface ISpeaks
    {
        void Speak();
    }

    public abstract class Animal
    {
        public string Name;
        public double Weight = 0;
        public double NumberOfLegs = 4;

        // All animals sleep
        public void Sleep()
        {
            Console.WriteLine("Zzzzzz.....");
        }

        // All animals need to eat
        public virtual void Eat()
        {
            Console.WriteLine("Animal is eating");
        }
    }

    public class Cat : Animal, ISpeaks
    {
        public void Meow()
        {
            Console.WriteLine("Meow!");
        }

        // Required by ISpeaks
        public void Speak()
        {
            Meow();
        }

        public void Purr()
        {
            Console.WriteLine("Purrrrrrrrr");
        }

        public override void Eat()
        {
            Console.WriteLine("Cat is eating");
        }
    }

    public class Dog : Animal, ISpeaks
    {
        public void Bark()
        {
            Console.WriteLine("WOOF!");
        }

        // Required by ISpeaks
        public void Speak()
        {
            Bark();
        }

        public void WagTail()
        {
            Console.WriteLine("*WAG* *WAG* *WAG*");
        }

        public override void Eat()
        {
            Console.WriteLine("Dog is eating");
        }
    }

    public class StartUp
    {
        public static void Main()
        {
            Cat c = new Cat();
            Dog d = new Dog();

            Sleep(c);
            Sleep(d);
            Eat(c);
            Eat(d);
            Speak(c);
            Speak(d);
            Speak2(c);
            Speak2(d);
            MakeHappy(c);
            MakeHappy(d);
        }

        private static void Sleep(Animal a)
        {
            /* The Animal class defines what must occur for a generic animal to sleep.
               The Sleep() method is not marked as virtual so we are not assuming that
               the method will be overridden (it may be hidden, however). In this case,
               the following line will call the Sleep() method of the Animal class. */
            a.Sleep();
        }

        private static void Eat(Animal a)
        {
            /* The Animal class defines what must occur for a generic animal to eat,
               but Eat() is marked as virtual so it can be overridden by derived
               classes. In this case, the following line will call the appropriate
               Eat() method based on the type that is passed in to "a." If
               Eat(new Dog()) is called, then Dog.Eat() is executed. */
            a.Eat();
        }

        private static void Speak(Animal a)
        {
            /* The Animal class doesn't define the sound that an animal should make
               so we need to determine which kind of animal we're working with. */
            if(a is Cat)
                ((Cat)a).Meow();
            else if(a is Dog)
                ((Dog)a).Bark();
        }

        private static void Speak2(ISpeaks s)
        {
            /* The interface ISpeaks requires that its implementors include a method
               called Speak(). No other functionality is available. */
            s.Speak();
        }

        private static void MakeHappy(Animal a)
        {
            /* The Animal class doesn't define what animals do when they're happy so
               we need to determine which kind of animal we're working with. */
            if(a is Cat)
                ((Cat)a).Purr();
            else if(a is Dog)
                ((Dog)a).WagTail();
        }
    }
}

After compiling and executing the code, the following is displayed:

Zzzzzz.....
Zzzzzz.....
Cat is eating
Dog is eating
Meow!
WOOF!
Meow!
WOOF!
Purrrrrrrrr
*WAG* *WAG* *WAG*

The Animal class provides some basic functionality that each animal will need to have. Eat() and Sleep() methods have been defined along with some basic fields such as NumberOfLegs.

A Cat class and Dog class are also defined and each derives from Animal. The Cat and Dog classes each inherit the functionality provided by the Animal class and extend upon it by adding some new functionality. The added functionality is only accessible from the class in which it was defined. Notice that each class overrides the functionality provided by the Eat() method. The Cat and Dog classes also implement the interface ISpeaks which requires that a method called Speak() be defined.

The StartUp class defines the entry point (Main()) and a few other methods to demonstrate the concepts of inheritance and polymorphism. When the program executes, an instance of the Cat class and an instance of the Dog class are created, and each of the helper methods is called for both the Cat and the Dog.

Each of the methods called by Main() illustrates polymorphism since each accepts an Animal (from which Cat and Dog are derived) as an argument. At that point, only the functionality provided by the Animal class is accessible, despite the fact that a Cat or Dog was passed to the method. This is illustrated by the Sleep() and Eat() methods of the StartUp class.

The Sleep() method calls a.Sleep(). As indicated in the output above, when Sleep() is called, the Sleep() method from the Animal class is executed, as expected, because the functionality provided by Sleep() is inherited by the Cat and Dog classes. Where the polymorphism is truly apparent, though, is in the Eat() method.

When Eat() is called, the output differs based on the type of animal that was passed in, despite the fact that a.Eat() was called in both cases. Both the Cat and Dog classes override the Eat() method and provide their own functionality. The CLR knows that, despite being declared as type Animal, the variable "a" really refers to a more specialized type (Cat or Dog) and dispatches to the appropriate override.

For another example of polymorphism in action, the Speak2() method is included. The Speak2() method accepts a parameter of type ISpeaks rather than Animal. Remember that the Cat and Dog classes both implement the ISpeaks interface and are therefore required to define a method called Speak().

When Speak2() is called, the Cat or Dog is implicitly cast to the ISpeaks parameter type. The Speak() method of the parameter is then called and, just as with the virtual Eat() method, the CLR knows the actual type of the variable and calls the appropriate implementation.

Explicit casting is used by the Speak() and MakeHappy() methods. Additionally, these methods use the C# "is" keyword to determine the actual type that was passed to the method. The Animal class does not provide definitions for Meow() or Bark(), for example, so the variable must be cast back to its actual type before that functionality can be used.
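As an aside, C# also offers the "as" operator, which behaves like an explicit reference cast but yields null instead of throwing an InvalidCastException when the types are incompatible. A minimal sketch (using stripped-down stand-ins for the Animal, Cat, and Dog types above):

```csharp
using System;

class Animal { }
class Cat : Animal { public void Meow() { Console.WriteLine("Meow!"); } }
class Dog : Animal { public void Bark() { Console.WriteLine("WOOF!"); } }

class SafeCastDemo
{
    static void Main()
    {
        Animal a = new Cat();   // implicit upcast from Cat to Animal

        Cat c = a as Cat;       // succeeds: c refers to the Cat
        Dog d = a as Dog;       // fails gracefully: d is null, no exception

        if (c != null)
            c.Meow();                 // prints "Meow!"
        Console.WriteLine(d == null); // prints "True"
    }
}
```

The "is"-then-cast pattern used in Speak() and MakeHappy() and the "as"-then-null-check pattern shown here accomplish the same thing; which to use is largely a matter of taste.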

In conclusion, inheritance, polymorphism, and casting are powerful tools, but using them well reinforces the importance of good application design. The examples shown here barely scratch the surface of what these concepts make possible, but they should serve as a good foundation for further study.