
C# 6.0 – Semicolon Operator

I first read about the proposed semicolon operator a few weeks ago and, to be honest, I’m a bit surprised by the desire for it in C#, if for no other reason than that C# isn’t an expression-based language. To me, this feature feels like another attempt to shoehorn functional approaches into an object-oriented language. If I understand the feature correctly, the idea is to allow variables to be defined within and scoped to an expression. The following snippet, adapted from the language feature implementation status page, shows the operator in action:

var result = (var x = Foo(); Write(x); x * x);

In this code, everything within the parentheses constitutes a single expression. The expression invokes Foo, assigns the result to x, passes x to the Write function, then returns the square of x, which is ultimately assigned to result. Because x is scoped to the expression, it is not visible outside of the parentheses. I think this seems a bit awkward in C# and, what’s more, I don’t know what value it adds that functions don’t already give us. I haven’t really decided whether the above example is more readable or maintainable than if we’d defined a function called WriteAndSquare.
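
For comparison, here’s a rough sketch of what that hypothetical WriteAndSquare function might look like (Foo and Write are the same undefined placeholders used in the snippet above, assumed to return and accept an int respectively):

static int WriteAndSquare()
{
  var x = Foo();   // x is now scoped to the function rather than to an expression
  Write(x);
  return x * x;
}

var result = WriteAndSquare(); // equivalent to the expression-scoped version above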

Interestingly, this capability already exists in F# (albeit in a slightly more verbose form), which isn’t really all that surprising since F# is an expression-based language.

let result =
  let x = Foo()
  printfn "%i" x
  x * x

Even in F# though, I think I’d still prefer factoring out the expression into a function.

C# 6.0 – Null Propagation Operator

[7/30/2015] This article was written against a pre-release version of C# 6.0 but is still relevant. Be sure to check out the list of my five favorite C# 6.0 features for content written against the release!

One of the most common complaints about C# (especially among the F# crowd) is the amount of code required to properly deal with null values. Consider the following type definitions and invocation chain:

Type Definitions

class PriceBreak(int min, int max, double price)
{
  public int MinQuantity { get; } = min;
  public int MaxQuantity { get; } = max;
  public double Price { get; } = price;
}

class Product(IList<PriceBreak> priceBreaks)
{
  public IList<PriceBreak> PriceBreaks { get; } = priceBreaks;
}

Invocation Chain

var minPrice = product.PriceBreaks[0].Price;

As it’s written, the preceding invocation chain is full of problems (we’ll ignore the potential ArgumentOutOfRangeException for this discussion). In fact, it can throw a NullReferenceException when any of the following values is null:

  • product
  • product.PriceBreaks
  • product.PriceBreaks[0]

To properly avoid the exception we need to explicitly check for null. One possible approach would look like this:

double minPrice = 0;

if (product != null
    && product.PriceBreaks != null
    && product.PriceBreaks[0] != null)
{
  minPrice = product.PriceBreaks[0].Price;
}

Now we can be certain that we won’t encounter a NullReferenceException, but we’ve greatly increased the complexity of the code. Rather than a single line, we have five that actually do something and, of those, fewer than half provide any value beyond making sure we don’t see an exception. C# 6.0’s null propagation operator (?.) gives us the safety of checking for nulls while avoiding the verbose ceremony. The result is much cleaner and closely resembles the invocation chain shown earlier:

var minPrice = product?.PriceBreaks?[0]?.Price;

Introducing the null propagation operator allows us to chain each call while short-circuiting on any nulls encountered along the way. In this case, the return type is double? (Nullable<double>) because anything preceding Price can be null. As such, we can either defer checking for null until we try to use the value or we can simply use the null-coalescing operator to supply a default as follows:

var minPrice = product?.PriceBreaks?[0]?.Price ?? 0;
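
If deferring the check makes more sense, a quick sketch of testing the nullable result before using it might look like this:

var minPrice = product?.PriceBreaks?[0]?.Price;

if (minPrice.HasValue)
{
  // Only use the value once we know the whole chain actually produced one.
  Console.WriteLine(minPrice.Value);
}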

One limitation of the null propagation operator is that it can’t work directly with delegate invocation. For instance, the following code is invalid:

var result = myDelegate?("someArgument");

Fortunately, there’s a workaround; rather than invoking the delegate directly as shown above, we can invoke it via its Invoke method like this:

var result = myDelegate?.Invoke("someArgument");
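
To put the workaround in context, here’s a minimal sketch; the delegate type and argument are just placeholders consistent with the snippet above:

// Any delegate type works the same way; Func<string, int> is just an example.
Func<string, int> myDelegate = s => s.Length;

// Evaluates to null (an int? here) instead of throwing when myDelegate is null.
var result = myDelegate?.Invoke("someArgument");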

The null propagation operator should eliminate a great deal of null reference checking code although I still believe F#’s approach of virtually eliminating null is superior. Honestly, my biggest concern about the null propagation operator is that it could encourage Law of Demeter violations but that’s a discussion for another day.

Enabling the C# 6.0 Features in the VS “14” CTP

If you’ve been following along with my C# 6.0 features series or have otherwise wanted to experiment with some of the new language features, you may have noticed that they’re not enabled by default. Fortunately, Anthony Green included instructions on how to enable them in a post on the C# FAQ MSDN blog.

To summarize the instructions, the language features are experimental so they must be enabled by manually adding the following to your project file:

<LangVersion>experimental</LangVersion>

This element should be placed within the appropriate PropertyGroup element(s) for your project. Once you’ve added the element, reload the project and enjoy the new features!
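
For reference, the element ends up sitting in the project file something like this (the surrounding property is just a placeholder from a typical csproj):

<PropertyGroup>
  <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
  <LangVersion>experimental</LangVersion>
</PropertyGroup>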

C# 6.0 – Exception Filters

[7/30/2015] This article was written against a pre-release version of C# 6.0. Be sure to check out the list of my five favorite C# 6.0 features for content written against the release!

I debated quite a bit about whether I’d write about C# 6.0’s exception filters because, honestly, I was indifferent about them at first. What convinced me to include them in this blog series is that the language feature implementation status page lists the feature as done. After playing with the feature a bit, I have to admit that they’re actually pretty nice and feel like a natural extension to the language as they’ve been implemented. As the CTP notes state, including exception filters with C# increases feature parity with VB and F# (although I still prefer F#’s exception pattern matching, of course).

Traditionally, whenever we’ve caught an exception we’ve had to either handle it somehow or rethrow it. Rethrowing an exception incurs the overhead of handling it again and also has a strong tendency to jack up the call stack as far as reporting goes. If we want to take a different action according to some condition, we need some kind of conditional structure inside the handler. The following code shows a traditional approach to conditionally handling an exception based on its Message property.

try
{
  throw new Exception("Forced Exception");
}
catch (Exception ex)
{
  if (ex.Message.Equals("Forced Exception", StringComparison.InvariantCultureIgnoreCase))
  {
    WriteLine("Filtered handler");
  }

  throw;
}

The preceding snippet is pretty simple, but it’s easy to see how more complex filtering scenarios could complicate the handler code. Exception filters improve this story by catching exceptions only when some criterion is met. If the criterion isn’t met, the exception processing logic proceeds as normal, either moving on to the next handler definition or bubbling up. Here’s the above example revised to use exception filtering:

try
{
  throw new Exception("Forced Exception");
}
catch (Exception ex) if (ex.Message.Equals("Forced Exception", StringComparison.InvariantCultureIgnoreCase))
{
  WriteLine("Filtered handler");
}

Here you can see how the condition has been moved into the catch block itself via a contextual if clause. Now the handler body will execute if and only if the expression within the parentheses evaluates to true. At this point you might be thinking about ways to abuse this, and you wouldn’t be alone. In fact, the CTP notes point out how a false-returning function can be used as an exception filter to cause some side-effect (such as logging) related to the exception without actually handling it.
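
For example, a sketch of that trick might look something like this (LogAndReturnFalse is a name I’ve made up for illustration):

static bool LogAndReturnFalse(Exception ex)
{
  Console.WriteLine("Logged: " + ex.Message);
  return false; // never true, so the exception is never actually handled here
}

try
{
  throw new Exception("Forced Exception");
}
catch (Exception ex) if (LogAndReturnFalse(ex))
{
  // Never reached; the filter only exists for its logging side-effect.
}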

C# 6.0 – Declaration Expressions

[7/30/2015] This article was written against a pre-release version of C# 6.0. Be sure to check out the list of my five favorite C# 6.0 features for content written against the release!

[Update: 1 Oct 2014] It has been said that if something sounds too good to be true, it probably is. The adage apparently even applies to the proposed C# 6 features that I was excited about. I’m sad to say that it was announced today that declaration expressions would not be part of C# 6.

The C# 6.0 feature I’m probably the most excited about is declaration expressions. That the feature is listed as done on the language feature implementation status page and is included in the CTP only enhances my excitement.

Declaration expressions allow us to define local variables within an expression and scope them to the nearest block. As the CTP C# Features notes, this is particularly useful in conjunction with out parameters. Consider the traditional approach for working with an out parameter:

int result;
var myInt =
  int.TryParse("42", out result)
    ? result
    : 0;

I (like many others) hate this pattern, especially after having been so spoiled with F#’s approach of wrapping the call and returning both the status and parsed value as a tuple. Since C# probably won’t include usable Tuples anytime soon, declaration expressions provide a nice alternative as shown here:

var myInt =
  int.TryParse("42", out var result)
    ? result
    : 0;

In the previous snippet, we’ve moved the declaration of result into the TryParse call itself! Because I opted to use the conditional operator here, result is still visible within the parent context, but other constructs (such as if statements) can limit the scope, thus decreasing the risk of dealing with an uninitialized value. I call that a win.
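
As a quick illustration of that scoping point, an if statement keeps result contained to the block where parsing actually succeeded (at least under the CTP’s scoping rules as I understand them):

if (int.TryParse("42", out var result))
{
  // result is only meaningful (and, per the CTP design, only visible) in here
  Console.WriteLine(result);
}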

Declaration expressions open up all sorts of other possibilities, too. For instance, the CTP notes show how they can improve the developer experience in cases where expressions are required, using query expressions as an example:

var myInts =
  from ps in
    (from s in strings
     select (int.TryParse(s, out var result) ? result : (int?)null))
  where ps.HasValue
  select ps.Value;

Without declaration expressions, the preceding query expression would have required us to wrap the parsing logic in another function, but because we can include the result variable definition inline, such a query is possible.
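
For comparison, without declaration expressions we’d need to pull the parsing out into a small helper, something like this (TryParseNullable is a name I’m inventing here):

static int? TryParseNullable(string s)
{
  int result;
  return int.TryParse(s, out result) ? result : (int?)null;
}

var myInts =
  from ps in (from s in strings select TryParseNullable(s))
  where ps.HasValue
  select ps.Value;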

Finally, how many times have you written something like this?

var myStr = myObj as string;
if (myStr != null)
{
  ...
}

The as operator pattern doesn’t bother me nearly as much as the out parameter pattern but it still feels dirty since it exposes the variable outside of where it’s actually needed. Declaration expressions let us make this a bit prettier, again by moving the declaration inline with the if statement as follows:

if ((var myStr = myObj as string) != null)
{
  ...
}

I like this approach a bit better than the traditional approach, not because it lets us write less code (it doesn’t, really, since the lines are just merged) but because it limits the scope of myStr.

So yeah, I’m pretty excited about this feature and think it’s safe to assume that I’ll be adopting it just as quickly as I do primary constructors once it’s generally available.

C# 6.0 – Index Initializers

[7/30/2015] This article was written against a pre-release version of C# 6.0. Be sure to check out the list of my five favorite C# 6.0 features for content written against the release!

After two posts about new C# features that I like, I thought it would be fun to change the pace a bit and discuss one whose usefulness my initial impressions leave me questioning: index initializers. I think the reason I’m confused about this feature is that all of the examples I’ve seen around it show dictionary initialization like this:

var numbers =
    new Dictionary<int, string>
    {
        [7] = "seven",
        [9] = "nine",
        [13] = "thirteen"
    };

My problem with this is that C# has allowed initializing dictionaries in a very similar manner since version 3! Here’s the same dictionary initialized with the more traditional object initializer syntax:

var numbers =
    new Dictionary<int, string>
    {
        { 7, "seven" },
        { 9, "nine" },
        { 13, "thirteen" }
    };

So we’ve swapped pairs of curly braces for pairs of square brackets and some commas for assignments? I just don’t see the benefit. It’s not that I’m opposed to index initializers; I’m confused as to why another syntax is needed for something we’ve been able to do in almost exactly the same way for years. The hint that may clear up the reasoning comes from the CTP notes, which state:

We are adding a new syntax to object initializers allowing you to set values to keys through any indexer that the new object has

Given that statement, I can see index initializers being more useful in conjunction with custom types. Even so, it seems like the only real benefit would be forgoing specific Add method overloads on the custom type, and index initializers would still be at the mercy of said type having a compatible indexer property. Unfortunately, index initializers aren’t available in the CTP, so I can’t really experiment with them at this time and anything else I could say about the feature would be pure speculation.
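
That said, here’s a purely speculative sketch of how an index initializer might pair with a custom type that exposes an indexer but no Add overloads (the Settings type and its members are entirely made up):

using System.Collections.Generic;

class Settings
{
    private readonly Dictionary<string, string> values = new Dictionary<string, string>();

    public string this[string key]
    {
        get { return values[key]; }
        set { values[key] = value; }
    }
}

// Without an Add method the classic { key, value } initializer isn't an option,
// but the new index initializer syntax would presumably work through the indexer:
var settings =
    new Settings
    {
        ["Theme"] = "Dark",
        ["Timeout"] = "30"
    };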

C# 6.0 – Using Static

[7/30/2015] This article was written against a pre-release version of C# 6.0. Be sure to check out the list of my five favorite C# 6.0 features for content written against the release!

In this installment of my ongoing series on likely C# 6 language features, I’ll be covering one of the features listed as “Done” on the language feature implementation status page: using static. The idea behind using static is to allow importing the members of static classes, thus removing the requirement to qualify every member of an imported class with its owning type. For the F#ers reading this, using static brings open module to C#.

Consider a method that writes some text to the console:

public void DoSomething()
{
  Console.WriteLine("Line 1");
  Console.WriteLine("Line 2");
  Console.WriteLine("Line 3");
}

In this example, we’ve had to specify the Console class three times. With the using static feature, we can simply invoke the WriteLine method:

using System.Console;

public void DoSomething()
{
  WriteLine("Line 1");
  WriteLine("Line 2");
  WriteLine("Line 3");
}

The primary benefit in this contrived example is eliminating redundancy, but consider a more practical example that makes use of some of System.Math’s members:

class Circle(int radius)
{
  public int Radius { get; } = radius;

  public double GetArea()
  {
    return Math.PI * Math.Pow(Radius, 2);
  }
}

Here we’ve had to qualify both PI and Pow with the Math class. Granted, the qualifier only appears twice when calculating the area of a circle, but it’s easy to imagine the amount of noise it would generate in more complex computations. In these cases, using static is less about eliminating redundancy and more about letting you stay focused on the problem, as you can see in this revised example:

using System.Math;

class Circle(int radius)
{
  public int Radius { get; } = radius;

  public double GetArea()
  {
    return PI * Pow(Radius, 2);
  }
}

With both references to the Math class removed from the GetArea function, it’s much more readable.

I have to admit that I’m pretty excited about this feature. I can see it going a long way toward making code more maintainable.

C# 6.0 – Primary Constructors and Auto-Implemented Property Initializers

[7/30/2015] This article was written against a pre-release version of C# 6.0. Be sure to check out the list of my five favorite C# 6.0 features for content written against the release!

[Update: 1 Oct 2014] It has been said that if something sounds too good to be true, it probably is. The adage apparently even applies to the proposed C# 6 features that I was excited about. I’m sad to say that it was announced today that primary constructors would not be part of C# 6. It also sounds like there will be some changes around readonly auto-implemented properties.

As much as I prefer working in F#, I can’t ignore the fact that most of my work is still in C#. With Visual Studio “14” now in CTP 2 with some of the C# 6.0 features, it makes sense to take a more serious look at what’s in the works or has already been implemented. As such, I’ll be spending the next few articles describing some of these features and capturing my initial thoughts about them. In this article I’ll cover auto-implemented property initializers and primary constructors. Although these are separate features I suspect they’ll often be used together so it seems appropriate to discuss them at the same time. As with any CTP, everything I examine here is definitely subject to change but information regarding language feature implementation status can be found on the Roslyn Codeplex page.

Anyone familiar with F# should immediately recognize both of these features because they’ve been available in F# for years. I think both of these features are a nice addition to C# because they have the potential to greatly reduce the language’s verbosity and bring some feature parity with F# but I still like F#’s approach better.

Auto-Implemented Property Initializers

Auto-implemented properties are being enhanced in two ways: they can be initialized inline, and they can be defined without a setter. With inline initialization we can provide an initial value for an auto-implemented property without having to set the property manually via a constructor. For instance, if we have a Circle class with an auto-implemented Radius property we could initialize it as follows:

public class Circle
{
    public int Radius { get; set; } = 0;
}

What’s nice about the initializer syntax is that it sets the generated backing field rather than explicitly invoking the setter through a constructor or other mechanism. This feature also allows us to define a getter-only auto-implemented property, like this:

public class Circle
{
    public int Radius { get; } = 0;
}

As much as I appreciate these enhancements and will happily embrace them when they’re available, it bugs me that the type is still required in the property definition. It would be really nice to have the C# compiler infer the type from the initializer like F# does but for now, this is a nice start.

Primary Constructors

Primary constructors provide a mechanism by which a class (or struct) can accept parameters without a formal constructor declaration by including them in the class definition. The values defined in the primary constructor are scoped to the class but their lifetime is limited to class initialization by default. This makes them perfect for setting fields or initializing auto-implemented properties. Here we include a primary constructor for the Circle class and use it to initialize the Radius property:

public class Circle(int radius)
{
    public int Radius { get; } = radius;
}

The scoping rules for the values identified in the primary constructor are one place where C#’s primary constructors differ from F#’s (yes, I prefer F#’s approach here, too). As I previously mentioned, by default the primary constructor values are available only during class initialization. This means that while you’re free to use them for initialization, you can’t reference them in any methods. For instance, if we wanted to include a GetArea method in our Circle class, the following approach would be invalid:

public class Circle(int radius)
{
    public int Radius { get; } = radius;

    public double GetArea()
    {
        return Math.PI * Math.Pow(radius, 2); // invalid: radius isn't available outside of initialization
    }
}

It would be really nice if we could include an access modifier or attribute instructing the compiler to automatically generate a field, but for now it looks like we’ll have to make do with initialization scoping.
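
The practical workaround is to capture the value during initialization and reference the property (or a field) from the method instead, which is essentially what the Circle example from the using static post does:

public class Circle(int radius)
{
    public int Radius { get; } = radius;

    public double GetArea()
    {
        // Radius was captured during initialization, so referencing it here is fine.
        return Math.PI * Math.Pow(Radius, 2);
    }
}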

For cases where you need to do more than some basic initialization (such as parameter validation), it’s possible to define a primary constructor body by wrapping the statements in a pair of curly braces (of course, more braces) within the type definition.
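
I haven’t experimented with this much yet, but based on my reading of the CTP notes a validation sketch would look roughly like this:

public class Circle(int radius)
{
    {
        // Primary constructor body: runs as part of initialization.
        if (radius < 0)
            throw new ArgumentOutOfRangeException("radius");
    }

    public int Radius { get; } = radius;
}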

As with the auto-implemented property initializers, I really appreciate this feature but wish it would go a bit further. Generally speaking, though, I see this as a positive feature for the language.

Clean Code, Evolved

Bob Martin’s 2008 book, Clean Code, is considered by many to be one of the “must read” books for software developers, and for good reason: The guidelines discussed in the book aim to decrease long-term software maintenance costs by improving code readability. One of the most beautiful aspects of Clean Code is that although it was written with an emphasis on Java development, the guidelines are applicable to software development in general.

Clean Code was written to combat the plague of unmaintainable code that creeps into software projects. You know the kind: intertwined dependencies, useless comments, poor names, long functions, functions with side-effects, and so on. Each of these things makes code difficult to read, but some are more nefarious because they make code fragile and difficult to reason about. The end result is code that is difficult and expensive to maintain or extend.

In March I had the good fortune to speak at Nebraska Code Camp. I was really excited about the way the schedule worked out because I was able to sit in on Cory House’s talk about Clean Code. As Cory walked through the points, I remembered reading the book and pondered its impact on how I write code. Eventually, my thoughts drifted to the climate that necessitated such a book and its continued relevance today. In the days following Cory’s talk, I reread Clean Code and further developed my thoughts on these subjects. (Confession: I skipped the chapters on successive refinement, JUnit internals, and refactoring SerialDate this time around.) What I came to realize is that while the Clean Code guidelines are an important step toward improving code quality, many of them simply identify deficiencies in our tools and describe ways to avoid them. Several of the guidelines gently nudge us toward functional programming, but they stop short of fixing the problems, instead relying on programmer discipline to write cleaner code.

There is no doubt that understanding and embracing the Clean Code guidelines leads to higher code quality, but relying on developer discipline to enforce them isn’t enough. We need to evolve Clean Code.

Revisiting the Using Function

A little over a year ago I wrote about replicating F#’s using function for use in C#. Since I wrote that piece I’ve reconsidered the approach a bit. At the time, my team wasn’t using static code analysis (I know, shame on us) so I didn’t consider that passing the IDisposable instance to the function directly can sometimes cause the static analysis to raise warning CA2000.

To recap the previous post on this subject, here’s the original version of the method:

public static TResult Using<TResource, TResult>(TResource resource, Func<TResource, TResult> action)
    where TResource : IDisposable
{
    using (resource) return action(resource);
}

With this approach, Using requires you to supply an IDisposable instance. When using a factory method such as Image.FromFile as shown next, the warning isn’t raised:

var dimensions =
    IDisposableHelper.Using(
        Image.FromFile(@"C:\Windows\Web\Screen\img100.png"),
        img => new Size(img.Width, img.Height));

Quite often, though, we create instances directly via the new operator. Consider reading the contents of a file with a StreamReader, like this:

var contents =
    IDisposableHelper.Using(
        new StreamReader(@"C:\dummy.txt"),
        reader => reader.ReadToEnd());

Creating an IDisposable instance with the new operator results in the CA2000 warning. We know that we’re disposing the StreamReader but it still fails the static analysis checks. We could suppress the warning but we can easily avoid it altogether by redefining the Using method as follows:

public static TResult Using<TResource, TResult>(Func<TResource> resourceFactory, Func<TResource, TResult> action)
    where TResource : IDisposable
{
    using (var resource = resourceFactory()) return action(resource);
}

Now, instead of accepting an IDisposable instance directly, Using accepts a factory function that returns the IDisposable instance. Then, inside the Using method, we invoke the factory function and assign the result to resource. All that remains is to update the offending code to reflect the signature change:

var contents =
    IDisposableHelper.Using(
        () => new StreamReader(@"C:\dummy.txt"),
        reader => reader.ReadToEnd());

This revised approach gives us another benefit – it defers creating the IDisposable instance until Using is executing and keeps it scoped to the actual using block within the method.