Back to Basics: Overlooking the Obvious

Early Wednesday morning I arrived at the Kansas City Convention Center for KCDC, one of my favorite conferences. I chose the right door to enter the facility because immediately inside was my good friend Heather Downing.

Heather was hard at work putting the finishing touches on part of VML's sponsor booth exhibit – an Alexa skill to retrieve session and speaker information from the KCDC conference API. She already had the device responding to utterances such as “Alexa, ask KCDC when {speaker name} is speaking” and “Alexa, ask KCDC which sessions are about {topic}” but the final task, “Alexa, ask KCDC which sessions are next”, was misbehaving: it wasn't returning the list of sessions for 8:30 on Thursday morning. After fighting with it for some time she was clearly getting frustrated and asked for a rubber duck and a second set of eyes.

The conference's API returned sessions according to which time slot they're in rather than the exact time, so Heather had a list of objects that mapped each slot to its time along with some additional information unrelated to this issue. The method for selecting the time slot looked basically like this (simplified for brevity):

public static SlotDescription GetNextSlot(DateTime start) =>
  SharedValues
    .SlotInformation
    .Where(sv => sv.Value >= start)
    .OrderBy(sv => sv.Value)
    .FirstOrDefault();

The SharedValues type exposes readonly fields that describe the various slots. Since this particular skill was intended for demonstration purposes and the slots weren’t changing, all of the values were hard-coded.

The call to GetNextSlot was equally straightforward:

// searchDate was UtcNow converted to Central time
var nextSlot = GetNextSlot(searchDate);
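For context, the conversion mentioned in the comment probably looked something like this sketch (the actual conversion code wasn't part of the snippet; "Central Standard Time" is the Windows identifier covering US Central time, including daylight saving):

// Hypothetical reconstruction: convert the current UTC time to US Central time.
var centralZone = TimeZoneInfo.FindSystemTimeZoneById("Central Standard Time");
var searchDate = TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, centralZone);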

Since the next defined slot occurred at 8:30 AM (Central) it was bizarre that GetNextSlot was returning null. At first we thought that something was off with the conversion from UTC to Central time but even that should have returned something since the offset is only five hours during daylight saving time and we were testing on Wednesday, not Thursday!

We mucked around with several different permutations before landing on the culprit – the initialization order in SharedValues!

As she had been building out the application, Heather realized she needed the slot times in multiple places so she extracted the values to separate readonly fields that could be referenced independently. The resulting code looked like this:

public static class SharedValues
{
  public static readonly List<SlotDescription> SlotInformation =
    new List<SlotDescription>
    {
      new SlotDescription ("Thursday, 8:30 AM", Slot1),
      new SlotDescription ("Thursday, 9:45 AM", Slot2),
      new SlotDescription ("Thursday, 11:00 AM", Slot3),
      new SlotDescription ("Thursday, 1:00 PM", Slot4),
      new SlotDescription ("Thursday, 2:45 PM", Slot5),
      new SlotDescription ("Thursday, 3:30 PM", Slot6)
    };

  public static readonly DateTime Slot1 =
    new DateTime(2017, 8, 3, 8, 30, 0);

  public static readonly DateTime Slot2 =
    new DateTime(2017, 8, 3, 9, 45, 0);

  public static readonly DateTime Slot3 =
    new DateTime(2017, 8, 3, 11, 0, 0);

  public static readonly DateTime Slot4 =
    new DateTime(2017, 8, 3, 13, 0, 0);

  public static readonly DateTime Slot5 =
    new DateTime(2017, 8, 3, 14, 15, 0);

  public static readonly DateTime Slot6 =
    new DateTime(2017, 8, 3, 15, 30, 0);
}

At first glance everything looks fine but sure enough, static fields are initialized in the textual order in which they're declared, so SlotInformation is initialized first, followed by each of the Slot values. Since each Slot value is a DateTime, and DateTime is a value type, the default value of 01/01/0001 00:00:00 is used when the SlotDescription instances are created!
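To see the behavior in isolation, consider this minimal sketch (class and field names are hypothetical):

public static class InitOrderDemo
{
    // Initialized first: B hasn't run yet, so it still holds default(int).
    public static readonly int A = B + 1;   // A becomes 1, not 43

    // Initialized second, after A has already captured B's default value.
    public static readonly int B = 42;
}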

As soon as we swapped around the initialization order (putting the Slot values before SlotInformation) everything worked exactly as we expected and we were finally able to ask “Alexa, ask KCDC which sessions are next.”

This just goes to show that when experienced developers get caught up in a problem, they can spend way too much time digging into the complex while totally overlooking the obvious. If you'd like to observe this behavior in action, check out the code sample over on repl.it.

C# 6 Verbatim Interpolated String Literals

It's no secret that string interpolation is one of my favorite C# 6 language features because of how well it cleans up that messy composite formatting syntax. A few days ago I was working on some code that contained a verbatim string with a few of the classic composite formatting tokens and I wondered whether string interpolation would work with verbatim strings as well.

A quick glance over the MSDN documentation for string interpolation revealed nothing so I decided to just give it a shot by adding a dollar sign ($) ahead of the existing at (@) sign and filling the holes with the appropriate expressions. Much to my delight this worked beautifully, as shown in the snippet below!

var name = "Dave";
var age = 36;

var sentence = $@"Hello!
My name is {name}.
I'm {age} years old.";

Console.WriteLine(sentence);

I later discovered that even though this feature doesn't seem to be listed in the MSDN documentation (at least anywhere I could find it), it is explicitly called out in the C# 6 draft specification, so hopefully it'll find its way into the rest of the documentation at some point.

Functional C#: Debugging Method Chains

One of the most common questions I get in regard to the Map and Tee extension methods I presented in my recent Pluralsight course is “That's great…but how do I debug these chains?” I get it – debugging a lengthy method chain can seem like a monumental task at first glance but I assure you, it really isn't all that difficult or even much different from what you're accustomed to with more traditional, imperative C# code.

I've found that when debugging method chains I typically already have a good idea where the problem is. Spoiler: it's in the code I wrote. That means I can almost always rule out any chained-in framework or other third-party library methods as the source of the problem. It also means that setting a breakpoint within a chained lambda expression or method is often an adequate first step in isolating the problem. This is especially useful when working with pure, deterministic methods because you can then write a test case around the method in question and already have the breakpoint right where you need it.
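For illustration, here's what those breakpoint targets look like in a chain (the values and methods here are hypothetical):

var summary = order
    .Map(Validate)        // a breakpoint inside the Validate method fires here
    .Map(o => Price(o))   // or break on this lambda body to inspect o directly
    .Map(Describe);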

In some situations though, you want to follow computation through the chain but constantly stepping through the extension methods can be both tedious and distracting, especially when the chained method is outside of your control and won’t be stepped into anyway. Fortunately this is easily resolved with a single attribute.

The System.Diagnostics.DebuggerNonUserCodeAttribute class is intended specifically for this purpose. As MSDN states, this attribute instructs the debugger to step through rather than into the decorated type or member. You can either apply this attribute to individual methods or to the extension class to prevent the methods from disrupting your debugging experience. For my projects I opted to simply suppress all of the extension methods by decorating the class like this:

using System;
using System.Diagnostics;

[DebuggerNonUserCodeAttribute]
public static class FunctionalExtensions
{
    public static TResult Map<TSource, TResult>(
        this TSource @this,
        Func<TSource, TResult> map) => map(@this);

    // -- Snipped --
}

With the attribute applied, you can simply set a breakpoint on the chain and step into it as you normally would. Then, instead of having to walk through each of the extension methods you’ll simply be taken right into the chained methods or lambda expressions.

Enjoy!

Functional C#: Chaining Async Methods

The response to my new Functional Programming with C# course on Pluralsight has been far better than I ever imagined it would be while I was writing it. Thanks, everyone, for your support!

One viewer, John Hoerr, asked an interesting question about how to include async methods within a chain. I have to be honest that I never really thought about it but I can definitely see how it would be useful.

In his hypothetical example John provided the following three async methods:

public async Task<int> F(int x) => await Task.FromResult(x + 1);
public async Task<int> G(int x) => await Task.FromResult(x * 2);
public async Task<int> H(int x) => await Task.FromResult(x + 3);

He wanted to chain these three methods together such that the asynchronous result from one task would be passed as input to the next method. Essentially he wanted the asynchronous version of this:

1
    .Map(H)
    .Map(G)
    .Map(F);

These methods can't be chained together using the Map method I defined in the course because each of them wants an int value rather than a Task<int>. One thing John considered was using the ContinueWith method.

1
    .Map(H)
    .ContinueWith(t => G(t.Result))
    .ContinueWith(t => F(t.Result.Result));

This approach does play well with method chaining because each method returns a task that exposes the ContinueWith method, but it requires working with the tasks directly to get each result and hand it off to the next method. Also, as we chain more tasks together we have to drill through nested results to get to the value we really care about. Instead what we're looking for is a more generalized approach that can be used across methods and at an arbitrary level within the chain.
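Spelling the chain out with the intermediate types makes the nesting problem obvious:

Task<int> first = 1.Map(H);
Task<Task<int>> second = first.ContinueWith(t => G(t.Result));
Task<Task<int>> third = second.ContinueWith(t => F(t.Result.Result));
int result = third.Result.Result;   // already two levels of drilling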

After some more discussion we arrived at the following solution:

public static async Task<TResult> MapAsync<TSource, TResult>(
    this Task<TSource> @this,
    Func<TSource, Task<TResult>> fn) => await fn(await @this);

Rather than working with TSource and TResult directly like the Map method does, MapAsync operates against Task<TResult>. This approach allows us to define the method as async, accept the task returned from one async method, and await the call to the delegate. The method name also gives anyone reading the code a good visual indication that it is intended to be used with asynchronous methods.

With MapAsync now defined we can easily include async methods in a chain like this:

await 1
    .Map(H)
    .MapAsync(G)
    .MapAsync(F);

Here we begin with the synchronous Map call because at this point we have an integer rather than a task. The call to H returns a Task<int> so from there we chain in G and F respectively using the new MapAsync method. Because we're awaiting the whole chain, it's all wrapped up in a nice continuation automatically for us.

This version of the MapAsync method definitely covers the original question but there are two other permutations that could also be useful.

public static async Task<TResult> MapAsync<TSource, TResult>(
    this TSource @this,
    Func<TSource, Task<TResult>> fn) => await fn(@this);

public static async Task<TResult> MapAsync<TSource, TResult>(
    this Task<TSource> @this,
    Func<TSource, TResult> fn) => fn(await @this);

Each of these overloads awaits at a different point depending on its input and output types, but both operate against a Task at some point.
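With the first of these overloads in place, the synchronous Map call is no longer needed to start the chain; for example:

// H starts the chain directly from the raw int via the TSource-based overload,
// then the Task-based overloads carry the result the rest of the way.
var result = await 1
    .MapAsync(H)
    .MapAsync(G)
    .MapAsync(F);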

So there you have it, a relatively painless way to include arbitrary async methods within a method chain.

Thanks, John, for your question and your contributions to this post!

Have fun!

Reshaping Data with R

I was scrolling through my Facebook feed yesterday afternoon and one post stood out among the endless stream of memes. A friend had a data transformation question! She said:

Data handling question! I fairly frequently receive “tall” client data that needs to be flattened to be functional. So for example, Jane Doe may have twelve rows of data, each for a different month of performance, and I need to compress this so that her monthly performance displays on one row only, with columns for each month. Anyone know of a relatively painless way to do this quickly? I’ve been sending files like this to our data guy to flatten, but I’d love to be able to do it myself.

I immediately thought: “Hey! This sounds like a job for R!” Unfortunately, my R knowledge is limited to knowing that it’s a statistical programming language so in the interests of both helping out my friend and getting a little experience with something I’ve wanted to play around with, I set out to see how easy – or difficult – this task would be in R. It turned out to be quite simple.

Naturally I didn't even have the R environment installed on my laptop so I obtained the installer from the R project site. Once it was installed, I was off.

I started by creating a CSV that would simulate the scenario she described. My highly contrived data looked like this:

Name Month Value
Jane Doe January 10
Jane Doe February 11
Jane Doe March 5
Jane Doe April 8
Jane Doe May 10
Jane Doe June 15
Jane Doe July 12
Jane Doe August 8
Jane Doe September 11
Jane Doe October 7
Jane Doe November 9
Jane Doe December 13
John Doe January 5
John Doe February 3
John Doe March 8
John Doe April 7
John Doe May 6
John Doe June 5
John Doe July 5
John Doe August 8
John Doe September 4
John Doe October 2
John Doe November 5
John Doe December 9

With some source data ready it was time to move on to reading the data. It turns out that R has some nice capabilities for reading CSV files so I loaded it as follows:

df <- read.csv(file="C:\\Dev\\Data.csv", header = TRUE)

Here we’re reading the CSV and binding the resulting data frame to df.

Next, I needed to transform the data. After a bit of research I found there are a few ways to do this but using the reshape2 package seemed like the easiest approach. I installed the package and referenced it in my script.

library(reshape2)

The reshape2 package is designed to transform data between wide and tall formats. Its functionality is built around two complementary functions: melt and cast. The melt function transforms wide data to a tall format whereas cast transforms tall data into a wide format. Given that my friend was trying to convert from tall to wide, cast was clearly the one I needed.

The cast function comes in two flavors, dcast and acast, which return a new data frame or a vector/matrix/array, respectively. To (hopefully) keep things simple I opted for the data frame approach and used dcast for this transformation.

To transform the data I simply needed to apply the dcast function to the CSV data. That is done like this:

pivot = dcast(df, Name~Month)

Here, the first argument df is our CSV data. The second argument is a casting formula: the column on the left of the tilde (Name) identifies the rows while the column on the right (Month) supplies the new columns.

Finally I needed to persist the data. Just as with loading, R can easily write a CSV file.

write.csv(pivot, file="C:\\Dev\\Pivot.csv")

I then executed the script and here's the result:

Name April August December February January July June March May November October September
1 Jane Doe 8 8 13 11 10 12 15 5 10 9 7 11
2 John Doe 7 8 9 3 5 5 5 8 6 5 2 4


Although the data was transformed correctly it wasn’t quite what I expected to see. There are two problems here:

  1. The row numbers are included
  2. The months are alphabetized

It turns out that both of these problems are easy to solve. Let’s first handle eliminating the row numbers.

To exclude the row numbers from the resulting CSV we simply need to set the optional row.names parameter to FALSE in the write.csv function call.

write.csv(pivot, file="C:\\Dev\\Pivot.csv", row.names=FALSE)

Fixing the column order isn't quite as simple but it's really not bad either; we just need to specify the order manually by indexing the data frame's columns with a vector built using the c (combine) function.

In the interest of space I used the index approach but it’s also acceptable to use the column names instead.

pivot = dcast(df, Name~Month)[,c(1,6,5,9,2,10,8,7,3,13,12,11,4)]

Putting this all together, the final script now looks like this:

library(reshape2)

df <- read.csv(file="C:\\Dev\\Data.csv", header = TRUE)

pivot = dcast(df, Name~Month)[,c(1,6,5,9,2,10,8,7,3,13,12,11,4)]

write.csv(pivot, file="C:\\Dev\\Pivot.csv", row.names=FALSE)

Run this version and you'll get the following cleaned-up result:

Name January February March April May June July August September October November December
Jane Doe 10 11 5 8 10 15 12 8 11 7 9 13
John Doe 5 3 8 7 6 5 5 8 4 2 5 9

So I ask this of my friends and readers who are more versed in R than I am: is this the preferred way to handle such transformations? If not, how would you approach it?

My 5 Favorite C# 6.0 Features

With C# 6 nearly upon us I thought it would be a good time to revisit some of the upcoming language features. A lot has changed since I last wrote about the proposed new features so let’s take a look at what’s ahead.

I think it's fair to say that I have mixed feelings about this release as far as language features go, especially since many of my favorite proposed features were cut. I was particularly sad to see primary constructors go because of how much waste is involved with explicitly creating a constructor, especially when compared to the same activity in F#. Similarly, declaration expressions would have nearly fixed another of my least favorite things in C# – declaring variables to hold out parameters outside of the block where they're used. Being able to declare those variables inline would have been a welcome improvement. That said, I think there are some nice additions to the language. Here's a rundown of my top five favorite C# 6.0 language features, in no particular order.

Auto-Property Initialization Enhancements

The auto-property initialization enhancements give us a convenient way to define an auto-property and set its initial value without necessarily having to also wire up a constructor. Here’s a trivial example where the Radius property on a Circle class is initialized to zero:

public class Circle
{
    public int Radius { get; set; } = 0;
}

The same syntax is available for auto-properties with private setters like this:

public class Circle
{
    public int Radius { get; private set; } = 0;
}

So we saved a few keystrokes by not needing a constructor. Big deal, right? If this were the end of the story, this feature definitely wouldn’t have made this list but there’s one more variant that I’m really excited about: getter-only auto-properties. Here’s the same class revised to use a getter-only auto-property:

public class Circle
{
    public int Radius { get; } = 0;
}

Syntactically these don't differ much but behind the scenes is a different story. What getter-only auto-properties give us is true immutability and that's why I'm so excited about this feature. When using the private setter version, the class is immutable from the outside but anything inside the class can still change those values. For instance, in the private setter version it would be perfectly valid to have a Resize method that could change Radius. Getter-only auto-properties are different in that the compiler generates a readonly backing field, making it impossible to change the value from anywhere outside of the constructor. This is important because now, when combined with a constructor, we have a fairly convenient mechanism for creating immutable objects.

public class Circle
{
    public int Radius { get; }

    public Circle(int radius)
    {
        Radius = radius;
    }
}
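For contrast, nothing stops code inside the class from mutating a private-setter property; a hypothetical Resize method makes the difference concrete:

public class Circle
{
    public int Radius { get; private set; }

    // Legal with a private setter; with a getter-only auto-property the same
    // assignment outside the constructor fails to compile (CS0200) because
    // the generated backing field is readonly.
    public void Resize(int radius) => Radius = radius;
}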

Now this isn’t quite as terse as F#’s record types but by guiding developers toward building immutable types, it’s definitely a step in the right direction.

Using Static

Using static allows us to access static methods as though they were globally available without specifying the class name. This is largely a convenience feature but when applied sparingly it can not only simplify code but also make the intent more apparent. For instance, a method to find the distance between two points might look like this:

public double FindDistance(Point other)
{
    return Math.Sqrt(Math.Pow(this.X - other.X, 2) + Math.Pow(this.Y - other.Y, 2));
}

Although this is a simple example, the intent of the code is easily lost because the formula is broken up by references to the Math object. To solve this, C# 6 lets us make static members of a type available without the type prefix via new using static directives. Here’s how we’d include System.Math’s static methods:

using static System.Math;

Now that System.Math is imported and its methods are available in the source file, we can remove the references to Math from the remaining code, which leaves us with this:

public double FindDistance(Point other)
{
    return Sqrt(Pow(this.X - other.X, 2) + Pow(this.Y - other.Y, 2));
}

Without the references to Math, the formula becomes a bit clearer.

A side benefit of using static is that we're not limited to static methods – if it's static, we can use it. For example, you could include a using static directive for System.Drawing.Color to avoid having to prefix every reference to a color with the type name.
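Here's a minimal sketch of that idea (assuming a reference to System.Drawing):

using System.Drawing;
using static System.Drawing.Color;

// CornflowerBlue resolves to the static Color.CornflowerBlue property.
var brush = new SolidBrush(CornflowerBlue);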

Expression-Bodied Members

Expression-bodied members easily count as one of my favorite C# 6 features because they elevate the importance of expressions within the traditionally statement-based language by allowing us to supply method, operator, and read-only property bodies with a lambda expression-like syntax. For instance, we could define a ToString method on our Circle class from earlier as follows:

public override string ToString() => $"Radius: {Radius}";

Note that the above snippet uses another new feature: string interpolation. We’ll cover that shortly.
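The same syntax works for read-only properties; for instance, we could expose the circle's area as a computed property (assuming System.Math is imported via using static as shown in the previous section):

public double Area => PI * Radius * Radius;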

At first glance it may appear that this feature is somewhat limited because C#’s statement-based nature automatically reduces the list of things that can serve as the member body. For this reason I was sad to see the semicolon operator removed from C# 6 because it would have added quite a bit of power to expression-bodied members. Unfortunately the semicolon operator really just traded one syntax for an only slightly improved syntax.

If expression-bodied members are so limited why are they on this list? Keep in mind that any expression can be used as the body. As such we can easily extend the power of expression-bodied members by programming in a more functional style.

Consider a method that reads all of the text from a file. By using the Disposable.Using method defined in the linked functional article, we can reduce the body to a single expression as follows:

public static string ReadFile(string fileName) =>
    Disposable.Using(
        () => System.IO.File.OpenText(fileName),
        r => r.ReadToEnd());
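The Disposable.Using helper itself isn't shown in this post; a minimal sketch consistent with the call above would be:

public static class Disposable
{
    // Creates the disposable, applies the function, and guarantees disposal.
    public static TResult Using<TDisposable, TResult>(
        Func<TDisposable> factory,
        Func<TDisposable, TResult> fn)
        where TDisposable : IDisposable
    {
        using (var disposable = factory())
        {
            return fn(disposable);
        }
    }
}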

Without taking advantage of C#’s functional features, such an expression wouldn’t be possible but in doing so we greatly extend the capabilities of this feature. Essentially, whenever the property or method body would be reduced to a return statement by using the described approach, you can use an expression-bodied member instead.

String Interpolation

Ahhh, string interpolation. When I first learned that this feature would be included in C# 6 I had flashbacks to the early days of my career when I was maintaining some Perl CGI scripts. String interpolation was one of those things I always wondered about – why wasn't it part of C#? – but String.Format, while moderately annoying, always worked well enough that its absence wasn't particularly a problem. Thanks to a new string literal syntax, C# 6 lets us define format strings without explicitly calling String.Format by allowing us to include identifiers and expressions within holes in the literal. The compiler detects the interpolated string and handles the formatting by filling each hole with the appropriate value.

To define an interpolated string, simply prefix a string literal with a dollar sign ($). Anything that should be injected into the string is simply included inline in the literal and enclosed in curly braces just as with String.Format. We already saw an example of string interpolation but let’s take another look at the example:

public override string ToString() => $"Radius: {Radius}";

Here, we’re simply returning a string that describes the circle in terms of its radius using string interpolation. String.Format is conspicuously missing and rather than a numeric placeholder we directly reference the Radius property within the string literal. Just as with String.Format, we can also include format and alignment specifiers within the holes. Here’s the ToString method showing the radius to two decimal places:

public override string ToString() => $"Radius: {Radius:0.00}";

One of the things that makes string interpolation so exciting is that we're not limited to simple identifiers; we can also use expressions. For instance, if our ToString method were supposed to show the circle's area instead, we could include the expression directly as follows (note that PI and Pow appear unqualified thanks to the using static directive shown earlier) or even invoke a method:

public override string ToString() => $"Area: {PI * Pow(Radius, 2)}";

The ability to include expressions within interpolated strings is really powerful but, as Bill Wagner recently pointed out, the compiler can get tripped up on some things. Bill notes the conditional operator as one such scenario. When the conditional operator is included in a hole, the colon character confuses the compiler because the colon is used both to signify the else part of the conditional operator and to delimit the format specifier in the hole. If this is something you run into, simply wrap the conditional in parentheses to inform the compiler that everything within the parens is the expression.
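A quick sketch of the workaround:

// Without the parentheses the compiler would treat the colon as the start
// of a format specifier rather than as the else branch of the conditional.
var label = $"The radius is {(Radius > 0 ? "valid" : "invalid")}";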

Null-Conditional Operators

Finally we come to the fifth and final new feature in this list, a feature I consider to be a necessary evil: the null-conditional operators. The null-conditional operators are a convenient way to reduce the number of null checks we have to perform when drilling down into an object's properties or elements by short-circuiting when a null value is encountered. To see why this is useful, consider the following scenario.

Imagine you have an array of objects that represent some type of batch job. These objects each have nullable DateTime properties representing when the job started and completed. If we wanted to determine when a particular job completed we'd not only need to make sure that the item at the selected index isn't null but also that its Completed property isn't null either. Such code might look like this:

DateTime? completed = null;

if(jobs[0] != null)
{
    if(jobs[0].Completed != null)
    {
        completed = jobs[0].Completed;
    }
}

WriteLine($"Completed: {completed ?? DateTime.MaxValue}");

That’s quite a bit of code for something rather trivial and it distracts from the task of getting the completed time for a job. That’s where the null-conditional operators come in. By using the null-conditional operators, we can reduce the above code to a single line:

WriteLine($"Completed: {jobs?[0]?.Completed ?? DateTime.MaxValue}");

This snippet demonstrates both of the null-conditional operators. First is the ? ahead of the indexer: if jobs itself is null, the expression short-circuits and evaluates to null instead of throwing. Next is the ?. operator, which does the same when the element returned by the indexer is null. Finally, we see how the null-conditional operators can be used in conjunction with the null-coalescing operator to collapse the giant if block into a single expression.

So why do I consider this feature a necessary evil? The reason is that I consider null itself to be evil: null references have been called The Billion Dollar Mistake, and Bob Martin discussed the evils of null in Clean Code. In general, nulls should be avoided and dealing with them is a costly nuisance. I think that these null-conditional operators, which are also sometimes collectively referred to as the null-propagation operators, will do just what that name implies – rather than promoting good coding practices where null is avoided, including the null-conditional operators will encourage developers to sloppily pass or return null rather than considering whether null is actually a legitimate value within the context (hint: it's not). Unfortunately, null is an ingrained part of C# so we have to deal with it. As such, the null-conditional operators seem like a fairly elegant way to reduce null's impact while still allowing it to exist.

Wrap-up

There you have it, my five favorite C# 6 language features: auto-property initialization enhancements, using static, expression-bodied members, string interpolation, and the null-conditional operators. I recognize that some popular features such as the nameof operator and exception filters didn’t make the cut. While I definitely see their appeal I think they’re limited to a few isolated use cases rather than serving as more general purpose features and as such I don’t anticipate using them all that often. Did I miss anything? Let me know in the comments.

Creating a Generative Type Provider

In my recently released Pluralsight course, Building F# Type Providers, I show how to build a type provider that uses erased types. To keep things simple I opted not to include any discussion of generative type providers beyond explaining the difference between type erasure and type generation. I thought I might get some negative feedback regarding that omission but I still believe it was the right decision for the course. That said, while the feedback I've received has been quite positive, especially for my first course, I have indeed heard from a few people that they would have liked to see generated types covered as well.

There are a number of existing type providers that use generated types; the most notable, in my opinion, is the SqlEnumProvider from FSharp.Data.SqlClient. That particular type provider generates CLI enums or enum-like types representing key/value pairs stored in the source database.

Although SqlEnumProvider is a great example and is relatively easy to follow, general how-to documentation for generative type providers is hard to come by, to say the least. As such I thought that showing how to write the ID3Provider built in the course as a generative type provider would be a nice addendum to the course material. I clearly won't be covering everything I do in the course here so if you're looking for a deeper understanding of type providers I strongly encourage you to watch it before reading this article.

Building F# Type Providers on Pluralsight!

As I was wrapping up The Book of F# and discussing the foreword with Bryan Hunter, he asked if I'd like to be connected to some of the folks at Pluralsight to discuss the possibility of an F# course. I agreed and a few days later I was on the phone brainstorming course ideas with them.

Of everything we discussed I was really only excited about a few topics enough to think I could put together a full course for them. Naturally the ones I was most excited about were already spoken for so I started trying to think of some other ideas. At that point I sort of fizzled out from seemingly endless distractions like changing jobs, speaking at a variety of events, and so on. Over the course of a few months I’d pretty much forgotten about the discussions. Fortunately for me, Pluralsight hadn’t forgotten and my acquisitions editor emailed me to see what happened.

We soon started talking again and one of the ideas I was originally excited about was now available and I’d been working on a related conference talk so I had the start of an outline. After a few iterations I was ready to start recording my Building F# Type Providers course.

Fast forward to earlier this week when I noticed some blog traffic from an unexpected source – my Pluralsight author profile page! I quickly discovered that my course was live!

If you’re wanting to learn more about one of F#’s most interesting features, I invite you to watch the course where I show a few existing type providers in action before walking through creating a simple type provider for reading the ID3 tag from an MP3 file using the Type Provider Starter Pack.

Functional C#: Fluent Interfaces and Functional Method Chaining

This is adapted from a talk I’ve been refining a bit. I’m pretty happy with it overall but please let me know what you think in the comments.

Update: I went to correct a minor issue in a code sample and WordPress messed up the code formatting. Even after reverting to the previous version I still found issues with escaped quotes and casing changes on some generic type definitions. I’ve tried to fix the problems but I may have missed a few spots. I apologize for any odd formatting issues.

I've been developing software professionally for 15 years or so. Like many of today's enterprise developers much of my career has been spent with object-oriented languages but when I discovered functional programming a few years ago it changed the way I think about code at the most fundamental levels. As such I no longer think about problems in terms of object hierarchies, encapsulation, and associated behavior. Instead I think in terms of independent functions and the data upon which they operate in order to produce the desired result.

January Indy F# Meetup

We’re on a roll! The third consecutive Indy F# Meetup is on Tuesday, January 20th at 7:00 PM. As always, we’ll be meeting at Launch Fishers. Check out the meetup page to register and for logistics information.

When we started the group we decided to alternate the format between dojos and lectures. Since last month was a type provider lecture this month will mark a return to the dojo format. We thought it would be fun to change pace and hone our recursion skills a bit by working through the community-driven fractal forest dojo. I haven’t worked through this one yet myself but I’ve seen lots of beautiful images tweeted by people who have so it should be a great time and experience. I hope you’ll join us!