Software Development

All things programming.

Building F# Type Providers on Pluralsight!

As I was wrapping up The Book of F# and discussing the foreword with Bryan Hunter, he asked if I’d like to be connected with some of the folks at Pluralsight to discuss the possibility of an F# course. I agreed, and a few days later I was on the phone brainstorming course ideas with them.

Of everything we discussed, there were only a few topics I was excited enough about to think I could put together a full course. Naturally, the ones I was most excited about were already spoken for, so I started trying to think of some other ideas. At that point I sort of fizzled out amid seemingly endless distractions like changing jobs, speaking at a variety of events, and so on. Over the course of a few months I’d pretty much forgotten about the discussions. Fortunately for me, Pluralsight hadn’t forgotten, and my acquisitions editor emailed me to see what had happened.

We soon started talking again. One of the ideas I was originally excited about was now available, and since I’d been working on a related conference talk I already had the start of an outline. After a few iterations I was ready to start recording my Building F# Type Providers course.

Fast forward to earlier this week when I noticed some blog traffic from an unexpected source – my Pluralsight author profile page! I quickly discovered that my course was live!

Building F# Type Providers title slide

If you want to learn more about one of F#’s most interesting features, I invite you to watch the course, where I show a few existing type providers in action before walking through creating a simple type provider that reads the ID3 tag from an MP3 file using the Type Provider Starter Pack.

Functional C#: Fluent Interfaces and Functional Method Chaining

This is adapted from a talk I’ve been refining a bit. I’m pretty happy with it overall but please let me know what you think in the comments.

Update: I went to correct a minor issue in a code sample and WordPress messed up the code formatting. Even after reverting to the previous version I still found issues with escaped quotes and casing changes on some generic type definitions. I’ve tried to fix the problems but I may have missed a few spots. I apologize for any odd formatting issues.

I’ve been developing software professionally for 15 years or so. Like many of today’s enterprise developers, much of my career has been spent with object-oriented languages, but when I discovered functional programming a few years ago it changed the way I think about code at the most fundamental levels. As such, I no longer think about problems in terms of object hierarchies, encapsulation, and associated behavior. Instead I think in terms of independent functions and the data upon which they operate in order to produce the desired result.

FileZilla Server and Windows Azure

I was setting up a new virtual machine in Windows Azure today and wanted to host an FTP server. Having spent most of my career isolated inside corporate environments and largely disconnected from server administration this was fairly new ground for me.

I knew going into it that I was going to have to tweak some firewall rules and whatnot but establishing communication was a bit more involved than I initially expected.

The FTP solution I selected was FileZilla Server. It’s a rather robust solution that provides the security I wanted with minimal configuration. Getting the server components installed was effortless as was creating the security groups and users. Once I had everything configured the way I wanted I created the rules to allow traffic to hit ports 21 and 990 on the server through the Windows Firewall with Advanced Security control panel.
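For anyone who prefers the command line, the equivalent inbound rules can be created with netsh (the rule names here are just examples):

netsh advfirewall firewall add rule name="FTP Control" dir=in protocol=tcp localport=21 action=allow
netsh advfirewall firewall add rule name="FTP Implicit TLS" dir=in protocol=tcp localport=990 action=allow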

For my first test I simply tried to FTP to localhost on the server itself. Both accounts I’d configured worked perfectly. Then, to test the firewall rules, I tried connecting from my development workstation but couldn’t get through.

After scratching my head for a bit I remembered seeing endpoint configuration in the Azure portal. I added two endpoints, one for port 21 and one for port 990, and was then able to connect, but the FTP client kept failing to retrieve a directory listing. The log showed that the client was attempting to use passive mode, which requires additional ports. I quickly found the passive mode settings in the FileZilla Server options. From there I was able to specify a custom range which I could then allow through the firewall. The other thing I needed to change was the IPv4-specific setting to force the server to use its public virtual IP address as listed on the VM’s dashboard in the Azure portal.

FileZilla Passive Mode Settings

Just as before, simply adding the firewall rules wasn’t enough to allow communication. I had to add the passive mode ports as endpoints as well. I initially found this to be more than a bit tedious but fortunately the Add-AzureEndpoint PowerShell cmdlet eased some of the pain.
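For reference, the basic usage looks something like the sketch below; the service name, VM name, and port are placeholders, and you’d repeat the Add-AzureEndpoint call (or wrap it in a loop) for each port in your passive range.

Get-AzureVM -ServiceName "MyCloudService" -Name "MyFtpVm" |
    Add-AzureEndpoint -Name "FtpPassive7000" -Protocol tcp -LocalPort 7000 -PublicPort 7000 |
    Update-AzureVM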

Azure FTP Endpoints

Once all the rules and endpoints were in place I was able to successfully connect from my development workstation to the server and get directory listings.


Receiving Webhooks With IIS Express

One of the projects I’m currently working on is using a service that reports various events back to our system via webhooks. Since the features I’m working on aren’t ready for deployment yet I was looking for a decent way to test the integration in my development environment to ensure that I’m not only receiving the correct data but also that I’m handling it properly.

The service’s documentation recommended pointing the webhooks to another service such as RequestBin to inspect the contents. I did mess around with that approach for a bit and although I was certainly able to see the requests in the RequestBin log and push them on to the application with Fiddler, it really didn’t seem like an adequate solution and I was tired so I went to bed.

It turns out that sleeping on it was exactly what I needed. Sometime overnight I subconsciously worked out a better solution; I could open up IIS Express to handle remote connections and configure NAT on my router to forward requests for that port directly to the IIS Express instance. It turns out that getting all this working was actually quite simple.

Allowing Remote Connections

Allowing remote connections to IIS Express requires a little work but it’s pretty straightforward and is outlined in this Stack Overflow post. In short, we need to:

  1. Create an additional IP binding for the IIS Express site to allow traffic from all hosts.
  2. Allow connections to the port from anyone.
  3. Create a firewall rule to allow traffic to the port on the development machine.

Creating an IIS Express Binding

IIS Express sites are managed per-user. To create the IIS Express binding we simply need to create a new entry for the site in the configuration file located at %userprofile%\documents\iisexpress\config\applicationhost.config. In the file locate the site then duplicate the binding, changing the allowed host to *. For example, if the current binding is:

I’ve used port 99999 in these examples for demonstration purposes only. You’ll want to use the port listed in your configuration file.

<binding protocol="http" bindingInformation="*:99999:localhost" />

You’d create a copy and change localhost to * such that it reads like this:

<binding protocol="http" bindingInformation="*:99999:*" />

It’s very important that you leave the original binding in place. Yes, it is redundant to have a binding for all hosts and another for only localhost but Visual Studio uses the localhost binding to initialize IIS Express. If that binding isn’t present Visual Studio will create a duplicate site entry and you’ll likely start seeing errors such as the one pictured below.

URL Binding Failure

Setting Security on the Port

Once you’ve created the IIS Express binding you need to allow connections to the port. This is done by using netsh to add a URL reservation for the new binding. In this case we’ll use netsh http add urlacl to register the address we bound to the IIS Express site and grant permission to everyone.

netsh http add urlacl url=http://*:99999/ user=everyone

Note that “everyone” refers to the Everyone group in Windows. If you’re using a non-English version you’ll need to change that to the localized name for your language.

Creating a Firewall Rule

The final step is allowing traffic to that port through the local firewall. Accomplishing this varies according to which firewall solution you’re using. For Windows Firewall you can control this through the control panel or by executing the following netsh command, which adds an inbound rule to the advanced firewall configuration.

netsh advfirewall firewall add rule name="IISExpressWeb" dir=in protocol=tcp localport=99999 profile=private remoteip=any action=allow

Configuring NAT

Configuring NAT is not something I can really help with in this article because each environment will have its own instructions and restrictions. For me and my home office network it was easy because I simply had to add a custom application that referenced the configured port and host machine in my router’s firewall configuration.

Alternatively, I could have configured the IP Passthrough to route traffic to the development machine but I deemed this to be too much exposure to the outside world and left it with NAT.

Accepting Webhooks

Once I’d configured everything on my network to accept the webhook traffic I went to the external application’s dashboard and registered my computer as a webhook recipient using the WAN IP address I obtained from my router’s status page and the port I bound to IIS Express for the application. I then set a breakpoint in the webhook processing logic, ran the application, made a change in the remote system to initiate sending an event, then watched in amazement as my breakpoint was hit and the watch window showed data received from the remote service.

Mission accomplished.

Jumping on the Band(wagon)

A few months ago, my wife and I considered jumping into the quantified self scene by purchasing a pair of Fitbit devices. My biggest problem was that I didn’t really care that much about tracking my activity. Sure, being able to monitor my sleep patterns and that type of thing would be nice to know but if I was going to wear something all the time, I wanted it to do more – I wanted it to tie in to my calendar and other notifications. That’s where the Microsoft Band comes in.

My Microsoft Band

The Band was released in late October without much fanfare. I hadn’t even heard of it until it was released and I saw some buzz on Twitter. Even then I had no idea what it was and assumed it was a game or something. A bit later I decided to actually look it up and discovered that it was a wearable device that met my expectations perfectly. I wanted one.

Despite there being a Microsoft Store just a few miles from my house, I figured that since I was about to leave for the MVP Summit I’d just grab one at the Bellevue store. Little did I know that the devices had sold out everywhere and it wouldn’t be until January that I could get one. I’ve now had my Band for about a week and have given it quite a workout. In all, I think that despite a few flaws the device is quite impressive, especially for its first generation.

Extending the Phone

For me, the Band’s primary purpose is to be an extension of my phone and for the most part it plays its role quite well. I love that the Band vibrates to notify me of upcoming appointments, incoming calls (including Skype), text messages, emails, and social media messages. Before getting the Band I was constantly distracted by my phone. Glancing at the phone whenever it buzzed often required stopping whatever I was doing to fish it out of my pocket only to immediately dismiss the notification. With the Band, I can still stay up-to-date with all my notifications but since I wear it face-up on my right wrist seeing the notification typically requires only diverting my gaze. This also allows me to leave my phone outside of my immediate vicinity and continue receiving the notifications. In fact, as I’m typing this, my phone is charging in another room but I’m still getting the notifications on my wrist.

What I find really nice is that the Band doesn’t merely display notifications. By tapping on certain notifications, emails and text messages in particular, the device will display the first few lines of the message. This is particularly helpful for determining whether certain messages require immediate attention or can be deferred until later. Additionally, for text messages you can configure up to four predefined responses which you can select as a reply. It would be nice to have a few more slots but four seems like a decent starting point.

While the Band’s initial release is an adequate starting point in that it satisfies my basic requirements for such a device, I think there’s still plenty of room for improvement. The number one thing I want a future update to introduce is some additional actions for managing email. Currently, all the Band can do here is display part of a message. I’d like to be able to change the read/unread state, toggle a flag, or delete the message right from the Band. Including these options would go a long way toward further reducing my dependency on my phone. (I’ve entered a suggestion for this on UserVoice. Please give it some votes if you agree!)

Since I primarily use the Band for notifications, it would be nice if the “lock” screen (in quotes because the Band isn’t truly locked) would display some summary information such as the number of unread emails. I imagine this would be a configurable option and possibly only visible in watch mode (something else I love) but it would certainly make the information more accessible than unlocking the device and scrolling through the tiles. On a related note, reading a notification should clear the indicators on the tiles so it’s not necessary to dismiss the notification then go to the individual apps to remove the indicator. (Vote for this suggestion on UserVoice.)

The final major thing I’d really like to see improved here is the alarm system. I use my phone’s alarm feature extensively, with different alarms set for different times and different days. I’d really like to see those better integrated with the Band so I don’t have to set the same alarm in two places.

Cortana

I'm Cortana

I love Cortana. I like Cortana so much that I have a Cortana t-shirt and even a figurine on my desk at work. I use Cortana regularly for creating appointments and reminders and checking headlines. I particularly love the context-based reminders that pop up when I talk to someone or arrive somewhere. But you know what? Before the Band, most of my interactions with Cortana were text-based. I’d fire up the Cortana app on my phone and type my request. Yes, I could have used the voice features but doing so always felt awkward on the phone.

With the Band I’ve found that I’m using Cortana not only more frequently but more effectively, too. For instance, when I leave the office at the end of the day I typically call my wife to let her know I’m on my way home. Even using Cortana’s voice commands on the phone required getting it out of my pocket, waking it up, holding the search button, then telling Cortana to call her. Now all I have to do is hold a button on my wrist, say “call my wife,” and the next thing I know, my car’s stereo has switched to the call.

Health & Fitness

Sleep Tracker

Microsoft primarily markets the Band as a fitness device using the tagline “Live healthier and be more productive” but for me, this is a tertiary concern. As such I haven’t really spent much time messing around with these features beyond the sleep tracker. In fact, I’ve never opened the run tracker and I’d be amazed if I ever decide to try out one of the workout plans.

The sleep tracker was clearly the part of this feature set I was most interested in. The first night I tried it I fully expected it to tell me I wasn’t sleeping effectively. When I awoke I was surprised to see that it determined my sleep was approximately 92% efficient. I was excited to share this figure with my wife, who typically tells me I don’t sleep enough, but it turned out that she had 94% efficiency, so my excitement was short-lived.

Beyond the sleep tracker I occasionally glance at the step counter, calorie meter, and heart rate monitor. I can’t say I’ve ever manually counted my steps for an extended period of time or considered the other metrics so I can’t attest to their accuracy but they’re somewhat interesting nonetheless. Perhaps if I get the sudden urge to care I’ll pay a bit more attention to them.

Starbucks

I’m not normally much of a Starbucks fan but the Starbucks app on the Band makes it so convenient that it’s hard to turn down. When I purchased the Band I received a $5 gift card which I registered on their site, added some more money to, and entered into the Band app. Once the app is connected to a card, opening it displays a PDF417 bar code that represents the configured card. To use it, just display the code, swipe your wrist past the scanner, and watch the barista’s jaw drop in wonder at this new technology.

Despite its simplicity, my anecdotal experience tells me that this is something Microsoft should leverage more when promoting the device. It would be nice if the app could tie into the Starbucks system to show the remaining balance, but I can see why that feature isn’t there at this time. I really think that expanding this feature beyond Starbucks to include gift and loyalty cards from other vendors would be a huge selling point.

Life Hack: I’ve been having a bit of extra fun with this Band feature. Now that I have the convenience of a reloadable gift card on my wrist, all those Bing Rewards points I’ve accumulated over the years but never cashed in can now go toward $5 Starbucks gift cards. I can then transfer the balance from those cards to the one tied to my Band! I’ve already transferred three cards from Bing Rewards to keep the coffee coming.

Hardware & Comfort

In addition to the haptic vibration motor and Bluetooth 4.0 connectivity, the Band’s spec sheet lists ten sensors:

  • Optical heart rate sensor
  • 3-axis accelerometer
  • Gyrometer
  • GPS
  • Microphone
  • Ambient light sensor
  • Skin temperature sensor
  • UV sensor
  • Capacitive sensor
  • Galvanic skin response sensor

The screen is a 320 x 106 pixel capacitive 1.4″ TFT color display and seems adequate for such a small device. My only real complaint in this area is that vertical scrolling on messages can be a bit cumbersome if you don’t hit the correct part of the screen. I’m gradually learning where the sweet spot is and have noticed this being less of a problem as I adjust.

The Band is intended to be worn constantly (except while charging, which takes about two hours). I haven’t worn a watch in years, so getting used to having something on my wrist all the time has taken some adjustment. At first I had some skin irritation under the charging connector but that seems to have subsided and, despite the occasional flare-up, I hardly notice the device unless it’s notifying me of something.

I’ve found the thermal plastic elastomer material used in the Band’s construction to be a bit stiff, which makes putting on and removing the device somewhat clumsy, but since it’s intended for constant wear this is hardly a concern. What worries me more is the construction around the battery compartments and the connector.

Band Connector

To ensure the advertised two days of operation (I’m noticing a bit less) the Band has two batteries – one in each strap. Prior to purchasing my Band I looked at plenty of display models and noticed that the straps were actually pulling away from the battery compartments. It’s likely that this was a symptom of overuse and people not realizing that most of the strap isn’t flexible. So far I haven’t seen any signs of this problem on my Band or my wife’s but it’s still a concern.

On the other hand, the connector at the end of each strap seems rather flimsy to me. It’s a clip-based system with two tiny prongs that latch inside a track. So far they’ve seemed OK, but as I was getting used to wearing the Band I did catch it on something and thought it might snap one of them. I’ve also heard a few reports of the connector weakening and giving out after about a month.

One thing that has definitely been a bit of a concern is that the device had been losing its Bluetooth connection with my phone on a somewhat regular basis. This generally required me to disable and re-enable Bluetooth on both my phone and the Band. I did a little research to see if anyone else was experiencing the problem and it seemed that others using the Band with a Lumia 1020 were finding that they had the problem if there were too many apps running in the background on the phone. The Battery Saver app didn’t show much of anything out of the ordinary but that got me thinking about one of the games I’d been playing. Since I stopped playing Bejeweled Live on the phone I haven’t seen the problem so I’m hoping that’s the culprit.

Overall Impressions

Now that I’ve lived with the Band and tried it out in a variety of conditions, I have to say I’m quite impressed with this first generation product despite a few rough spots like the message management capabilities or potential hardware issues. Having something that unobtrusively alerts me of incoming messages has greatly reduced my direct dependence upon my phone.

January Indy F# Meetup

We’re on a roll! The third consecutive Indy F# Meetup is on Tuesday, January 20th at 7:00 PM. As always, we’ll be meeting at Launch Fishers. Check out the meetup page to register and for logistics information.

When we started the group we decided to alternate the format between dojos and lectures. Since last month was a type provider lecture this month will mark a return to the dojo format. We thought it would be fun to change pace and hone our recursion skills a bit by working through the community-driven fractal forest dojo. I haven’t worked through this one yet myself but I’ve seen lots of beautiful images tweeted by people who have so it should be a great time and experience. I hope you’ll join us!

Extending F# Pipelines with a Tee Function

In functional programming we strive to minimize side-effects but not only are some side-effects desirable, in the largely object-oriented world in which many of us still operate such side-effects are often unavoidable. There are plenty of APIs that rely on side-effects particularly when it comes to initializing types or properties. One example that immediately comes to mind is building up an HttpResponseMessage in Web API 2. Consider the following snippet which creates a response containing the contents of a stream and sets some relevant header values:

member __.GetFile() =
  // ... SNIP ...
  let response = new HttpResponseMessage(HttpStatusCode.OK, Content = new StreamContent(stream))
  response.Content.Headers.ContentType <- MediaTypeHeaderValue("application/octet-stream")
  response.Content.Headers.ContentLength <- Nullable.op_Implicit stream.Length
  response.Content.Headers.ContentDisposition <- new ContentDispositionHeaderValue("attachment", FileName = "test.pdf")
  response

This code is straightforward but highly imperative. Like side-effects, imperative code isn’t necessarily a bad thing, but it would be nice to tame it a bit by initializing the header values as part of a pipeline while still returning the response message. Doing so isn’t hard: just create the HttpResponseMessage instance via the constructor and pipe it to a function that does the initialization before returning, right?

member __.GetFile() =
  // ... SNIP ...
  new HttpResponseMessage(HttpStatusCode.OK, Content = new StreamContent(stream))
  |> (fun response -> response.Content.Headers.ContentType <- MediaTypeHeaderValue("application/octet-stream")
                      response.Content.Headers.ContentLength <- Nullable.op_Implicit stream.Length
                      response.Content.Headers.ContentDisposition <- new ContentDispositionHeaderValue("attachment", FileName = "test.pdf")
                      response)

This is a perfectly acceptable approach and is something I’ve definitely done plenty of times but all it has achieved is moving the explicit return into the function. After doing this a few times, you might start to think there has to be a way to standardize this pattern and you’d be right.

Over the holidays I finally found some time to relax and although I spent a great deal of time glued to Assassin’s Creed: Unity on my Xbox One I managed to read a few more articles than usual. Something interesting I noticed was a theme across several of the code samples: they were using a tee function within a pipeline. The tee function isn’t part of the core F# libraries and I couldn’t recall having encountered it before so I started doing some background investigation.

One of the first sites I found that mentioned the function in the context of F# was Scott Wlaschin’s excellent Railway Oriented Programming article which I’d read previously but clearly not thoroughly enough. In the article Scott says he named the function after a Unix command of the same name. The Unix command, which is named after plumbing tee fittings, splits a pipeline such that input flows to both standard output and a file. This is certainly useful for logging in shell scripts but its possibilities are much more interesting in an F# pipeline.
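For instance, a typical shell invocation (the file name is arbitrary) writes a copy of the stream to a file while still passing it along to the next command:

ls -l | tee listing.txt | wc -l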

The tee function is a simple function which essentially says “given a value, apply a function to it, ignore the result, then return the original value.” Its basic definition is as follows:

let inline tee fn x = x |> fn |> ignore; x
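To get a feel for its behavior, here’s a quick sketch in which the intermediate list is printed as a side-effect while the value itself continues down the pipeline untouched:

[1 .. 5]
|> List.map ((*) 2)
|> tee (printfn "Doubled: %A") // prints Doubled: [2; 4; 6; 8; 10]
|> List.sum                    // the doubled list flows through, so this returns 30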

By introducing the tee function into the pipelined version of the GetFile method we can remove the explicit return:

member __.GetFile() =
  // ... SNIP ...
  new HttpResponseMessage(HttpStatusCode.OK, Content = new StreamContent(stream))
  |> tee (fun response -> response.Content.Headers.ContentType <- MediaTypeHeaderValue("application/octet-stream")
                          response.Content.Headers.ContentLength <- Nullable.op_Implicit stream.Length
                          response.Content.Headers.ContentDisposition <- new ContentDispositionHeaderValue("attachment", FileName = "test.pdf"))

Now the pipeline looks more like what we might expect since we’re no longer explicitly returning the response from the lambda expression.

Depending on your style preferences, injecting the tee function explicitly into the pipeline as you would a Seq.filter or other such function might bother you. To me, the tee function is a perfect candidate for a custom operator so let’s define one.

let inline ( |>! ) x fn = tee fn x

Here we’ve defined |>! as the tee operator (this is the same symbol that WebSharper uses). Notice how the parameter order is reversed from the tee function. This is because the piped value arrives as the operator’s left operand and the function as its right, so we’re no longer relying on partial application to supply the function first. Now we can eliminate the explicit reference to the function, making the operation look like a natural part of the F# language.

member __.GetFile() =
  // ... SNIP ...
  new HttpResponseMessage(HttpStatusCode.OK, Content = new StreamContent(stream))
  |>! (fun response -> response.Content.Headers.ContentType <- MediaTypeHeaderValue("application/octet-stream")
                       response.Content.Headers.ContentLength <- Nullable.op_Implicit stream.Length
                       response.Content.Headers.ContentDisposition <- new ContentDispositionHeaderValue("attachment", FileName = "test.pdf"))

Since the tee function/operator is intended to allow side-effects within a pipeline it is ideal for adding logging or other diagnostics into a pipeline (as was the intent in the original Unix command). For instance, to write out a message as each header value is set, we can simply split the tee’d function above into separate functions, inserting a tee’d logging function in between:

member __.GetFile() =
  // ... SNIP ...
  new HttpResponseMessage(HttpStatusCode.OK, Content = new StreamContent(stream))
  |>! (fun _ -> Debug.WriteLine "Created response")
  |>! (fun r -> r.Content.Headers.ContentType <- MediaTypeHeaderValue("application/octet-stream"))
  |>! (fun r -> Debug.WriteLine("Set content type: {0}",
                                [| box r.Content.Headers.ContentType.MediaType |]))
  |>! (fun r -> r.Content.Headers.ContentLength <- Nullable.op_Implicit stream.Length)
  |>! (fun r -> Debug.WriteLine("Set content length: {0}",
                                [| box r.Content.Headers.ContentLength.Value |]))
  |>! (fun r -> r.Content.Headers.ContentDisposition <- new ContentDispositionHeaderValue("attachment", FileName = "test.pdf"))
  |>! (fun r -> Debug.WriteLine("Set content disposition: {0}",
                                [| box r.Content.Headers.ContentDisposition.DispositionType |]))

By introducing the tee function and operator you give yourself another tool for taming the imperative code and side-effects that tend to pop up in software projects of any complexity.

2014 in Review

The WordPress.com stats helper monkeys prepared a 2014 annual report for this blog.

Here’s an excerpt:

Madison Square Garden can seat 20,000 people for a concert. This blog was viewed about 63,000 times in 2014. If it were a concert at Madison Square Garden, it would take about 3 sold-out performances for that many people to see it.

Click here to see the complete report.

Busy Week Ahead

[12/15/2014 Update] Due to time constraints, FunScript has been dropped from the Indy F# meeting. If you were really looking forward to an introduction to FunScript, stay tuned – we’ll be coming back to it in a few months.

This is a busy week for me on the community front with talks at multiple Indianapolis user groups. If either of these topics interest you I hope you’ll register and join us.

Indy F#

Double Feature: Type Providers and FunScript
Type Providers

Tuesday, December 16, 7:00 PM
Launch Fishers (info and registration)

On Tuesday I’ll be kicking off an Indy F# double feature by talking about Type Providers. We’ll begin with a short tour of several existing type providers and seeing how they make accessing data virtually effortless. With a good taste of what type providers can do we’ll then look behind the curtain to see how they work by walking through creating a custom type provider that reads ID3 tags from MP3 files.

Brad Pillow will follow with an introduction to building single-page applications with FunScript.

Indy Software Artisans

TypeScript: Bringing Sanity to JavaScript
Thursday, December 18, 5:30 PM
SEP (info and registration)

On Thursday I’ll change gears from F# to TypeScript. If writing JavaScript frustrates you or you just want to be more productive when developing browser-based applications you’ll definitely want to check out TypeScript. This session is not only a tour of TypeScript’s language features but also highlights the resulting JavaScript code. To help showcase how TypeScript can fit into your new or existing projects, the demo application is an AngularJS and Bootstrap application driven entirely by TypeScript.

C# 6.0 – String Interpolation

[7/30/2015] This article was written against a pre-release version of C# 6.0. Be sure to check out the list of my five favorite C# 6.0 features for content written against the release!

I debated whether I should write about C#’s upcoming string interpolation feature yet. On one hand it’s an interesting feature that I’m looking forward to. On the other hand, it has already been announced that the feature is going to change from its implementation in the current preview. With that in mind I decided that it’s interesting enough to go ahead and write about using the current syntax while highlighting how it will change, much as the feature description document does.

When I first heard that string interpolation was coming to C# I immediately experienced flashbacks to the very early days of my career when I was working with some Perl scripts. I really hated working with the language but something that always stuck with me and I missed when jumping to other languages was its string interpolation feature.

At a glance, Perl’s string interpolation feature let us embed variable names inside string literals and the compiler would handle the details of replacing the variable name with the value. My Perl is rusty to say the least but a simple example would essentially look like this:

my $name = "Dave";
print "My name is $name";

Upon execution, the script would write out the following text:

My name is Dave

Side note: I think this is the first time Perl has appeared on this blog. Hopefully it’ll be the last!

Perl’s implementation is more advanced than I’ve shown in this example but it clearly shows the usefulness of the feature. When .NET finally came along and I learned about String.Format I had hopes that it could evolve into something like the Perl feature described above. String.Format is certainly a useful method but it can quickly become a maintenance headache.

Traditional format strings have a number of problems, each stemming from the index-based hole approach. First, each value must be supplied in the order that corresponds to its index, which isn’t necessarily the order the values appear in the string. Next, as the number of holes increases, it can be difficult to discern what each hole represents. This isn’t normally a problem for strings with only a few holes, but consider the nightmare of keeping indices straight on a format string with more than 50 holes like I once encountered. Finally, String.Format validates only that enough values were supplied to fill each of the holes; if more values were provided than there are holes, there’s not even a compiler warning. Combine this with one of those 57-hole strings and good luck finding which indices are off or which values should be removed.
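As a contrived sketch of the problem (the values are made up for illustration), the arguments below are swapped and one is extra, yet this compiles and runs without complaint:

// {0} is meant to be the name and {1} the city, but nothing enforces that
var message = String.Format("{0} lives in {1}.",
                            "Indianapolis",  // intended for {1}
                            "Dave",          // intended for {0}
                            42);             // extra value, silently ignored

Console.WriteLine(message);  // prints "Indianapolis lives in Dave."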

C#’s string interpolation aims to fix each of the aforementioned problems. The current implementation uses a slightly clunky version of the traditional format string syntax in that each hole must be prefixed with a backslash. Here’s how the previous example would be written in C# 6.0 using the syntax that’s in the current preview:

var name = "Dave";
WriteLine("My name is \{name}");

Just as in the Perl example, the compiler will resolve the name and fill the hole with the appropriate value. What’s more, the compiler also verifies that each name exists in the current context and flags anything it can’t resolve as an error.

Per the upcoming features document, this syntax will be changed to something a bit friendlier. Rather than prefixing each hole with a backslash, the string will be identified as an interpolated string by prefixing it with a dollar sign like this:

var name = "Dave";
WriteLine($"My name is {name}");

In this trivial example the net effect on the code is moving and replacing a single character but it’s easy to imagine more complex interpolated strings becoming significantly shorter. (There will also be a FormattedString class added to the System.Runtime.CompilerServices namespace to facilitate custom formatting via the IFormattable interface but I won’t cover that in this article).

That interpolated strings (in either form) closely resemble traditional format strings is not entirely coincidental because ultimately, each interpolated string is syntactic sugar for invoking String.Format. Essentially, the compiler replaces each of the named holes with indexed holes and constructs the value array from the provided names. The benefit of this is that anything you can do with traditional format strings, such as alignment and format specifiers, is also possible with interpolated strings. For instance, we could easily represent a date in ISO 8601 format as follows:

"Current Date and Time (UTC): \{DateTime.UtcNow:o}"

So that’s C#’s string interpolation feature in a nutshell and I’m pretty excited about the direction it’s going because it’ll gradually clean up a lot of code. Since the feature is still under development there’s an active discussion in progress over on the Roslyn site. If you’re interested in seeing some of the thought process behind where this feature is going I encourage you to check it out.