Several months ago my wife and I decided that our house was too big for us and it was time to downsize. Moving is a pain in its own right, but when you’re going from a 2,800 square foot space to a 1,900 square foot space you start thinking a bit more about what you can get rid of. We still had lots of boxes tucked away in various closets around the old house, but I didn’t want to just start tossing them into the recycle bin without seeing what was inside. Little did I remember what treasures were waiting for me. What follows is a small sampling of the more entertaining finds. Enjoy!
Google Apps and Mobile Phones
I’ve been using Google Apps for my personal email and calendar solution for a little under a year. With a little DNS magic in my WordPress configuration I get to use Gmail with my davefancher.com domain and host my blog here at WordPress. I’ve been really happy with this setup since I started using it. Recently the Google Apps service got even better when Google expanded it to include most of their other services, like Reader and YouTube. One of the things that change brought, though, was an account migration. Today I encountered my first problem, and it was a direct result of the service enhancement.
This evening I went to look at my inbox on my phone (Samsung Focus) and was presented with a not-so-nice message about using a Google Apps account that isn’t configured to work with mobile devices. The error code was 85010020. The instructions included in the message weren’t particularly helpful, but after a bit of searching I ran across a post that, while not particularly clear itself, at least provided some direction. In case anyone else runs into this I wanted to provide a bit more detail and hopefully reduce the pain of hunting down the settings.
As the post I mentioned says, there are two things that need to be done to allow mobile devices to sync:
- The Google Sync service must be selected for the domain
- Google Sync must be enabled for the domain
To select the Google Sync service for the domain:

- Log in to your Google Apps control panel. This should be http://www.google.com/a/
- Click the “Organization & users” tab
- Click the “Services” link
- Locate Google Sync in the services list
- Ensure that Google Sync is “On”
- If Google Sync was off, click the “Save changes” button that appears at the bottom of the page

To enable Google Sync for the domain:

- Click the “Settings” tab
- Select the “Mobile” option from the list on the left
- Check the “Enable Google Sync” box
- Click the “Save changes” button that appears at the bottom of the page
I’m Kinected
I’ve been pretty excited about Kinect (sorry, I still like the old name, Project Natal) since I first heard about it. I’ve been watching demo videos for months and even the early ones looked promising. Deep down though I’ve feared that it could turn out to be as bad as the Sega Activator. Yesterday Microsoft released Kinect and now we can all see it in action.
I thought about pre-ordering one but ultimately passed on it. Esther and I decided it would make a nice family holiday present. Let’s just say that Christmas came a bit early this year. This morning a friend mentioned on Twitter that he’d found one at Walmart. He inspired me to take a chance and see if the Target down the road from my house would have one available. There were two on the shelf so I bought one along with a copy of Kinectimals to try to keep Nadia (my one-year-old) entertained. After a long day at work we finally got to try it out. So how’d Microsoft do?
Initial setup was REALLY easy. I have one of the new Xbox 360 S consoles so all I needed to do was connect the device to the Kinect port on the back of the console. There’s also an included power adapter to connect it to a USB port on the older consoles. Once the device was connected and the console turned on some drivers were installed and it automatically took me through a configuration/tutorial.
Navigating through menus and selecting items typically involves holding some position for a few seconds. Although the delay seemed like it could be a bit shorter it was never terribly annoying. What is annoying though is that Kinect isn’t as deeply integrated as it should be. Instead of controlling the Xbox dashboard directly there’s this “Kinect Hub” that’s accessed either by waving at the device or with a voice command. Both methods have worked reliably and about equally well. Once inside the hub there’s a limited subset of actions that can be taken. We can play whatever is in the tray, watch ESPN, listen to last.fm, edit our avatars, but not much more.
Voice commands work really, really well. In some cases they’re easier than gestures. The biggest problem with the voice commands is that they don’t go deep enough. Aside from saying “Xbox………Kinect” to access the Kinect Hub they only work at the top level of the hub (with a few exceptions like last.fm). Once you enter a section you’re forced to use gestures…even with videos. That brings me to my biggest complaint about the system.
The only thing I’ve run into that’s truly aggravating, maddening even, is that there are no voice controls for most video playback. Using voice control for video playback was one of the biggest selling points for me and was featured pretty prominently in some of the demo videos. Given how accurate the voice recognition system is, it’s really discouraging that this feature is missing, and I really hope there’s another console update soon to address it. I also wish that the voice command for opening the tray was available when there’s a disc in there. It seems odd that it’s only enabled when the tray is empty. Maybe I’d like the tray to be open when I get to the console.
As far as the gaming experience goes I’ve only played Kinect Adventures and Kinectimals. Both are very Wii-esque but they’re far more engaging and enjoyable.
Esther and I both got several hours of entertainment from Kinect Adventures. Despite the Wii-like nature of the mini-games the game play experience itself is completely different and unique. With Kinect we’re not limited to waving a wand at the TV. Instead we get a full-body workout where we swing our arms and legs around, walk, jump, and stretch. With Kinect there’s no worry about throwing a remote through the television or a balance board complaining because you pushed too hard.
Kinectimals is incredibly cute. Many scenes were met with an “Awwwwww” from Esther and/or happy shrieks from Nadia, who really seemed to enjoy watching. Although Kinectimals does engage the full body through actions like jumping and spinning, most of the time (so far) has really just been spent trying to “throw” random objects at other random objects while a tiger kitten runs around. The throwing mechanics didn’t seem particularly accurate but I managed to adjust to them pretty well. I can’t say that it kept my attention all that long, but I’d probably enjoy it a bit more if I were a five-year-old girl, which seems to be who it’s really targeting.
Overall I think Microsoft really hit the mark with Kinect. While it does have some problems, the majority of them seem addressable (and I hope Microsoft addresses them soon). At this point it’s certainly not ready for every type of game. I struggle to see how some genres would work with Kinect (shooters in particular – PlayStation Move and the Wii seem much more suitable for these), but if you’re looking for the types of titles currently available it’s definitely a worthy addition. I see plenty of room for real-time strategy and fighting games in the future, and it’ll be fun to see how this product evolves.
Moving to Live Sync
While I was at my in-laws’ house over the weekend I wanted to do some work on the PC I just upgraded. I’ve been using Live Mesh for quite a while and have been happy with it overall. When I went to the Live Mesh site I saw a note telling me that Live Mesh was being replaced by Live Sync. Great, time to migrate…
Tonight I downloaded the installer package for Windows Live to install on my two primary systems. On both systems I deselected most of the options since I really only wanted Live Writer and Live Sync. When the installer reached the Live Sync portion it notified me that Live Mesh would be removed. The install continued without error and Live Sync started without a problem. I activated remote access for both systems then tried to establish a connection. That’s where the problems started.
Every time I tried to establish a connection it would fail. I found nothing in the event logs and disabling the firewall didn’t help either. After a bit of hunting I ran across a forum post (sorry, I lost the link doing the reboot shuffle) that indicated that Live Mesh might not have actually been uninstalled. I dug around in Program Files (x86) a bit and sure enough, the Live Mesh folder was still there as were all of its contents.
I uninstalled Live Sync from both systems and reinstalled Live Mesh since there was no longer an uninstall option for it in the Programs and Features Control Panel. On one system I had to go so far as to disable UAC to reinstall Live Mesh due to an error stating “Product does not support running under an elevated account.” Once Live Mesh was “reinstalled” I was able to uninstall it through the Control Panel. A sanity check of Program Files (x86) showed that Live Mesh had actually been removed this time.
With Live Mesh finally gone I reinstalled Live Sync on both systems and enabled remote access. I tried testing the remote desktop connection again and it worked like a charm. I only have one more system to do this on but the lesson has been learned: remove Live Mesh first!
TFS2010: Shelving
Edit (8/31/2010): The content of this post has been incorporated into my more comprehensive Everyday TFS post. If you’re looking for a general guide to being more productive with TFS on a day-to-day basis you may consider starting there instead.
A nice feature of TFS is that it allows developers to put aside, or shelve, a set of changes without actually committing them. This can be useful when you need to revert some changes so they don’t conflict with another change or when you need to transfer some code to another developer or workstation. Like so many things in TFS the shelve feature can be useful but is hindered by poor tool support. Hopefully the tips presented here can reduce some of the headaches associated with the feature and help you use it to its full potential.
Microsoft made it really easy to create a shelveset. The Shelve dialog is virtually identical to the Check-in dialog so I won’t go into detail about its operation. Shelvesets can be created through any of the following methods:
- File -> Source Control -> Shelve Pending Changes…
- From the Pending Changes window (View -> Other Windows -> Pending Changes) switch to the Source Files panel and click the Shelve button.
- Right-click on a file or folder in Solution Explorer and select Shelve Pending Changes…
- Right-click on a file or folder in Source Control Explorer and select Shelve Pending Changes…
Although Microsoft made it easy to create shelvesets, they really fell short on retrieving them. Unless you’ve been particularly observant when using TFS you’ll probably begin by hunting for an elusive unshelve button. Unlike shelveset creation, which has access points in places we use regularly, there are only two places to go (that I know of) for retrieving one, and they’re both kind of buried.
- File -> Source Control -> Unshelve Pending Changes…
- From the Pending Changes window (View -> Other Windows -> Pending Changes) switch to the Source Files panel and click the Unshelve button.
The unshelve dialog lists all of the shelvesets created by the user listed in the Owner name field. By default the owner name is set to the current user but replacing the name with another user name will allow finding the shelvesets created by that user. Unfortunately there is no way to search for user names so you’ll have to know the name before opening the dialog.
After locating the desired shelveset you can examine its contents through the details button, delete it, or unshelve it. The unshelve command doesn’t really play nice with files that have local changes. In fact, if you try to unshelve a file that has changed locally you’ll probably get an error dialog talking about a file having an incompatible local change. Luckily there’s a work-around that, like so many other things in TFS, involves TFS Power Tools.
- Open a Visual Studio command-line
- Navigate to the appropriate workspace
- Enter the command tfpt unshelve
- Locate the shelveset to unshelve
- Select the unshelve option – a second dialog box will open listing any conflicts needing resolution
[Review] JavaScript: The Good Parts
As I mentioned in my Working With JavaScript post I’ve started on a new project that’s going to be pretty heavy on JavaScript. Since I’ve been away from JavaScript for so long I wanted a language refresher but didn’t want to start from scratch. What I needed was a good book to reactivate those long neglected neurons that store all of my knowledge of the language. I’ve heard good things about JavaScript: The Good Parts by Douglas Crockford since it was published in 2008 and it was well reviewed on Amazon so I thought that would be a good place to start.
After giving the book a fairly thorough read (I skipped the chapter on Regular Expressions) I have to say that I was disappointed. Now don’t get me wrong, the book introduced me to a few patterns I hadn’t encountered or thought of before. It also helped me accomplish my goal of getting reacquainted with JavaScript and reminded me of a few things like the === and !== operators.
I don’t mean to detract from the author’s knowledge of JavaScript. To the contrary, Mr. Crockford is widely regarded as one of the world’s foremost authorities on JavaScript. Nor do I mean to suggest that the book was entirely bad, because it really does have plenty of good information. Perhaps it’s just a testament to the dual nature of the language, with its really good parts and really bad parts, but the book seems to follow the same pattern. Parts of the book are truly informative, but other parts fall flat and really hurt the overall quality.
For me, the most value came from the discussions about:
- Properly handling data hiding through closures.
- Properly handling inheritance.
- Working effectively with arrays.
- Using subscript notation rather than dot notation to avoid eval.
- The custom utility methods such as Object.create, Object.superior, and memoizer.
Many of my issues with this book stem from some statements in the preface:
[The book] is intended for programmers who, by happenstance or curiosity, are venturing into JavaScript for the first time.
This is not a book for beginners.
At first glance these quotes seem contradictory. I understand that it is intended for experienced programmers just starting out with JavaScript but if that is the case why does the book spend time explaining what objects can be used for, defining variable scope, defining inheritance, and defining recursion? Aren’t these basic concepts in software development that experienced programmers should already be familiar with?
This is not a reference book.
This quote is misleading. Sure, JavaScript: The Good Parts isn’t a comprehensive reference of the entire JavaScript language but not a reference book at all? Chapter 8 is a listing of methods on the built-in objects, Appendix C is a JSLint reference, and Appendix E is a JSON reference.
Including the appendices and index the book is only about 150 pages long but I found it to be full of fluff. It really seemed like the author was struggling to reach 150 pages.
- Chapter 8 is essentially a condensed restating of everything that came before in that it is a listing of the “good” methods of the built-in objects. Adding length to this chapter is an example implementation of Array.splice. If JavaScript provides it and it’s one of the “good parts” why do I need an alternative implementation?
- Chapter 9 is four pages describing the coding conventions used in the book (shouldn’t this have been at the beginning?) and why style is important (shouldn’t experienced programmers already be aware of this?).
- Chapter 10 is two and a half pages about the author’s inspiration for writing the book and why excluding the bad parts is important.
- Appendix C: JSLint seemed really out of place. The preface insisted that the focus of the book was exclusively on the JavaScript language but a utility for checking for potential problems in code gets a nine page appendix?
- Appendix E: JSON explicitly states “It is recommended that you always use JSON.parse instead of eval to defend against server incompetence,” but then spends the next four and a half pages on a code listing showing an implementation of a JSON parser! If it is recommended that JSON.parse always be used why include the listing?
- Railroad diagrams are useful but many of them take up huge amounts of space. The fact that they were repeated in Appendix D just stretches the length of the book another 10 pages.
Although the book has a repetitive, drawn-out structure I still think the information it contains is valuable enough to make it worth reading. As a supplement I highly recommend watching Mr. Crockford’s Google Tech Talk covering much of the same material. The video covers many of the book’s high points and even touches on some topics not included in the book. In some ways I actually think the video is better than the book, even though it doesn’t go into quite the same level of detail on some topics.
DaveFancher.com Reloaded
I’ve owned DaveFancher.com for as long as I can remember but I’ve been neglecting it for the past few years. I’ve neglected it so much that I’ve actually been paying a Web host for e-mail. That came to an abrupt end tonight.
When I started the site I rolled my own blog and for the most part, it met my needs. I had a rudimentary rich text editor, I had attachments, I had commenting, I think I even had an RSS feed. I ultimately got to a point where I wanted to allow drafts, versioning, trackbacks (not that they’d ever be used!), and even ping sites like Technorati but I didn’t have the desire to build any of it. I just wanted to write. By the time I reached this point blogging software was coming of age so I started seeking other solutions.
For a while I used Blogger (Blogspot at the time) but I never really liked it although I couldn’t really explain why. After a long but unproductive run with Blogger/Blogspot I went hunting again. I checked a few of my friends’ blogs and many of them were using WordPress so I decided to check it out and was hooked almost immediately.
One of the first things I looked into with WordPress was how to self-host. After all, I was paying for it, right? Unfortunately it required MySQL which my host didn’t support. I was kind of disappointed but looked at the hosted option anyway. WordPress made migrating from Blogger really easy and was so feature-rich I knew it was what I was looking for. DaveFancher.com would continue to appear abandoned but I wasn’t about to give up my e-mail address.
Fast forward to this evening. I took the plunge. I purchased the domain add-on for my WordPress blog, updated the name servers with my registrar, and waited… Amazingly it only took about an hour for the changes to take effect. But what about e-mail?
As I mentioned, the only reason I’d really been hanging on to the host was e-mail, but the increase in spam over the past few months was becoming an annoyance and was a huge influence on my decision. Luckily Google offers a free version of Google Apps that makes Gmail available to custom domains. WordPress’s recent addition of DNS editing made it simple to let Google Apps manage e-mail. All I had to do was enter the verification code from Google Apps so WordPress could generate some entries, then manually add a few extra CNAME entries to simplify access.
In the few months since I switched to WordPress I’ve been posting with more frequency than ever before. Tonight’s changes should give me even more motivation to keep it up. Now, just a few hours after starting the process DaveFancher.com has a new lease on life thanks to WordPress and Google.
Test Framework Philosophy
My development team is working to implement and enforce more formal development processes than we have used in the past. Part of this process involves deciding on which unit test framework to use going forward. Traditionally we have used NUnit and it has worked well for our needs, but now that we’re implementing Visual Studio Team System we have MSTest available. This has sparked a bit of a debate as to whether we should stick with NUnit or migrate to MSTest. As we examine the capabilities of each framework and weigh their advantages and disadvantages I’ve come to realize that the decision is really a philosophical matter.
MSTest has a bit of a bad reputation. The general consensus seems to be that MSTest sucks. A few weeks ago I would have thoroughly agreed with that assessment but recently I’ve come to reconsider that position. The problem isn’t that MSTest sucks, it’s that MSTest follows a different paradigm than some other frameworks as to what a test framework should provide.
My favorite feature of NUnit is its rich, expressive syntax. I especially like NUnit’s constraint-based assertion model. By comparison, MSTest’s assertion model is limited, even restrictive if you’re used to the rich model offered by NUnit. Consider the following “classic” assertions from both frameworks:
| | NUnit | MSTest |
|---|---|---|
| Equality/Inequality | Assert.AreEqual(e, a)<br>Assert.AreNotEqual(e, a)<br>Assert.Greater(e, a)<br>Assert.LessOrEqual(e, a) | Assert.AreEqual(e, a)<br>Assert.AreNotEqual(e, a)<br>Assert.IsTrue(a > e)<br>Assert.IsTrue(a <= e) |
| Boolean Values | Assert.IsTrue(a)<br>Assert.IsFalse(a) | Assert.IsTrue(a)<br>Assert.IsFalse(a) |
| Reference | Assert.AreSame(e, a)<br>Assert.AreNotSame(e, a) | Assert.AreSame(e, a)<br>Assert.AreNotSame(e, a) |
| Null | Assert.IsNull(a)<br>Assert.IsNotNull(a) | Assert.IsNull(a)<br>Assert.IsNotNull(a) |

e – expected value, a – actual value
They’re similar, aren’t they? Each of the assertions listed is functionally equivalent, but notice how the Greater and LessOrEqual assertions are handled in MSTest. MSTest doesn’t provide assertion methods for these cases but instead relies on evaluating expressions to define the condition. This difference above all else defines the divergence in philosophy between the two frameworks. So why is this important?
Readability
Unit tests should be readable. In unit tests we often break established conventions and/or violate the coding standards we use in our product code. We sacrifice brevity in naming with Really_Long_Snake_Case_Names_So_They_Can_Be_Read_In_The_Test_Runner_By_Non_Developers. We sacrifice DRY to keep code together. All of these things are done in the name of readability.
The Readability Debate
Argument 1: A rich assertion model can unnecessarily complicate a suite of tests particularly when multiple developers are involved.
Rich assertion models make it possible to assert the same condition in a variety of ways, resulting in a lack of consistency. Readability falls naturally out of a weak assertion model because the guesswork of figuring out which form of an assertion is being used is removed.
Argument 2: With a rich model there is no guess work because assertions are literally spelled out as explicitly as they can be.
Assert.Greater(e, a) doesn’t require a mental context shift from English to parsing an expression. The spelled out statement of intent is naturally more readable for developers and non-developers alike.
My Position
I strongly agree with argument 2. When I’m reading code I derive as much meaning from the method name as I can before examining the arguments. “Greater” conveys more contextual information than “IsTrue.” When I see “IsTrue” I immediately need to ask “What’s true?” then delve into an argument which could be anything that returns a boolean value. In any case I still need to think about what condition is supposed to be true.
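To make the comparison concrete, here’s a quick sketch of the kind of test I have in mind written against both frameworks. The Account type, its members, and the values are all hypothetical, and in practice the two versions would live in separate test projects since each framework supplies its own Assert class.

```
// NUnit (NUnit.Framework) - the intent is carried by the assertion method name
[Test]
public void Withdrawal_Reduces_The_Balance()
{
    var account = new Account(100m);   // hypothetical type used for illustration
    account.Withdraw(25m);

    Assert.AreEqual(75m, account.Balance);
    Assert.Greater(account.Balance, 0m);
}

// MSTest (Microsoft.VisualStudio.TestTools.UnitTesting) - the intent lives in the expression
[TestMethod]
public void Withdrawal_Reduces_The_Balance()
{
    var account = new Account(100m);
    account.Withdraw(25m);

    Assert.AreEqual(75m, account.Balance);
    Assert.IsTrue(account.Balance > 0m);
}
```

Reading the NUnit version I know the second assertion is a greater-than check before I ever look at the arguments; reading the MSTest version I have to parse the expression before I know what’s being verified.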
NUnit takes expressiveness to another level with its constraint-based assertions. The table below lists the same assertions as the table above when written as constraint-based assertions.
| | NUnit (constraint-based) |
|---|---|
| Equality/Inequality | Assert.That(e, Is.EqualTo(a))<br>Assert.That(e, Is.Not.EqualTo(a))<br>Assert.That(e, Is.GreaterThan(a))<br>Assert.That(e, Is.LessThanOrEqualTo(a)) |
| Boolean Values | Assert.That(a, Is.True)<br>Assert.That(a, Is.False) |
| Reference | Assert.That(a, Is.SameAs(e))<br>Assert.That(a, Is.Not.SameAs(e)) |
| Null | Assert.That(a, Is.Null)<br>Assert.That(a, Is.Not.Null) |

e – expected value, a – actual value
Constraint-based assertions are virtually indistinguishable from English. To me this is about as readable as code can be.
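For comparison, here’s the same hypothetical test from the earlier sketch rewritten with constraint-based assertions (again, the Account type and values are made up purely for illustration):

```
[Test]
public void Withdrawal_Reduces_The_Balance()
{
    var account = new Account(100m);
    account.Withdraw(25m);

    // Each assertion reads almost like an English sentence
    Assert.That(account.Balance, Is.EqualTo(75m));
    Assert.That(account.Balance, Is.GreaterThan(0m));
}
```

Read aloud, the assertions become “assert that the account balance is equal to 75” and “assert that the account balance is greater than zero,” which is exactly the kind of expressiveness I’m talking about.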
Even the frameworks with a weak assertion model provide multiple ways of accomplishing the same task. Is it not true that Assert.AreEqual(e, a) is functionally equivalent to Assert.IsTrue(e == a)? Is it not also true that Assert.AreNotEqual(e, a) is functionally equivalent to Assert.IsTrue(e != a)? Since virtually all assertions ultimately boil down to ensuring that some condition is true and throwing an exception when that condition is not true, shouldn’t weak assertion models be limited to little more than Assert.IsTrue(a)?
Clearly there are other considerations beyond readability when deciding upon a unit test framework, but given that much of the power of a framework comes from its assertion model, it’s among the most important. To me, an expressive assertion model is just as important as the tools associated with the framework.
Your thoughts?
LINQ: IEnumerable to DataTable
Over the past several months I’ve been promoting LINQ pretty heavily at work. Several of my coworkers have jumped on the bandwagon and are realizing how much power is available to them.
This week two of my coworkers were working on unrelated projects but both needed to convert a list of simple objects to a DataTable and asked me for an easy way to do it. LINQ to DataSet provides wonderful functionality for exposing DataTables to LINQ expressions and converting the data into another structure but it doesn’t have anything for turning a collection of objects into a DataTable. Lucky for us LINQ makes this task really easy.
First we need to use reflection to get the properties for the type we’re converting to a DataTable.
var props = typeof(MyClass).GetProperties();
Once we have our property list we build the structure of the DataTable by converting the PropertyInfo[] into DataColumn[]. We can add all of the DataColumns to the DataTable at once with the AddRange method.
var dt = new DataTable();
dt.Columns.AddRange(
  props.Select(p => new DataColumn(p.Name, p.PropertyType)).ToArray()
);
Now that the structure is defined, all that’s left is to populate the DataTable. This is also trivial since the Add method on the Rows collection has an overload that accepts params object[] as an argument. With LINQ we can easily build a list of property values for each object, convert that list to an array, and pass it to the Add method.
source.ToList().ForEach(
  i => dt.Rows.Add(props.Select(p => p.GetValue(i, null)).ToArray())
);
That’s all there is to it for collections of simple objects. Those familiar with LINQ to DataSet might note that the example doesn’t use the CopyToDataTable extension method. The main reason for adding the rows directly to the DataTable instead of using CopyToDataTable is that we’d be doing extra work. CopyToDataTable accepts IEnumerable&lt;T&gt; but constrains T to DataRow. In order to make use of the extension method (or its overloads) we still have to iterate over the source collection to convert each item into a DataRow, add each row into a collection, then call CopyToDataTable with that collection. By adding the rows directly to the DataTable we avoid the extra step altogether.
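For comparison, here’s a rough sketch of what the CopyToDataTable route described above might look like. It reuses the props and dt variables from the earlier snippets and assumes that detached rows created with NewRow are acceptable to CopyToDataTable; it’s only meant to illustrate the extra conversion step, not to be a recommended implementation.

```
// Extra work: convert each item into a DataRow first, then copy those rows
// into yet another DataTable with the LINQ to DataSet extension method.
// Requires a reference to System.Data.DataSetExtensions.
var rows = source.Select(i =>
{
    var row = dt.NewRow();                 // detached row built from dt's schema
    foreach (var p in props)
        row[p.Name] = p.GetValue(i, null);
    return row;
});

var copy = rows.CopyToDataTable();
```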
We can now bring the above code together into a functional example. To run this example open LINQPad, change the language selection to C# Program, and paste the code into the snippet editor.
class MyClass
{
  public Guid ID { get; set; }
  public int ItemNumber { get; set; }
  public string Name { get; set; }
  public bool Active { get; set; }
}

IEnumerable<MyClass> BuildList(int count)
{
  return Enumerable
    .Range(1, count)
    .Select(
      i => new MyClass()
      {
        ID = Guid.NewGuid(),
        ItemNumber = i,
        Name = String.Format("Item {0}", i),
        Active = (i % 2 == 0)
      }
    );
}

DataTable ConvertToDataTable<TSource>(IEnumerable<TSource> source)
{
  var props = typeof(TSource).GetProperties();

  var dt = new DataTable();
  dt.Columns.AddRange(
    props.Select(p => new DataColumn(p.Name, p.PropertyType)).ToArray()
  );

  source.ToList().ForEach(
    i => dt.Rows.Add(props.Select(p => p.GetValue(i, null)).ToArray())
  );

  return dt;
}

void Main()
{
  var dt = ConvertToDataTable( BuildList(100) );

  // NOTE: The Dump() method below is a LINQPad extension method.
  //       To run this example outside of LINQPad this method
  //       will need to be revised.
  Console.WriteLine(dt.GetType().FullName);
  dt.Dump();
}
Of course there are other ways to accomplish this and the full example has some holes but it’s pretty easy to expand. An obvious enhancement would be to rename the ConvertToDataTable method and change it to handle child collections and return a full DataSet.
They Write the Right Stuff
A few days ago someone on Reddit linked to this Fast Company article about the team responsible for building the space shuttle’s on-board software. The main focus of the article is how this team of 260 people consistently releases virtually bug-free software.
This article was really timely for me given some of the Code Camp sessions I attended last weekend. Many of the key points from those sessions were reiterated for me.
Although most of us don’t write software that is not only used in but also controls life-and-death situations, we as practitioners of a maturing industry could really benefit from studying and incorporating their practices. The article is a bit long but well worth the read.