Buried Treasures

Several months ago my wife and I decided that our house was too big for us and it was time to downsize.  Moving is a pain in its own right, but when you’re going from a 2800 square foot space to a 1900 square foot space you start thinking a bit more about what you can get rid of.  We still had lots of boxes tucked away in various closets around the old house, but I didn’t want to just start tossing them into the recycle bin without seeing what was inside.  Little did I remember what treasures were waiting for me.  What follows is a small sampling of the more entertaining finds.  Enjoy!


Google Apps and Mobile Phones

I’ve been using Google Apps for my personal email and calendar solution for a little under a year. With a little DNS magic in my WordPress configuration I get to use Gmail with my davefancher.com domain and host my blog here at WordPress. I’ve been really happy with this configuration since I started using it.  Recently the Google Apps service got even better when it was expanded to include most of Google’s other services, like Reader and YouTube. One thing this change brought, though, was an account migration. Today I encountered my first problem, and it was a result of the service enhancement.

This evening I went to look at my inbox on my phone (Samsung Focus) and was presented with a not-so-nice message about using a Google Apps account that isn’t configured to work with mobile devices. The error code was 85010020. The instructions included in the message weren’t particularly helpful, but after a bit of searching I ran across a post that, while not particularly clear itself, at least provided some direction.  In case anyone else runs into this I wanted to provide a bit more detail and hopefully reduce the pain of trying to find the settings.

As the post I mentioned says, there are two things that need to be done to allow mobile devices to sync:

  1. The Google Sync service must be selected for the domain
  2. Google Sync must be enabled for the domain
Both of these settings are found in the Google Apps control panel.  If you’re experiencing this problem, follow these steps to resolve it:
  1. Log in to your Google Apps control panel.  This should be http://www.google.com/a/
  2. Click the “Organization & users” tab
  3. Click the “Services” link
    (screenshot: Services tab)
  4. Locate Google Sync in the services list
  5. Ensure that Google Sync is “On”
    (screenshot: Google Sync service)
  6. If Google Sync was off, click the “Save changes” button that appears at the bottom of the page
  7. Click the “Settings” tab
  8. Select the “Mobile” option from the list on the left
  9. Check the “Enable Google Sync” box
    (screenshot: Enable Google Sync)
  10. Click the “Save changes” button that appears at the bottom of the page
You should now be able to sync your mobile device.

I’m Kinected

I’ve been pretty excited about Kinect (sorry, I still like the old name, Project Natal) since I first heard about it.  I’ve been watching demo videos for months and even early ones looked promising.  Deep down, though, I’ve feared that it could turn out to be as bad as the Sega Activator.  Yesterday Microsoft released Kinect and now we can all see it in action.

I thought about pre-ordering one but ultimately passed on it.  Esther and I decided it would make a nice family holiday present.  Let’s just say that Christmas came a bit early this year.  This morning a friend mentioned on Twitter that he’d found one at Walmart.  He inspired me to take a chance and see if the Target down the road from my house would have one available.  There were two on the shelf, so I bought one along with a copy of Kinectimals to try to keep Nadia (my one-year-old) entertained.  After a long day at work we finally got to try it out.  So how’d Microsoft do?

Initial setup was REALLY easy.  I have one of the new Xbox 360 S consoles so all I needed to do was connect the device to the Kinect port on the back of the console.  There’s also an included power adapter to connect it to a USB port on the older consoles.  Once the device was connected and the console turned on some drivers were installed and it automatically took me through a configuration/tutorial.

Navigating through menus and selecting items typically involves holding some position for a few seconds.  Although the delay seemed like it could be a bit shorter, it was never terribly annoying.  What is annoying, though, is that Kinect isn’t as deeply integrated as it should be.  Instead of controlling the Xbox dashboard directly there’s a “Kinect Hub” that’s accessed either by waving at the device or with a voice command.  Both methods have worked reliably and about equally well.  Once inside the hub there’s a limited subset of actions that can be taken.  We can play whatever is in the tray, watch ESPN, listen to last.fm, and edit our avatars, but not much more.

Voice commands work really, really well.  In some cases they’re easier than gestures.  The biggest problem with the voice commands is that they don’t go deep enough.  Aside from saying “Xbox………Kinect” to access the Kinect Hub they only work at the top level of the hub (with a few exceptions like last.fm).  Once you enter a section you’re forced to use gestures…even with videos.  That brings me to my biggest complaint about the system.

The only thing I’ve run into that’s truly aggravating, maddening even, is that there are no voice controls for most video playback.  Using voice control for video playback was one of the biggest selling points for me and was featured pretty prominently in some of the demo videos.  Given how accurate the voice recognition system is, it’s really discouraging that this feature is missing, and I really hope that there’s another console update soon to address this.  I also wish that the voice command for opening the tray were available when there’s a disc in there.  It seems odd that it’s only enabled when the tray is empty.  Maybe I’d like the tray to be open when I get to the console.

As far as the gaming experience goes I’ve only played Kinect Adventures and Kinectimals.  Both are very Wii-esque but they’re far more engaging and enjoyable.

Esther and I both got several hours of entertainment from Kinect Adventures.  Despite the Wii-like nature of the mini-games the game play experience itself is completely different and unique.  With Kinect we’re not limited to waving a wand at the TV.  Instead we get a full-body workout where we swing our arms and legs around, walk, jump, and stretch.  With Kinect there’s no worry about throwing a remote through the television or a balance board complaining because you pushed too hard.

Kinectimals is incredibly cute.  Many scenes were met with an “Awwwwww” from Esther and/or happy shrieks from Nadia, who really seemed to enjoy watching.  Although Kinectimals does engage the full body through actions like jumping and spinning, most of the time (so far) has really just been spent trying to “throw” random objects at other random objects while a tiger kitten runs around.  The throwing mechanics didn’t seem particularly accurate, but I managed to adjust to them pretty well.  I can’t say that it kept my attention all that long, but I guess I’d probably enjoy it a bit more if I were a five-year-old girl, which is what it really seems to be targeting.

Overall I think Microsoft really hit the mark with Kinect.  While it does have some problems, the majority of them seem addressable (and I hope Microsoft addresses them soon).  At this point it’s certainly not ready for every type of game.  I struggle to see how some types of games would work with Kinect (shooters in particular – PlayStation Move and Wii seem much more suitable for these), but if you’re looking for the types of titles currently available it’s definitely a worthy addition.  I see plenty of room for real-time strategy and fighting games in the future, and it’ll be fun to see how this product evolves.

Moving to Live Sync

While I was at my in-laws’ house over the weekend I wanted to do some work on the PC I just upgraded.  I’ve been using Live Mesh for quite a while and have been happy with it overall.  When I went to the Live Mesh site I saw a note telling me that Live Mesh was being replaced by Live Sync.  Great, time to migrate…

Tonight I downloaded the installer package for Windows Live to install on my two primary systems.  On both systems I deselected most of the options since I really only wanted Live Writer and Live Sync.  When the installer reached the Live Sync portion it notified me that Live Mesh would be removed.  The install continued without error and Live Sync started without a problem.  I activated remote access for both systems then tried to establish a connection.  That’s where the problems started.

Every time I tried to establish a connection it would fail.  I found nothing in the event logs and disabling the firewall didn’t help either.  After a bit of hunting I ran across a forum post (sorry, I lost the link doing the reboot shuffle) that indicated that Live Mesh might not have actually been uninstalled.  I dug around in Program Files (x86) a bit and sure enough, the Live Mesh folder was still there as were all of its contents.

I uninstalled Live Sync from both systems and reinstalled Live Mesh since there was no longer an uninstall option in the Programs and Features Control Panel.  On one system I had to go so far as to disable UAC to reinstall Live Mesh due to an error stating “Product does not support running under an elevated account.”  Once Live Mesh was “reinstalled” I was able to uninstall it through the Control Panel.  A sanity check of Program Files (x86) showed that Live Mesh had actually been removed this time.

With Live Mesh finally gone I reinstalled Live Sync on both systems and enabled remote access.  I tried testing the remote desktop connection again and it worked like a charm.  I only have one more system to do this on but the lesson has been learned: remove Live Mesh first!

TFS2010: Shelving

Edit (8/31/2010): The content of this post has been incorporated into my more comprehensive Everyday TFS post.  If you’re looking for a general guide to being more productive with TFS on a day-to-day basis you may consider starting there instead.

A nice feature of TFS is that it allows developers to put aside, or shelve, a set of changes without actually committing them. This can be useful when you need to revert some changes so they don’t conflict with another change or when you need to transfer some code to another developer or workstation.  Like so many things in TFS the shelve feature can be useful but is hindered by poor tool support.  Hopefully the tips presented here can reduce some of the headaches associated with the feature and help you use it to its full potential.

Microsoft made it really easy to create a shelveset.  The Shelve dialog is virtually identical to the Check-in dialog so I won’t go into detail about its operation.  Shelvesets can be created through any of the following methods:

  • File -> Source Control -> Shelve Pending Changes…
  • From the Pending Changes window (View -> Other Windows -> Pending Changes) switch to the Source Files panel and click the Shelve button.
  • Right click on a file or folder in Solution Explorer and select Shelve Pending Changes…
  • Right click on a file or folder in Source Control Explorer and select Shelve Pending Changes…

Although Microsoft made it easy to create shelvesets, they really fell short on retrieving them.  Unless you’ve been observant when using TFS you’ll probably begin by hunting for the elusive unshelve button.  Unlike creating a shelveset, where there are access points in places we use regularly, there are only two places to go (that I know of) for retrieving one, and they’re both kind of buried.

  • File -> Source Control -> Unshelve Pending Changes…
  • From the Pending Changes window (View -> Other Windows -> Pending Changes) switch to the Source Files panel and click the Unshelve button.

The unshelve dialog lists all of the shelvesets created by the user listed in the Owner name field.  By default the owner name is set to the current user, but replacing the name with another user name will find the shelvesets created by that user.  Unfortunately there is no way to search for user names, so you’ll have to know the name before opening the dialog.

After locating the desired shelveset you can examine its contents through the Details button, delete it, or unshelve it.  The unshelve command doesn’t really play nice with files that have local changes.  In fact, if you try to unshelve a file that has changed locally you’ll probably get an error dialog about a file having an incompatible local change.  Luckily there’s a work-around that, like so many other things in TFS, involves the TFS Power Tools.

  1. Open a Visual Studio command prompt
  2. Navigate to the appropriate workspace
  3. Enter the command tfpt unshelve
  4. Locate the shelveset to unshelve
  5. Select the unshelve option – a second dialog box will open listing any conflicts needing resolution

[Review] JavaScript: The Good Parts

As I mentioned in my Working With JavaScript post I’ve started on a new project that’s going to be pretty heavy on JavaScript.  Since I’ve been away from JavaScript for so long I wanted a language refresher but didn’t want to start from scratch.  What I needed was a good book to reactivate those long neglected neurons that store all of my knowledge of the language.  I’ve heard good things about JavaScript: The Good Parts by Douglas Crockford since it was published in 2008 and it was well reviewed on Amazon so I thought that would be a good place to start.

After giving the book a fairly thorough read (I skipped the chapter on Regular Expressions) I have to say that I was disappointed.  Now don’t get me wrong, the book introduced me to a few patterns I hadn’t encountered or thought of before.  It also helped me accomplish my goal of getting reacquainted with JavaScript and reminded me of a few things like the === and !== operators.
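For anyone else who needs the same reminder, the difference is that == coerces its operands to a common type before comparing, while === compares type and value with no coercion.  A quick sketch using nothing beyond standard JavaScript:

```javascript
// == applies type coercion before comparing; === compares type and value.
console.log(0 == "");            // true  - "" is coerced to 0
console.log(0 === "");           // false - number vs. string
console.log(null == undefined);  // true  - special-cased by ==
console.log(null === undefined); // false - different types
```

As I recall it, the book’s advice is to always prefer === and !== so your comparisons don’t hinge on the coercion rules.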

I don’t mean to detract from the author’s knowledge of JavaScript.  To the contrary, Mr. Crockford is widely regarded as one of the world’s foremost authorities on the language.  Nor do I mean to suggest that the book is entirely bad, because it really does have plenty of good information.  Perhaps it’s just a testament to the dual nature of JavaScript, with its really good parts and really bad parts, that the book follows the same pattern: parts of it are truly informative, but other parts fall flat and really hurt the overall quality.

For me, the most value came from the discussions about:

  • Properly handling data hiding through closures.
  • Properly handling inheritance.
  • Working effectively with arrays.
  • Using subscript notation rather than dot notation to avoid eval.
  • The custom utility methods such as Object.create, Object.superior, and memoizer.
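To illustrate the first of those points, here’s a minimal sketch of data hiding through a closure in the spirit of what the book describes; makeCounter and its methods are my own illustrative names, not code from the book:

```javascript
// The count variable lives only in the closure created by makeCounter;
// nothing outside can read or write it except the returned methods.
function makeCounter() {
    var count = 0; // private - never a property of the returned object
    return {
        increment: function () { count += 1; return count; },
        current:   function () { return count; }
    };
}

var counter = makeCounter();
counter.increment();
counter.increment();
console.log(counter.current()); // 2
console.log(counter.count);     // undefined - the variable is hidden
```

Because count is captured by the inner functions rather than attached to the object, there’s simply no handle for outside code to tamper with.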

Many of my issues with this book stem from some statements in the preface:

[The book] is intended for programmers who, by happenstance or curiosity, are venturing into JavaScript for the first time.

This is not a book for beginners.

At first glance these quotes seem contradictory.  I understand that the book is intended for experienced programmers just starting out with JavaScript, but if that is the case, why does it spend time explaining what objects can be used for, defining variable scope, defining inheritance, and defining recursion? Aren’t these basic concepts in software development that experienced programmers should already be familiar with?

This is not a reference book.

This quote is misleading. Sure, JavaScript: The Good Parts isn’t a comprehensive reference for the entire JavaScript language, but not a reference book at all?  Chapter 8 is a listing of methods on the built-in objects, Appendix C is a JSLint reference, and Appendix E is a JSON reference.

Including the appendices and index the book is only about 150 pages long but I found it to be full of fluff.  It really seemed like the author was struggling to reach 150 pages.

  • Chapter 8 is essentially a condensed restating of everything that came before: a listing of the “good” methods of the built-in objects.  Adding length to this chapter is an example implementation of Array.splice. If JavaScript provides it and it’s one of the “good parts,” why do I need an alternative implementation?
  • Chapter 9 is four pages describing the coding conventions used in the book (shouldn’t this have been at the beginning?) and why style is important (shouldn’t experienced programmers already be aware of this?).
  • Chapter 10 is two and a half pages about the author’s inspiration for writing the book and why excluding the bad parts is important.
  • Appendix C: JSLint seemed really out of place.  The preface insisted that the focus of the book was exclusively on the JavaScript language but a utility for checking for potential problems in code gets a nine page appendix?
  • Appendix E: JSON explicitly states “It is recommended that you always use JSON.parse instead of eval to defend against server incompetence,” but then spends the next four and a half pages on a code listing showing an implementation of a JSON parser! If it is recommended that JSON.parse always be used why include the listing?
  • Railroad diagrams are useful but many of them take up huge amounts of space.  The fact that they were repeated in Appendix D just stretches the length of the book another 10 pages.
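To be fair, the recommendation quoted from Appendix E is a good one, and easy to demonstrate: JSON.parse only accepts the JSON grammar and throws on anything else, while eval would happily execute whatever script the string contains.  A small sketch:

```javascript
var text = '{"name": "value", "n": 42}';
var data = JSON.parse(text); // parses the data without executing anything
console.log(data.n); // 42

// A malicious payload disguised as JSON is rejected rather than run:
try {
    JSON.parse('alert("owned")');
} catch (e) {
    console.log("rejected: " + e.name); // rejected: SyntaxError
}
```

That defensive behavior is exactly why the appendix says to prefer JSON.parse over eval; my complaint is only with spending four and a half pages re-implementing it.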

Although the book has a repetitive, drawn-out structure, I still think the information it contains is valuable enough to make it worth reading. As a supplement I highly recommend watching Mr. Crockford’s Google Tech Talk covering much of the same material.  The video covers many of the book’s high points and even touches on some topics not included in the book.  In some ways I actually think the video is better than the book, even though it doesn’t go into quite the same amount of detail on some topics.

DaveFancher.com Reloaded

I’ve owned DaveFancher.com for as long as I can remember but I’ve been neglecting it for the past few years.  I’ve neglected it so much that I’ve actually been paying a Web host for e-mail.  That came to an abrupt end tonight.

When I started the site I rolled my own blog and for the most part, it met my needs.  I had a rudimentary rich text editor, I had attachments, I had commenting, I think I even had an RSS feed.  I ultimately got to a point where I wanted to allow drafts, versioning, trackbacks (not that they’d ever be used!), and even ping sites like Technorati but I didn’t have the desire to build any of it.  I just wanted to write.  By the time I reached this point blogging software was coming of age so I started seeking other solutions.

For a while I used Blogger (Blogspot at the time) but I never really liked it although I couldn’t really explain why.  After a long but unproductive run with Blogger/Blogspot I went hunting again.  I checked a few of my friends’ blogs and many of them were using WordPress so I decided to check it out and was hooked almost immediately.

One of the first things I looked into with WordPress was how to self-host.  After all, I was paying for it, right?  Unfortunately it required MySQL which my host didn’t support.  I was kind of disappointed but looked at the hosted option anyway.  WordPress made migrating from Blogger really easy and was so feature-rich I knew it was what I was looking for.  DaveFancher.com would continue to appear abandoned but I wasn’t about to give up my e-mail address.

Fast forward to this evening.  I took the plunge.  I purchased the domain add-on for my WordPress blog, updated the name servers with my registrar, and waited… Amazingly it only took about an hour for the changes to take effect.  But what about e-mail?

As I mentioned, the only reason I’d really been hanging on to the host was e-mail, but the increase in spam over the past few months was becoming an annoyance and was a huge influence on my decision.  Luckily Google offers a free version of Google Apps that makes Gmail available to custom domains.  WordPress’s recent addition of DNS editing made it simple to let Google Apps manage e-mail.  All I had to do was enter the verification code from Google Apps to let WordPress generate some entries, then manually add a few extra CNAME entries to simplify access.

In the few months since I switched to WordPress I’ve been posting with more frequency than ever before.  Tonight’s changes should give me even more motivation to keep it up.  Now, just a few hours after starting the process DaveFancher.com has a new lease on life thanks to WordPress and Google.