My Favorite Projects: Volume One

26 Jun

At my first job out of school, I worked for a cast iron pipe manufacturer. The pipes were stored in an outdoor yard at each plant, and were frequently moved and rearranged by 60-ton fork trucks. The company had existing software to track the movements, but it required the driver to climb down from the fork truck, remember the 6-digit code from a sticker on the pipe, climb back in the truck, and manually enter the code and location into a computer mounted in the cab. Naturally, the entry rate was pretty low – below 50% at some plants. This meant that physical inventories (which my team was also in charge of) included a lot of running around the pipe yard looking for a particular bundle of pipe.

The solution that I came up with was to attach a GPS and an RFID reader to the existing computer mounted in the cab, and to attach magnetic RFID tags to each bundle of pipe as they came off the assembly line. The RFID reader was mounted near the fork, so that our software could determine pickups and dropoffs by detecting a tag coming into and going out of range. When a pickup or dropoff occurred, the computer would know the exact location from the GPS antenna, the timestamp, the pipe bundle’s ID code, and who was driving the fork truck.
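The event-detection logic itself was simple in principle. As a rough illustration (this is a hypothetical sketch, not the original code – the class name, event format, and scan-based API are all invented), the core of it could look like:

```csharp
using System;
using System.Collections.Generic;

//Hypothetical sketch of the pickup/dropoff detection described above.
public class MoveTracker {
	private readonly HashSet<string> tagsInRange = new HashSet<string>();
	public readonly List<string> Events = new List<string>();

	//Called each time the RFID reader reports the set of tags it can currently see
	public void OnReaderScan(IEnumerable<string> visibleTags, double lat, double lon,
			string driver, DateTime when) {
		var visible = new HashSet<string>(visibleTags);
		//a tag newly in range means the forks just picked up that bundle
		foreach (var tag in visible) {
			if (tagsInRange.Add(tag)) {
				Events.Add(string.Format("PICKUP {0} by {1} at ({2},{3}) {4:u}", tag, driver, lat, lon, when));
			}
		}
		//a tag that disappeared from range means the bundle was set down
		tagsInRange.RemoveWhere(tag => {
			if (!visible.Contains(tag)) {
				Events.Add(string.Format("DROPOFF {0} by {1} at ({2},{3}) {4:u}", tag, driver, lat, lon, when));
				return true;
			}
			return false;
		});
	}
}
```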

We also created our own ruggedized, magnetized RFID tags using a standard passive tag, a strip of aluminum, rubber heat-shrink wrap, and strong magnets. A tag could be thrown onto the pipes as they came off the line, after being associated with the pipe’s database record using the same software that printed the barcode stickers for each pipe. The tags would also be removed before the pipes left the plant, so that they could be re-associated with new pipes being manufactured.

The last hurdle was to record the actual row the pipe was left in. All the pipe bundles were organized into rows, and physical inventories were structured around these rows. Because the rows didn’t sit perfectly north to south, we needed a way to determine if the truck was within the bounds of a generic polygon. After some research, we figured out that we could create a line segment between the current location and the North Pole (or any point known to be outside the polygon), and then compare that line to each line segment that made up the polygon. If the segments intersected an odd number of times, the point was inside the polygon (the segment entered one more time than it left), and if they intersected an even number of times, the point was outside the polygon (the segment left the polygon once for each time it entered).
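That even–odd rule is the classic ray-casting test. A minimal sketch in C# (illustrative, not our original code – note that a horizontal ray works just as well as a segment aimed at the North Pole, since only the crossing parity matters):

```csharp
using System;

public static class PipeYard {
	//Even-odd (ray casting) point-in-polygon test.
	//polygon is an ordered list of vertices; each vertex is a 2-element {x, y} array.
	public static bool Contains(double[][] polygon, double x, double y) {
		bool inside = false;
		int j = polygon.Length - 1;
		for (int i = 0; i < polygon.Length; i++) {
			double xi = polygon[i][0], yi = polygon[i][1];
			double xj = polygon[j][0], yj = polygon[j][1];
			//does edge (j -> i) straddle the ray's y value, and does the
			//crossing fall to the right of the point? If so, toggle.
			if (((yi > y) != (yj > y)) &&
				(x < (xj - xi) * (y - yi) / (yj - yi) + xi)) {
				inside = !inside;
			}
			j = i;
		}
		return inside;
	}
}
```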

Armed with this knowledge, we were able to make a working prototype on one of the fork trucks. The project died, however, when our manager was unable to get funding for a plant-wide trial.


Inside RazorScriptManager

22 Jun

When you add the RazorScriptManager NuGet package to your project, three things happen: Script.cshtml is added to /App_Code, RazorScriptManager.cs is added to /Handlers, and a web.config transform is performed. The first file provides the Razor helper methods used in your views. The second provides the HttpHandler that handles the combining/compression and responds to requests for scripts.axd. The web.config transform adds a couple settings and registers the HttpHandler.


There are four Razor helpers added as part of the NuGet package: two for adding script references to the collection, and two for writing out the script tags in the response. One of each pair handles CSS, and the other handles JavaScript.

Inside the two Add methods (AddCss() and AddJavaScript()), a new ScriptInfo object is created for the referenced script. That ScriptInfo object contains the script type, local path, CDN path, and whether the script is used site-wide. The ScriptInfo object is then added to a List<ScriptInfo> that’s kept in Session.

@helper AddJavaScript(string localPath, string cdnPath = null, bool siteWide = false) {
	var scriptType = ScriptType.JavaScript;
	//create a session key specifically for javascript ScriptInfo objects
	var key = "__rsm__" + scriptType.ToString();
	//if the List doesn't exist, create it
	if (Session[key] == null) {
		Session[key] = new List<ScriptInfo>();
	}
	//pull out the current (or new) list - it may already have other ScriptInfo objects
	var scripts = Session[key] as List<ScriptInfo>;
	//add the current ScriptInfo
	scripts.Add(new ScriptInfo(Server.MapPath(localPath), cdnPath, scriptType, siteWide));
	//put the list back in Session
	Session[key] = scripts;
}

In the Output methods (OutputCss() and OutputJavaScript()), the List<ScriptInfo> is extracted from Session. Based on web.config settings, a list of CDN-hosted scripts may be extracted. The helper then writes out <script> or <link> tags for CDN-hosted scripts (if any) and for the HttpHandler path. An MD5 hash is generated from the filenames of the referenced local scripts and appended to the HttpHandler path. This MD5 is used to cache the combined/compressed output in the HttpHandler, and will be explained more in that section.

@helper OutputJavaScript() {
	var scriptType = ScriptType.JavaScript;
	//create a session key specifically for javascript ScriptInfo objects
	var key = "__rsm__" + scriptType.ToString();
	//if no scripts have been added, don't do anything
	if (Session[key] == null) { return; }
	//pull out the current list from Session
	var scripts = Session[key] as List<ScriptInfo>;
	var cdnScripts = new List<ScriptInfo>();
	//if the web.config says to use CDN-hosted scripts, extract them from the list into cdnScripts
	if (bool.Parse(System.Configuration.ConfigurationManager.AppSettings["UseCDNScripts"])) {
		//get all scripts without a CDN path
		var localScripts = scripts.Where(s => string.IsNullOrWhiteSpace(s.CDNPath)).ToList();
		//get all scripts that aren't local-only scripts
		cdnScripts = scripts.Except(localScripts).ToList();
		//put the local scripts back into Session (CDN scripts are handled here, not in the HttpHandler)
		Session[key] = localScripts;
	}
	//write out the CDN scripts to the response
	foreach (var cdnScript in cdnScripts) {
		<script type="text/javascript" src="@cdnScript.CDNPath"></script>
	}
	//generate a unique hash based on the filenames
	var hash = HttpUtility.UrlEncode(RazorScriptManager.GetHash(scripts));
	//write out a script tag for the HttpHandler using the script type and hash
	<script type="text/javascript" src="/scripts.axd?type=@scriptType.ToString()&hash=@hash"></script>
}


The class file for the HttpHandler also contains the definitions for ScriptInfo, ScriptType and ScriptInfoComparer. These classes represent a script reference, the type of script, and a way of comparing two scripts. The comparer is used later to eliminate duplicate script references (e.g. if you have a reference to the same jQuery file on your Layout and a partial view, it will only use one). The RazorScriptManager class itself (which is the actual HttpHandler) provides an instance method for responding to requests (ProcessRequest()) and a static method for generating a hash (GetHash()). GetHash() works by appending the distinct list of script paths into a single string, then generating a standard MD5 hash of that string.

public static string GetHash(IEnumerable<ScriptInfo> scripts) {
	var input = string.Join(string.Empty, scripts.Select(s => s.LocalPath).Distinct());
	var hash = System.Security.Cryptography.MD5.Create().ComputeHash(Encoding.ASCII.GetBytes(input));
	var sb = new StringBuilder();
	for (int i = 0; i < hash.Length; i++) {
		//convert each byte to a two-character hex string
		sb.Append(hash[i].ToString("x2"));
	}
	return sb.ToString();
}
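The supporting types themselves aren’t reproduced in this post, but based on how they’re used, they look roughly like this (a sketch reconstructed from the description above – see the project source on GitHub for the real definitions):

```csharp
using System.Collections.Generic;

//Approximate shapes, reconstructed from how the types are used in this post
public enum ScriptType { JavaScript, Stylesheet }

public class ScriptInfo {
	public string LocalPath { get; private set; }
	public string CDNPath { get; private set; }
	public ScriptType Type { get; private set; }
	public bool SiteWide { get; private set; }

	public ScriptInfo(string localPath, string cdnPath, ScriptType type, bool siteWide) {
		LocalPath = localPath;
		CDNPath = cdnPath;
		Type = type;
		SiteWide = siteWide;
	}
}

//compares two script references by local path, so duplicate references
//(e.g. jQuery added in both a layout and a partial) collapse to one
public class ScriptInfoComparer : IEqualityComparer<ScriptInfo> {
	public bool Equals(ScriptInfo x, ScriptInfo y) {
		return x.LocalPath == y.LocalPath;
	}
	public int GetHashCode(ScriptInfo obj) {
		return obj.LocalPath == null ? 0 : obj.LocalPath.GetHashCode();
	}
}
```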

ProcessRequest() is a little more involved. First, it determines the type of script being requested from the querystring value. Based on the type, it sets the response’s content type appropriately.

var scriptType = (ScriptType)Enum.Parse(typeof(ScriptType), context.Request.Params["type"]);

switch (scriptType) {
	case ScriptType.JavaScript:
		context.Response.ContentType = @"application/javascript";
		break;
	case ScriptType.Stylesheet:
		context.Response.ContentType = @"text/css";
		break;
}

After setting the content type, the method checks the application cache to see if a combined/compressed script has already been generated for that particular set of files. To do this, it uses the hash as a cache key. If the script output already exists, the method immediately returns the cached output and no further processing is required.

var hashString = context.Request.Params["hash"];
if (!String.IsNullOrWhiteSpace(hashString)) {
	var result = cache[HttpUtility.UrlDecode(hashString)] as string;
	if (!string.IsNullOrWhiteSpace(result)) {
		//already generated - write the cached output and skip further processing
		context.Response.Write(result);
		return;
	}
}

If the output wasn’t already cached, the method pulls the List<ScriptInfo> for the current type out of Session. It then obtains the distinct scripts from the list using the ScriptInfoComparer and reorders them based on whether or not they were marked as site-wide. Site-wide scripts (like jQuery) need to be loaded first, so that other scripts can take advantage of their methods. At this point, the contents of each file are appended to a single string. This string will become the combined and compressed single script that’s returned by the handler.

var scripts = context.Session["__rsm__" + scriptType.ToString()] as IEnumerable<ScriptInfo>;
context.Session["__rsm__" + scriptType.ToString()] = null;
if (scripts == null) return;
var scriptbody = new StringBuilder();

scripts = scripts.Distinct(new ScriptInfoComparer());

//add sitewide scripts FIRST, so they're accessible to local scripts
var siteScripts = scripts.Where(s => s.SiteWide);
var localScripts = scripts.Where(s => !s.SiteWide).Except(siteScripts, new ScriptInfoComparer());
var scriptPaths = siteScripts.Concat(localScripts).Select(s => s.LocalPath);
var minify = bool.Parse(ConfigurationManager.AppSettings["CompressScripts"]);

foreach (var script in scriptPaths) {
	if (!String.IsNullOrWhiteSpace(script)) {
		using (var file = new System.IO.StreamReader(script)) {
			var fileContent = file.ReadToEnd();
			if (scriptType == ScriptType.Stylesheet) {
				//rewrite relative url() references so they resolve from the site root
				var fromUri = new Uri(context.Server.MapPath("~/"));
				var toUri = new Uri(new FileInfo(script).DirectoryName);
				fileContent = fileContent.Replace("url(", "url(/" + fromUri.MakeRelativeUri(toUri).ToString() + "/");
			}
			//when not minifying, prepend a comment with the original file path
			if (!minify) scriptbody.AppendLine(String.Format("/* {0} */", script));
			scriptbody.AppendLine(fileContent);
		}
	}
}

If CompressScripts is set to true in the web.config, the method runs the appropriate minifier for the current script type. Side note: there’s some interesting asymmetry within the YUI Compressor: for JavaScript the compress method is an instance method, while for CSS it’s a static method.

string scriptOutput = scriptbody.ToString();
if (minify) {
	switch (scriptType) {
		case ScriptType.JavaScript:
			var jscompressor = new Yahoo.Yui.Compressor.JavaScriptCompressor(scriptOutput);
			scriptOutput = jscompressor.Compress();
			break;
		case ScriptType.Stylesheet:
			scriptOutput = Yahoo.Yui.Compressor.CssCompressor.Compress(scriptOutput);
			break;
	}
}

Finally, the method saves the output to the cache (for next time!) and sends the output as the response.

var hash = GetHash(scripts);
cache[hash] = scriptOutput;
context.Response.Write(scriptOutput);


Two appSettings are added to the web.config: UseCDNScripts and CompressScripts. The first determines whether or not the Output Razor helpers write out tags for the CDN paths. The second determines whether or not the HttpHandler compresses the combined output before returning the response.

  <add key="UseCDNScripts" value="false" />
  <add key="CompressScripts" value="false" />

The HttpHandler is also registered in the web.config. One version for IIS6, another for IIS7.

    <!-- IIS6: system.web/httpHandlers -->
    <add verb="*" path="scripts.axd" type="RazorScriptManager.RazorScriptManager"/>
    <!-- IIS7: system.webServer/handlers -->
    <add name="ScriptManager" verb="*" path="scripts.axd" type="RazorScriptManager.RazorScriptManager"/>

Using RazorScriptManager

20 Jun

Recently I was working on a personal project in ASP.NET MVC3 and realized I didn’t have a good way to manage CSS and JavaScript files. Most of the script managers available were designed for WebForms, and while they may work fine in MVC, I felt dirty trying to include a user control. I wanted the standard script manager functionality – combining/compressing scripts and caching of the combined/compressed output. I also wanted something that played nicely with Razor, and I wanted it to be able to take CDN-hosted scripts into account as well. And I wanted something I could drop in with NuGet.

I spent a little bit of time looking for something that met all those needs, but (after an admittedly short search) never found what I was looking for. Eventually I decided it might be faster to just scratch my own itch, and definitely more educational.

One of my design goals was to keep the API as simple as possible. To achieve this, I used optional and named parameters instead of a pile of method overloads. To add a JavaScript file to the script manager, you simply call:

@Script.AddJavaScript(localPath: "~/Scripts/jquery-1.6.1.js", cdnPath: "", siteWide: true)

Then to write out the combined/compressed JavaScript reference, just call this on your layout page:

@Script.OutputJavaScript()
Calling AddJavaScript() will add the file reference to the collection of scripts to be managed. Based on a web.config setting, the script manager will use either the local path or the optional CDN path, if it exists. The siteWide parameter ensures that the script is loaded prior to other scripts. In this example, I’m referencing jQuery, so I want to make sure jQuery is loaded before a page-specific script that depends on it. For the page-specific script, I’d call something like:

@Script.AddJavaScript(localPath: "~/Scripts/home.js")
Because the default values for the parameters are set to the most-common use case, the typical script reference is about as simple as it can get. Two web.config settings are also used. Setting UseCDNScripts to false will tell the manager to only use local files, even if CDN paths are provided (useful during development). Setting CompressScripts to false will tell the manager not to compress scripts. This is also useful during development, because debugging a compressed script is a total nightmare.

The output for the two files is provided through an HttpHandler. If you’re looking in your browser’s development tools, you’ll notice calls to /scripts.axd. This is the HttpHandler, and the two querystring values passed in tell the script manager which type of output you want (CSS/JS) and the hash value of the combined scripts. The handler then returns the output as if it were a single file. Within the returned output (if compression is disabled), each individual file is preceded by a comment providing the full file path so you can easily track down the original files.

For a simple example project, check out the demo application on GitHub. Or just install the NuGet package and check it out – it’s only two files and a pair of web.config settings. If you’re only interested in how to use it, you can stop here. If you want to know how it works, check back in the next day or so for the next post, where I’ll cover the internals of the project. Until then, feel free to look at the source code on GitHub.

TV Without Cable

18 Jun

Almost a year ago, we cut the cord (well, dish) and cancelled DirecTV. Over time, I’ve modified, gutted and tweaked our setup, and I’m pretty happy with what we have now. However, most of what I’m about to describe would work perfectly fine (if not better) with normal cable or satellite service, so don’t freak out if installing an antenna isn’t for you.

Our current setup consists of an HDTV antenna, a single PC and an Xbox 360 for each TV. It sounds expensive, but it’s really not—especially when you factor in the monthly savings of not having a cable bill. We’re saving $80-90 a month, so it doesn’t take long to pay for a $200 Xbox.

The Antenna

If you’re going to try receiving your local channels over the air, I highly recommend starting with an online antenna selector tool, which will give you a detailed list of nearby stations based on your exact location – including distance and compass heading. Using this info, you can choose an antenna with the appropriate range. Personally, my furthest station is about 28 miles away, so I have an antenna rated for 30 miles. If you need more range, you can get the same antenna in a 55-mile or 70-mile configuration.

The Computer

I use my regular desktop computer as my media center PC. The specs are nothing special—Core 2 Duo CPU, 3GB RAM, 1TB hard drive—but it’s worked without a hitch. You’ll want to be on Ethernet if possible, but I’ve had good success with 802.11n wireless. The only modification I had to make was adding a new TV tuner card. If you’re going to use satellite or digital cable, you’ll want to make sure you find a card that will specifically work with those, but for an HDTV antenna, pretty much any ATSC card will work.

I already had Windows 7 installed on my machine, which includes Windows Media Center. This is key, because it will control your TV tuner and stream live or recorded TV to the Xboxes. As a side bonus, most TV tuner cards include an FM tuner, which Windows Media Center can control too—including DVR-like functionality.

Media Center also supports plugins, so extra sources and tools can be added on your computer (and therefore available on the Xboxes). Some of the more popular ones include Amazon VOD, Heatwave (weather), Photato (Facebook photo albums) and Macro Tube (access to video sites, including YouTube).

The other consideration you’ll need to make is hard drive space. If you’re using Windows Media Center to record a lot of shows, or you’ll be ripping lots of DVDs, you’ll need plenty of space. I’ve found that a feature-length movie, ripped at a “good enough” resolution, can range from 3-7GB each, so I’d recommend having at least a terabyte available.

The Xboxes

Aside from the various gaming consoles, each TV has only a single Xbox hooked up to it. Thanks to recent updates, the Xbox can stream Netflix, ESPN3, Hulu Plus, Last.FM, and rented/purchased videos from Zune. The Xbox also works great as a Media Center Extender, so it can stream live and recorded TV from your computer’s tuner card in addition to playing just about any video file on your network. The ability to do all this from a single device was key—the wife-acceptance-factor of cable cutting has skyrocketed now that she can pick any channel/movie/show without having to worry about video inputs and audio settings.

Oh, and it plays games too.

There are multiple Xbox 360 models available, but the primary difference is storage capacity. Because all video in this scenario is streamed, we get by just fine with the Xbox 360 Arcade model that only has 4GB of onboard memory. The newest models are super quiet and come with built-in 802.11n, which is fast enough to stream HD video. One thing to keep in mind, however, is that each Xbox will need its own Xbox Live account.

Bonus: The iDevices

This part isn’t a required part of the setup, but it’s too awesome not to mention. On the media center PC I have an application called Air Video, and its companion player application is installed on our iPhones. The app allows you to navigate and stream (with on-the-fly conversion) videos on your computer or local network. The application works over Wi-Fi and 3G, and is absolutely worth the $2.99 they ask for the full version.

Our setup is simple, but that’s what makes it so great. Apps that are already tightly integrated with the Xbox provide a great user experience, while Windows Media Center provides a huge amount of extensibility. More importantly, our basic TV-watching needs are mostly covered:

  • Hulu Plus: Network and basic cable shows, some movies
  • Netflix: Movies, some shows
  • Amazon VOD: Movies, shows
  • ESPN3: College football, MLB, NBA, tennis and more (3500+ live events a year)
  • Antenna: Local news, more sports (NFL), severe weather coverage

Some content is still only available to watch on computers, so we fill those gaps with our laptops, but my personal opinion is that if you get upset because you can’t watch Real Housewives of New Jersey live, then just go read a book.

The Future

Last week at E3, Microsoft announced that they were bringing live and on-demand television streaming to Xbox Live this fall. They haven’t announced content partners yet, but they claim that the service will even include local channels, so my antenna might end up on Craigslist before the end of the year.

Using the Range Attribute With Decimal or DateTime

15 Jun

Learned something cool about System.ComponentModel.DataAnnotations.RangeAttribute today – it can actually be used with any type. It only includes numeric constructors for Int32 and Double, which are probably the two most common uses, but it also includes a constructor that takes a type and two strings as parameters:

public RangeAttribute(
  Type type,
  string minimum,
  string maximum
)

The one caveat is that the type must implement the IComparable interface. Typically you wouldn’t be using the Range attribute to decorate a custom type, but as long as your type implemented IComparable, you’d be just fine. However, in today’s scenario we were just trying to validate a Decimal value in a model in an MVC2 application. Our solution looked something like this:

[Range(typeof(Decimal),"0", "9999999")]
public decimal Rate { get; set; }

The downside is that the minimum and maximum values have to be passed as strings, but since they have to be constants anyway (to work with the attribute), that’s not much of a setback. This can easily be used with dates as well – just be sure to write the date in a format that will parse correctly.
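For instance, a date-bounded property might look like this (the model, property name, and date range are all illustrative, not from our project):

```csharp
using System;
using System.ComponentModel.DataAnnotations;

//hypothetical model - only the attribute usage is the point here
public class Reservation {
	//the strings are parsed into DateTime values when validation runs
	[Range(typeof(DateTime), "2011-01-01", "2011-12-31")]
	public DateTime CheckIn { get; set; }
}
```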

Pushing Your First NuGet Package

13 Jun

I absolutely love NuGet. Having a package management system in place makes it so much easier not only to use shared code, but it also makes it easier to share. For instance, say you had a class or assembly that you think other people might find useful. The old way to do it was to pick one of the many project-hosting sites, build a deployable assembly, upload the assembly, and hope people found it. And even if someone did find it, you were either limited in how you could integrate with their project or you had to include long, drawn out integration instructions. But no longer!

With NuGet, you have the ability to control your shared code’s integration with a project. You can add files to the directory tree, transform the web.config, or even run a PowerShell script. The PowerShell script can even be used to add commands to the Package Console, so your shared code can include tooling along with it. But for your first foray into the world of NuGet, you’ll probably want to start as simply as possible. So that’s the scenario I’ll cover here—packaging a single file.

If you’re following along and would like an example, the current source for my actual first NuGet package, AutoMapperPagedList, is on GitHub.

Step 1: Write Your Code

You’re on your own here.

Step 2: Build Your Package Folder Structure

Create a folder for your package. Inside that folder, create another folder named ‘content’. Anything that goes inside content will be placed in the root of the project folder, so inside that content folder you can simulate the folder structure for any files you want to include in the project. In this instance, I have a single class I’d like to include in the Models folder, so I created the folder structure [package root]/content/Models and placed my file there.
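For a package like this one, the resulting layout ends up looking something like the following (the class file name is illustrative):

```
[package root]/
  [package root].nuspec
  content/
    Models/
      AutoMapperPagedList.cs
```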


Step 3: Build Your NuSpec File

The glue that holds it all together is the NuSpec file. This XML file contains all the package information that NuGet needs in order to identify, share and deploy your package. For this step, you’ll need to install nuget.exe. If you have NuGet installed, you probably already have it. Run nuget in the command line to see if you have it, and if you do, run nuget update to make sure you have the latest version.

Once you have the NuGet executable straightened up, navigate to your package’s root folder in the command line. Run nuget spec [projectname] to create the .nuspec file, and open it up in a text editor. For this package, only a minimal amount of information is needed:

<?xml version="1.0"?>
<package xmlns="">
  <metadata>
    <id>AutoMapperPagedList</id>
    <version>1.0</version>
    <authors>Dave Cowart</authors>
    <owners>Dave Cowart</owners>
    <description>Adds an implementation of PagedList that uses AutoMapper to emit ViewModels</description>
    <tags>PagedList, AutoMapper, ASP.NET</tags>
    <dependencies>
      <dependency id="AutoMapper" version="1.0" />
    </dependencies>
  </metadata>
</package>

In this case, the package has a dependency on another package—AutoMapper. The id used in the dependency element is the same as the name used in AutoMapper’s NuSpec file. That id is used as the name pretty much throughout NuGet, so it’s easy to find if you need it. Just make sure that your chosen id is unique, descriptive, and free of any crazy special characters.

Step 4: Package It Up

Head back to the command line and run the command nuget pack [packagename].nuspec. This will create the .nupkg file you’ll need to upload to NuGet.

Step 5: Upload to NuGet

Before you actually upload to the NuGet library, you’ll need to register and get your API key. Register for an account at http://nuget.org and go to My Account. Copy the access key and head back to the command line once again. Run nuget setApiKey [apikey] to set your API key (you’ll only need to do this one time, even if you’re creating multiple packages).

Once your API key is set, run nuget push [packagename].nupkg. This will upload your package to the library, and it’ll be available in just a couple minutes. To check on it, go back to the site and go to Contribute > Manage My Packages. From this page, you’ll get a list of all the packages you’ve shared, along with the total number of reviews and downloads.

And that’s it! The first one can be the hardest, mostly because it’s unfamiliar and requires setting up an account and your API key. But once all that’s in place, it’s simple to create additional packages, and even simpler to push updates to an existing package (just make sure to increment your version number in the NuSpec file). Be sure to check out the source for AutoMapperPagedList if you have any questions, and if you feel like giving it a try in your projects, let me know!

P.S. – For a package that includes a web.config transform, you can check out the source for RazorScriptManager, another package I created that I’ll be blogging about soon.

Named Sections in Razor

10 Jun

This past week I had to build out a mostly-HTML site in MVC3. Since there wasn’t anything challenging on the backend, I decided to go all out and see how DRY I could make my view code in Razor, and to see if I ran into anything I couldn’t do that I was able to do with the WebForms view engine. The first thing that I ran into (that I didn’t know how to do) was to reproduce the same functionality as ContentPlaceHolder. Fortunately, Named Sections fit the bill perfectly.

Named Sections allow you to specify extra areas in your layout file by calling RenderSection(). These areas have a name (obviously) and can be marked as required or optional. In your view, you simply wrap the view code for a section inside Razor tags, like this:


<p>Page Content</p>

@section footer {
  <div>Footer content</div>
}

In a typical _Layout.cshtml file, you’ll have the basic HTML structure of your site and the Razor tag @RenderBody(). And in a View that uses this layout, you’ll have your page content in the root of the document. What I didn’t understand was that MVC is essentially treating @RenderBody() as @RenderSection("body") and wrapping the primary (un-nested) content of your View file in @section body { }. The idea was to make the 90% use-case scenario as easy as possible, and they definitely accomplished that goal.

But what about default content in a section? I don’t want to have to specify the same footer code on every single view in my site, just so I can override it on one page. The easy way is to call IsSectionDefined() in the layout to see if the view contains the section, but that requires wrapping a (potentially large) section of view code in an if statement.
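In other words, something like this in the layout (using the footer section from the earlier example):

```cshtml
@if (IsSectionDefined("footer")) {
    @RenderSection("footer")
}
else {
    <div>Default footer content</div>
}
```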

However, it is possible to extend RenderSection to take a default content parameter. Fortunately Phil Haack (who knows a little bit about how MVC3 works) covered this in a blog post about layout sections, so I don’t have to. Put simply, it’s possible to write an extension method that takes a Razor block of HTML. It’s a little convoluted, but in the right scenario it can work great.
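A sketch of that extension method, along the lines of Phil Haack’s approach (treat the details as approximate – see his post for the real thing):

```csharp
using System;
using System.Web.WebPages;

public static class SectionExtensions {
    //renders the named section if the view defined it;
    //otherwise renders the supplied default content
    public static HelperResult RenderSection(this WebPageBase page, string name,
            Func<object, HelperResult> defaultContents) {
        if (page.IsSectionDefined(name)) {
            return page.RenderSection(name);
        }
        return defaultContents(null);
    }
}
```

In the layout, the default content is passed as an inline Razor template: @this.RenderSection("footer", @<div>Default footer content</div>).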

For me, Named Sections have turned out to be incredibly useful, and will definitely save me plenty of time in the future.