Archive | C Sharp

Inside RazorScriptManager

22 Jun

When you add the RazorScriptManager NuGet package to your project, three things happen: Script.cshtml is added to /App_Code, RazorScriptManager.cs is added to /Handlers, and a web.config transform is performed. The first file provides the Razor helper methods used in your views. The second provides the HttpHandler that performs the combining/compression and responds to requests for scripts.axd. The web.config transform adds a couple of settings and registers the HttpHandler.


There are four Razor helpers included in the NuGet package: two for adding script references, and two for writing out the corresponding script tags. One of each type is provided for CSS and JavaScript.

Inside the two Add methods (AddCss() and AddJavaScript()), a new ScriptInfo object is created for the referenced script. That ScriptInfo object contains the script type, local path, CDN path, and whether the script is used site-wide. The ScriptInfo object is then added to a List<ScriptInfo> that’s kept in Session.

@helper AddJavaScript(string localPath, string cdnPath = null, bool siteWide = false) {
	var scriptType = ScriptType.JavaScript;
	//create a session key specifically for javascript ScriptInfo objects
	var key = "__rsm__" + scriptType.ToString();
	//if the List doesn't exist, create it
	if (Session[key] == null) {
		Session[key] = new List<ScriptInfo>();
	}
	//pull out the current (or new) list - it may already have other ScriptInfo objects
	var scripts = Session[key] as List<ScriptInfo>;
	//add the current ScriptInfo
	scripts.Add(new ScriptInfo(Server.MapPath(localPath), cdnPath, scriptType, siteWide));
	//put the list back in Session
	Session[key] = scripts;
}

In the Output methods (OutputCss() and OutputJavaScript()), the List<ScriptInfo> is extracted from Session. Based on web.config settings, a list of CDN-hosted scripts may be extracted. The helper then writes out <script> or <link> tags for the CDN-hosted scripts (if any) and for the HttpHandler path. An MD5 hash is generated from the filenames of the referenced local scripts and appended to the HttpHandler path. This hash is used to cache the combined/compressed output in the HttpHandler, and will be explained in more detail in that section.

@helper OutputJavaScript() {
	var scriptType = ScriptType.JavaScript;
	//create a session key specifically for javascript ScriptInfo objects
	var key = "__rsm__" + scriptType.ToString();
	//if no scripts have been added, don't do anything
	if (Session[key] == null) { return; }
	//pull out the current list from Session
	var scripts = Session[key] as List<ScriptInfo>;
	var cdnScripts = new List<ScriptInfo>();
	//if the web.config says to use CDN-hosted scripts, extract them from the list into cdnScripts
	if (bool.Parse(System.Configuration.ConfigurationManager.AppSettings["UseCDNScripts"])) {
		//get all scripts without a CDN path
		var localScripts = scripts.Where(s => string.IsNullOrWhiteSpace(s.CDNPath)).ToList();
		//get all scripts that aren't local-only scripts
		cdnScripts = scripts.Except(localScripts).ToList();
		//put the local scripts back into Session (CDN scripts are handled here, not in the HttpHandler)
		Session[key] = localScripts;
	}

	//write out the CDN scripts to the response
	foreach (var cdnScript in cdnScripts) {
<script type="text/javascript" src="@cdnScript.CDNPath"></script>
	}

	//generate a unique hash based on the filenames
	var hash = HttpUtility.UrlEncode(RazorScriptManager.GetHash(scripts));
	//write out a script tag for the HttpHandler using the script type and hash
<script type="text/javascript" src="/scripts.axd?type=@scriptType.ToString()&hash=@hash"></script>
}


The class file for the HttpHandler also contains the definitions for ScriptInfo, ScriptType and ScriptInfoComparer. These classes represent a script reference, the type of script, and a way of comparing two scripts. The comparer is used later to eliminate duplicate script references (e.g. if you reference the same jQuery file in both your layout and a partial view, only one reference will be used). The RazorScriptManager class itself (which is the actual HttpHandler) provides an instance method for responding to requests (ProcessRequest()) and a static method for generating a hash (GetHash()). GetHash() works by appending the distinct list of script paths into a single string, then generating a standard MD5 hash of that string.

public static string GetHash(IEnumerable<ScriptInfo> scripts) {
	var input = string.Join(string.Empty, scripts.Select(s => s.LocalPath).Distinct());
	var hash = System.Security.Cryptography.MD5.Create().ComputeHash(Encoding.ASCII.GetBytes(input));
	var sb = new StringBuilder();
	//hex-encode each byte of the hash
	for (int i = 0; i < hash.Length; i++) {
		sb.Append(hash[i].ToString("X2"));
	}
	return sb.ToString();
}
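The ScriptInfoComparer mentioned above lives in the same file; a minimal sketch of it might look like this (the exact comparison is an assumption on my part – I'm treating two references as duplicates when they point at the same local file):

```csharp
//sketch only: considers two ScriptInfo references equal when their
//LocalPath values match, so duplicate references collapse to one entry
public class ScriptInfoComparer : IEqualityComparer<ScriptInfo> {
	public bool Equals(ScriptInfo x, ScriptInfo y) {
		return string.Equals(x.LocalPath, y.LocalPath, StringComparison.OrdinalIgnoreCase);
	}

	public int GetHashCode(ScriptInfo obj) {
		return obj.LocalPath.ToUpperInvariant().GetHashCode();
	}
}
```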

ProcessRequest() is a little more involved. First, it determines the type of script being requested from the querystring value. Based on the type, it sets the response’s content type appropriately.

var scriptType = (ScriptType)Enum.Parse(typeof(ScriptType), context.Request.Params["type"]);

switch (scriptType) {
	case ScriptType.JavaScript:
		context.Response.ContentType = @"application/javascript";
		break;
	case ScriptType.Stylesheet:
		context.Response.ContentType = @"text/css";
		break;
}
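For reference, the ScriptType enum used here is presumably nothing more than the two supported flavors (a sketch, not the actual source):

```csharp
public enum ScriptType {
	JavaScript,
	Stylesheet
}
```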

After setting the content type, the method checks the application cache to see if a combined/compressed script has already been generated for that particular set of files. To do this, it uses the hash as a cache key. If the script output already exists, the method immediately returns the cached output and no further processing is required.

var hashString = context.Request.Params["hash"];
if (!String.IsNullOrWhiteSpace(hashString)) {
	var result = cache[HttpUtility.UrlDecode(hashString)] as string;
	if (!string.IsNullOrWhiteSpace(result)) {
		//cache hit - write the stored output and skip all further processing
		context.Response.Write(result);
		return;
	}
}

If the output wasn’t already cached, the method pulls the List<ScriptInfo> for the current type out of Session. It then obtains the distinct scripts from the list using the ScriptInfoComparer and reorders them based on whether or not they were marked as site-wide. Site-wide scripts (like jQuery) need to be loaded first, so that other scripts can take advantage of their methods. At this point, the contents of each file are appended to a single string. This string will become the combined and compressed single script that’s returned by the handler.

var scripts = context.Session["__rsm__" + scriptType.ToString()] as IEnumerable<ScriptInfo>;
context.Session["__rsm__" + scriptType.ToString()] = null;
if (scripts == null) return;
var scriptbody = new StringBuilder();

scripts = scripts.Distinct(new ScriptInfoComparer());

//add sitewide scripts FIRST, so they're accessible to local scripts
var siteScripts = scripts.Where(s => s.SiteWide);
var localScripts = scripts.Where(s => !s.SiteWide).Except(siteScripts, new ScriptInfoComparer());
var scriptPaths = siteScripts.Concat(localScripts).Select(s => s.LocalPath);
var minify = bool.Parse(ConfigurationManager.AppSettings["CompressScripts"]);

foreach (var script in scriptPaths) {
	if (!String.IsNullOrWhiteSpace(script)) {
		using (var file = new System.IO.StreamReader(script)) {
			var fileContent = file.ReadToEnd();
			if (scriptType == ScriptType.Stylesheet) {
				//rewrite relative url() references so they still resolve from the handler's path
				var fromUri = new Uri(context.Server.MapPath("~/"));
				var toUri = new Uri(new FileInfo(script).DirectoryName);
				fileContent = fileContent.Replace("url(", "url(/" + fromUri.MakeRelativeUri(toUri).ToString() + "/");
			}
			//when not minifying, label each file's contents with its path
			if (!minify) scriptbody.AppendLine(String.Format("/* {0} */", script));
			scriptbody.AppendLine(fileContent);
		}
	}
}

If CompressScripts is set to true in the web.config, the handler runs the appropriate minifier for the current script type. Side note: there’s some interesting asymmetry within the YUI Compressor: for JavaScript the compress method is an instance method, while for CSS the compress method is a static method.

string scriptOutput = scriptbody.ToString();
if (minify) {
	switch (scriptType) {
		case ScriptType.JavaScript:
			var jscompressor = new Yahoo.Yui.Compressor.JavaScriptCompressor(scriptOutput);
			scriptOutput = jscompressor.Compress();
			break;
		case ScriptType.Stylesheet:
			scriptOutput = Yahoo.Yui.Compressor.CssCompressor.Compress(scriptOutput);
			break;
	}
}

Finally, save the output to the cache (for next time!) and send the output as the response.

var hash = GetHash(scripts);
cache[hash] = scriptOutput;
context.Response.Write(scriptOutput);


Two appSettings are added to the web.config: UseCDNScripts and CompressScripts. The first determines whether or not the Output Razor helpers write out tags for the CDN paths. The second determines whether or not the HttpHandler compresses the combined output before returning the response.

  <add key="UseCDNScripts" value="false" />
  <add key="CompressScripts" value="false" />

The HttpHandler is also registered in the web.config. One version for IIS6, another for IIS7.

    <add verb="*" path="scripts.axd" type="RazorScriptManager.RazorScriptManager"/>
    <add name="ScriptManager" verb="*" path="scripts.axd" type="RazorScriptManager.RazorScriptManager"/>
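If you’re curious where those lines land, the IIS6 version goes in <system.web>/<httpHandlers> and the IIS7 version in <system.webServer>/<handlers> – sketched below (any extra attributes the transform might add are omitted):

```xml
<system.web>
  <httpHandlers>
    <add verb="*" path="scripts.axd" type="RazorScriptManager.RazorScriptManager"/>
  </httpHandlers>
</system.web>
<system.webServer>
  <handlers>
    <add name="ScriptManager" verb="*" path="scripts.axd" type="RazorScriptManager.RazorScriptManager"/>
  </handlers>
</system.webServer>
```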

Using RazorScriptManager

20 Jun

Recently I was working on a personal project in ASP.NET MVC3 and realized I didn’t have a good way to manage CSS and JavaScript files. Most of the script managers available were designed for WebForms, and while they may work fine in MVC, I felt dirty trying to include a user control. I wanted the standard script manager functionality – combining/compressing scripts and caching of the combined/compressed output. I also wanted something that played nicely with Razor, and I wanted it to be able to take CDN-hosted scripts into account as well. And I wanted something I could drop in with NuGet.

I spent a little bit of time looking for something that met all those needs, but (after an admittedly short search) never found what I was looking for. Eventually I decided it might be faster to just scratch my own itch, and definitely more educational.

One of my design goals was to have the API be as simple as possible. To achieve this, I used optional parameters to allow the user to make full use of named parameters in order to avoid a pile of method overloads. To add a JavaScript file to the script manager, you simply call:

@Script.AddJavaScript(localPath: "~/Scripts/jquery-1.6.1.js", cdnPath: "", siteWide: true)

Then to write out the combined/compressed JavaScript reference, just call this on your layout page:

@Script.OutputJavaScript()
Calling AddJavaScript() will add the file reference to the collection of scripts to be managed. Based on a web.config setting, the script manager will use either the local path or the optional CDN path, if one exists. The siteWide parameter ensures that the script is loaded prior to other scripts. In this example, I’m referencing jQuery, so I want to make sure jQuery is loaded before a page-specific script that depends on jQuery. For the page-specific script, I’d call something like:

@Script.AddJavaScript(localPath: "~/Scripts/home.js")
Because the default values for the parameters are set to the most-common use case, the typical script reference is about as simple as it can get. Two web.config settings are also used. Setting UseCDNScripts to false will tell the manager to only use local files, even if CDN paths are provided (useful during development). Setting CompressScripts to false will tell the manager not to compress scripts. This is also useful during development, because debugging a compressed script is a total nightmare.

The output for the two files is provided through an HttpHandler. If you’re looking in your browser’s development tools, you’ll notice calls to /scripts.axd. This is the HttpHandler, and the two querystring values passed in tell the script manager which type of output you want (CSS/JS) and the hash value of the combined scripts. The handler then returns the output as if it were a single file. Within the returned output (if compression is disabled), each individual file is preceded by a comment providing the full file path so you can easily track down the original files.

For a simple example project, check out the demo application on GitHub. Or just install the NuGet package and check it out – it’s only two files and a pair of web.config settings. If you’re only interested in how to use it, you can stop here. If you want to know how it works, check back in the next day or so for the next post, where I cover the internals of the project. Until then, feel free to look at the source code on GitHub.

Using the Range Attribute With Decimal or DateTime

15 Jun

Learned something cool about System.ComponentModel.DataAnnotations.RangeAttribute today – it can actually be used with any type. It only includes numeric constructors for Int32 and Double, which are probably the two most common uses, but it also includes a constructor that takes a type and two strings as parameters:

public RangeAttribute(
  Type type,
  string minimum,
  string maximum
)

The one caveat is that the type must implement the IComparable interface. Typically you wouldn’t be using the Range attribute to decorate a custom type, but as long as your type implemented IComparable, you’d be just fine. However, in today’s scenario we were just trying to validate a Decimal value in a model in an MVC2 application. Our solution looked something like this:

[Range(typeof(Decimal),"0", "9999999")]
public decimal Rate { get; set; }

The downside is that the minimum and maximum values have to be passed as strings, but since they have to be constants anyway (to work with the attribute), that’s not much of a setback. This can easily be used with dates as well – just be sure to write the date in a format the target type can parse.
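For example, a DateTime property can be validated the same way (the property name and date range here are just for illustration):

```csharp
[Range(typeof(DateTime), "1/1/2011", "12/31/2011")]
public DateTime PublishDate { get; set; }
```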

Pushing Your First NuGet Package

13 Jun

I absolutely love NuGet. Having a package management system in place makes it so much easier not only to use shared code, but it also makes it easier to share. For instance, say you had a class or assembly that you think other people might find useful. The old way to do it was to pick one of the many project-hosting sites, build a deployable assembly, upload the assembly, and hope people found it. And even if someone did find it, you were either limited in how you could integrate with their project or you had to include long, drawn out integration instructions. But no longer!

With NuGet, you have the ability to control your shared code’s integration with a project. You can add files to the directory tree, transform the web.config, or even run a PowerShell script. The PowerShell script can even be used to add commands to the Package Console, so your shared code can include tooling along with it. But for your first foray into the world of NuGet, you’ll probably want to start as simply as possible. So that’s the scenario I’ll cover here—packaging a single file.

If you’re following along and would like an example, the current source for my actual first NuGet package is on GitHub.

Step 1: Write Your Code

You’re on your own here.

Step 2: Build Your Package Folder Structure

Create a folder for your package. Inside that folder, create another folder named ‘content’. Anything that goes inside content will be placed in the root of the project folder, so inside that content folder you can simulate the folder structure for any files you want to include in the project. In this instance, I have a single class I’d like to include in the Models folder, so I created the folder structure [package root]/content/Models and placed my file there.


Step 3: Build Your NuSpec File

The glue that holds it all together is the NuSpec file. This XML file contains all the package information that NuGet needs in order to identify, share and deploy your package. For this step, you’ll need to install nuget.exe. If you have NuGet installed, you probably already have it. Run nuget in the command line to see if you have it, and if you do, run nuget update to make sure you have the latest version.

Once you have the NuGet executable straightened out, navigate to your package’s root folder in the command line. Run nuget spec to create the .nuspec file, and open it up in a text editor. For this package, only a minimal amount of information is needed:

<?xml version="1.0"?>
<package xmlns="">
  <metadata>
    <id>AutoMapperPagedList</id>
    <authors>Dave Cowart</authors>
    <owners>Dave Cowart</owners>
    <description>Adds an implementation of PagedList that uses AutoMapper to emit ViewModels</description>
    <tags>PagedList, AutoMapper, ASP.NET</tags>
    <dependencies>
      <dependency id="AutoMapper" version="1.0" />
    </dependencies>
  </metadata>
</package>

In this case, the package has a dependency on another package—AutoMapper. The id used in the dependency element is the same as the name used in AutoMapper’s NuSpec file. That id is used as the name pretty much throughout NuGet, so it’s easy to find if you need it. Just make sure that your chosen id is unique, descriptive, and free of any crazy special characters.

Step 4: Package It Up

Head back to the command line and run the command nuget pack [packagename].nuspec. This will create the .nupkg file you’ll need to upload to NuGet.

Step 5: Upload to NuGet

Before you actually upload to the NuGet library, you’ll need to register and get your API key. Register for an account at http://nuget.org and go to My Account. Copy the access key and head back to the command line once again. Run nuget setApiKey [apikey] to set your API key (you’ll only need to do this one time, even if you’re creating multiple packages).

Once your API key is set, run nuget push [packagename].nupkg. This will upload your package to the library, and it’ll be available in just a couple minutes. To check on it, go back to nuget.org and go to Contribute > Manage My Packages. From this page, you’ll get a list of all the packages you’ve shared, along with the total number of reviews and downloads.

And that’s it! The first one can be the hardest, mostly because it’s unfamiliar and requires setting up an account and your API key. But once all that’s in place, it’s simple to create additional packages, and even simpler to push updates to an existing package (just make sure to increment your version number in the NuSpec file). Be sure to check out the source for AutoMapperPagedList if you have any questions, and if you feel like giving it a try in your projects, let me know!

P.S. – For a package that includes a web.config transform, you can check out the source for RazorScriptManager, another package I created that I’ll be blogging about soon.

Named Sections in Razor

10 Jun

This past week I had to build out a mostly-HTML site in MVC3. Since there wasn’t anything challenging on the backend, I decided to go all out and see how DRY I could make my view code in Razor, and to see if I ran into anything I couldn’t do that I was able to do with the WebForms view engine. The first thing that I ran into (that I didn’t know how to do) was to reproduce the same functionality as ContentPlaceHolder. Fortunately, Named Sections fit the bill perfectly.

Named Sections allow you to specify extra areas in your layout file by calling RenderSection(). These areas have a name (obviously) and can be marked as required or optional. In your view, you simply wrap the view code for a section inside Razor tags, like this:


<p>Page Content</p>

@section footer {
  <div>Footer content</div>
}

In a typical _Layout.cshtml file, you’ll have the basic HTML structure of your site and the Razor tag @RenderBody(). And in a View that uses this layout, you’ll have your page content in the root of the document. What I didn’t understand was that MVC is essentially treating @RenderBody() as @RenderSection(“body”) and wrapping the primary (un-nested) content of your View file in @section body { }. The idea was to make the 90% use-case scenario as easy as possible, and they definitely accomplished that goal.

But what about default content in a section? I don’t want to have to specify the same footer code on every single view in my site, just so I can override it on one page. The easy way is to call IsSectionDefined() in the layout to see if the view contains the section, but that requires wrapping a (potentially large) section of view code in an if statement.

However, it is possible to extend RenderSection to take a default content parameter. Fortunately Phil Haack (who knows a little bit about how MVC3 works) covered this in a blog post about layout sections, so I don’t have to. Put simply, it’s possible to write an extension method that takes a Razor block of HTML. It’s a little convoluted, but in the right scenario it can work great.
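To give a flavor of the approach, the extension method boils down to something like this (paraphrased from memory of Haack’s post, so treat the exact signature as an approximation):

```csharp
public static HelperResult RenderSection(this WebPageBase page, string name,
		Func<dynamic, HelperResult> defaultContents) {
	//use the view's section if it defined one; otherwise fall back to the default markup
	if (page.IsSectionDefined(name)) {
		return page.RenderSection(name);
	}
	return defaultContents(null);
}
```

In the layout you’d then call something like @this.RenderSection("footer", @<div>Default footer</div>).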

For me, Named Sections have turned out to be incredibly useful, and will definitely save me plenty of time in the future.


9 Jun

If you’re just looking for an example of how to use AutoMapperPagedList, drop down to the bottom and look at the last code sample.

A couple months ago (while at MIX 11, actually) I was playing with some of the new MVC3 Tooling, including EF4.1. To play with it, I was trying to build a simple blog, and for no real reason at all decided to use both AutoMapper and Scott Guthrie’s PagedList. For those unfamiliar with either, AutoMapper is a great way to transform your data access objects into view models, and PagedList makes it really simple to hook up an IQueryable to server-side paging.

First, an explanation of AutoMapper. In creating AutoMapper, Jimmy Bogard provided a super-useful way to automatically map one object to another type. The mapping is primarily based on conventions, so if you name the properties on the classes properly, you only have to do minimal configuration. For more details, see the examples on AutoMapper’s CodePlex site. For my simple blog, I wanted to use AutoMapper to flatten my Post model into a PostViewModel. Because PostViewModel was set up to easily work with AutoMapper, the only configuration I had to do was make sure that this line was called in my Global.asax:

AutoMapper.Mapper.CreateMap<Post, PostViewModel>();

Then when I want to convert a Post to a PostViewModel, I can just call this:

AutoMapper.Mapper.Map<Post, PostViewModel>(post);

Next, an explanation of PagedList. The Gu created a helpful class and extension methods for working with server-side paging. The extension method extends an IQueryable<T> and converts it to a PagedList<T>. PagedList<T> inherits from List<T>, so it has all the normal features of a List<T>, but it adds the interface IPagedList:

public interface IPagedList {
	int TotalCount { get; set; }
	int PageIndex { get; set; }
	int PageSize { get; set; }
	bool IsPreviousPage { get; }
	bool IsNextPage { get; }
}

The properties added by IPagedList can be used in your view to easily control Previous/Next buttons. Additionally, the constructor of PagedList calls Skip() and Take() on the IQueryable before calling ToList() and adding the resultant items to itself, meaning that the execution of the IQueryable happens inside of PagedList. This is important because AutoMapper won’t work as part of deferred execution, which is what is going on with the IQueryable. This means that trying to use AutoMapper to map an IQueryable<Post> to an IQueryable<PostViewModel> just can’t happen.

So since I can’t use AutoMapper to send objects to PagedList, I had to use AutoMapper inside PagedList, after the IQueryable was executed and returned as a List. However, I didn’t want to muck around too much with the regular PagedList class, so I created a new class that inherits from it instead. This new class is called (quite creatively) MappedPagedList. It works the exact same way as PagedList, but takes one additional generic type and one additional parameter. It requires two generic types because AutoMapper needs a source type and destination type. The extra parameter, however, is the key. As parameter types go, it’s a doozy:

public static MappedPagedList<TSource, TOutput> ToPagedList<TSource, TOutput>(this IQueryable<TSource> source, int index, Func<IEnumerable<TSource>, IEnumerable<TOutput>> mapper, int pageSize = 10) {
	return new MappedPagedList<TSource, TOutput>(source, index, pageSize, mapper);
}

Let me break it down a little.
Func<IEnumerable<TSource>, IEnumerable<TOutput>>
looks intimidating, but it’s really just a delegate for a call to AutoMapper.Map<TSource, TOutput>(). In simpler terms, the parameter takes a method that has one parameter of type IEnumerable<TSource> and returns IEnumerable<TOutput>. The best way to understand it is probably to see an example of the method being called:

IQueryable<Post> posts = context.Posts.OrderByDescending(p => p.Timestamp).AsQueryable();
PagedList<PostViewModel> pagedList = posts.ToPagedList<Post, PostViewModel>(2, Mapper.Map<IEnumerable<Post>, IEnumerable<PostViewModel>>, 10);
return View(pagedList);

Because MappedPagedList<TSource, TOutput> inherits from PagedList<TOutput> (and PagedList<TOutput> is the only part that we actually need in the view), the View only needs to be passed an instance of PagedList<TOutput>. The magic happens inside the constructor of MappedPagedList, where it uses the passed-in Func to map the materialized page of TSource items to a List<TOutput>.
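To make that concrete, here’s a sketch of what the constructor’s core logic amounts to (assumed from the behavior described above, not copied from the actual source, and glossing over how it satisfies PagedList<TOutput>’s own constructor):

```csharp
//inside MappedPagedList<TSource, TOutput> : PagedList<TOutput>
public MappedPagedList(IQueryable<TSource> source, int index, int pageSize,
		Func<IEnumerable<TSource>, IEnumerable<TOutput>> mapper) {
	PageIndex = index;
	PageSize = pageSize;
	TotalCount = source.Count();
	//ToList() executes the query here, so the mapper only ever sees in-memory objects
	var page = source.Skip(index * pageSize).Take(pageSize).ToList();
	this.AddRange(mapper(page));
}
```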

If you’ve read this far, you’re probably getting desperate for the part where I say “…and here’s how you use it in your site”, so here you go.

public ViewResult Index(int page = 0) {
	int pageSize = 10;
	IQueryable<Post> posts = context.Posts.OrderByDescending(p => p.Timestamp).AsQueryable();
	PagedList<PostViewModel> pagedList = posts.ToPagedList<Post, PostViewModel>(page, Mapper.Map<IEnumerable<Post>, IEnumerable<PostViewModel>>, pageSize);
	return View(pagedList);
}

@if (Model.IsPreviousPage) {
	@Html.ActionLink("Previous", "Index", new { page = Model.PageIndex - 1 })
}
@(Model.PageIndex + 1) of @(Model.TotalCount / Model.PageSize)
@if (Model.IsNextPage) {
	@Html.ActionLink("Next", "Index", new { page = Model.PageIndex + 1 })
}

This will generate a previous button, a page indicator, and a next button. The previous and next buttons are hidden if there isn’t a previous or next page. The controller takes page as an optional parameter and passes that to the MappedPagedList to use in Skip().

So why would you want to use it? If you need to use AutoMapper to send view models to your view, but want super-easy paging support, this single file will save you a bunch of time and effort. I’ve put it on NuGet, so feel free to pull it down and give it a try. It’s a single file, and the source is hosted on GitHub, so feel free to clone it, fork it, or do whatever. I also have an example MVC project on GitHub if you need a full-site example.

Named Routes in ASP.NET MVC 3

8 Jun

I’m a recent convert to named routes. Using a named route to create a link or URL gives you a level of explicitness that’s comforting in most situations and a lifesaver in others. In their simplest form, though, named routes lead to DRYer routing and can save you some serious maintenance headaches.

All routes are effectively named routes. Using a named route simply means specifying the route by name instead of allowing the routing engine to determine which one to use. This means that a route you intend to reference by name is created the same way as any other route:

routes.MapRoute(
	name: "Post",
	url: "Post/{id}",
	defaults: new { controller = "Posts", action = "Details", id = UrlParameter.Optional }
);

If you were to use this as a typical action link, you’d specify the link text, action, controller and any route values (in this case id) as parameters in Html.ActionLink(), like this:

@Html.ActionLink("Show", "Details", "Posts", new { id = 4 }, null)

The downside of this technique is that if you were to change a controller name, change an action name or move an action, your link would break. And not just this link, but any ActionLink that points to that combination of controller and action. The other danger is that you have no guarantees as to which route will be pulled from the routing table. It’s easy to envision a scenario where you have to figure out which of 50 routes in the routing table is being used. With named routes, you avoid both of these problems, and as a side benefit your link code is shorter too. To use a named route link, you only need to specify the link text, the name of the route and any route values as parameters, like this:

@Html.RouteLink("Show", "Post", new { id = 4 })

This link is now protected against future change, takes up a little less space, and leaves no doubts as to which route will be utilized. The one negative is that there is now an extra step to find the action and controller used by the route, but in a sensibly-structured site it shouldn’t take a wild guess to figure it out. In my mind these benefits vastly outweigh the slight obfuscation. With named routes, you’ll always know what you get.