Silverlight 5 Animations on the Composition Thread

Being a passionate front-end developer, I am constantly on a quest to create the smoothest and most intuitive user experience possible. That's why I was really excited when I heard about some of the performance features and enhancements coming in Silverlight 5. One of the most exciting is the concept of a composition thread: an idea borrowed from Windows Phone 7 that allows certain elements and animations to be offloaded to the GPU and thus run independently of the main UI thread.

Anybody who has ever had to pack the visual tree chock-full of elements in Silverlight knows that performance starts to suffer. A major pet peeve of mine is when the loading indicator stops animating when the UI thread is tied up with other processing (such as adding elements to a DataGrid, and rendering the individual rows). With the Silverlight 5 beta ready to go, I figured I’d try my hand at putting that composition thread to work.

Setting Everything Up

First, I'd like to briefly outline what it takes to get up and running with the Silverlight 5 beta, and then what changes are necessary to take advantage of the composition thread.

Step 1. Download the Silverlight 5 beta SDK and tools for Visual Studio 2010. Make sure you have Service Pack 1 installed first.

Step 2. If you are working with an existing solution, target Silverlight 5 in all of your Silverlight projects.

Step 3. Now that your projects are targeting Silverlight 5, it's time to turn on GPU acceleration. In the HTML (or ASPX) page that hosts your Silverlight object, make sure the enableGpuAcceleration param is set to true:

[xml]<param name="enableGpuAcceleration" value="true" />[/xml]

At this point, your project is eligible to use the composition thread, but no 2D elements or animations will take advantage of it by default; there is still work to be done, and there are currently some very tricky gotchas that can occur along the way. I will talk about these quirks using a custom BusyIndicator control as an example.

Configure BusyIndicator for GPU Acceleration

Let’s start by showing the xaml for a stripped down version of the Silverlight Control Toolkit’s BusyIndicator:

[xml]
<Style TargetType="local:BusyIndicator">
<Setter Property="IsTabStop" Value="False"/>
<Setter Property="OverlayStyle">
<Setter.Value>
<Style TargetType="Rectangle">
<Setter Property="Fill" Value="Black"/>
<Setter Property="Opacity" Value="0.5"/>
</Style>
</Setter.Value>
</Setter>
<Setter Property="ProgressBarStyle">
<Setter.Value>
<Style TargetType="ProgressBar">
<Setter Property="IsIndeterminate" Value="True"/>
<Setter Property="Height" Value="15"/>
<Setter Property="Margin" Value="8,0,8,8"/>
</Style>
</Setter.Value>
</Setter>
<Setter Property="DisplayAfter" Value="00:00:00.1"/>
<Setter Property="HorizontalAlignment" Value="Stretch"/>
<Setter Property="VerticalAlignment" Value="Stretch"/>
<Setter Property="HorizontalContentAlignment" Value="Stretch"/>
<Setter Property="VerticalContentAlignment" Value="Stretch"/>
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="local:BusyIndicator">
<Grid>
<VisualStateManager.VisualStateGroups>
<VisualStateGroup x:Name="VisibilityStates">
<VisualStateGroup.Transitions>
<VisualTransition GeneratedDuration="0:0:0.3">
<VisualTransition.GeneratedEasingFunction>
<ExponentialEase EasingMode="EaseInOut"/>
</VisualTransition.GeneratedEasingFunction>
</VisualTransition>
</VisualStateGroup.Transitions>
<VisualState x:Name="Hidden">
<Storyboard>
<DoubleAnimation Duration="0" To="0" Storyboard.TargetProperty="(UIElement.Opacity)"
Storyboard.TargetName="overlay" />
<DoubleAnimation Duration="0" To="0" Storyboard.TargetProperty="(UIElement.Opacity)"
Storyboard.TargetName="busycontent" />
<ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)"
Storyboard.TargetName="overlay">
<DiscreteObjectKeyFrame KeyTime="0">
<DiscreteObjectKeyFrame.Value>
<Visibility>Collapsed</Visibility>
</DiscreteObjectKeyFrame.Value>
</DiscreteObjectKeyFrame>
</ObjectAnimationUsingKeyFrames>
<ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)"
Storyboard.TargetName="busycontent">
<DiscreteObjectKeyFrame KeyTime="0">
<DiscreteObjectKeyFrame.Value>
<Visibility>Collapsed</Visibility>
</DiscreteObjectKeyFrame.Value>
</DiscreteObjectKeyFrame>
</ObjectAnimationUsingKeyFrames>
</Storyboard>
</VisualState>
<VisualState x:Name="Visible">
<Storyboard>
<DoubleAnimation Duration="0" To="1" Storyboard.TargetProperty="(UIElement.Opacity)"
Storyboard.TargetName="busycontent" />
<DoubleAnimation Duration="0" To="0.5" Storyboard.TargetProperty="(UIElement.Opacity)"
Storyboard.TargetName="overlay" />
<ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)"
Storyboard.TargetName="overlay">
<DiscreteObjectKeyFrame KeyTime="0">
<DiscreteObjectKeyFrame.Value>
<Visibility>Visible</Visibility>
</DiscreteObjectKeyFrame.Value>
</DiscreteObjectKeyFrame>
</ObjectAnimationUsingKeyFrames>
</Storyboard>
</VisualState>
</VisualStateGroup>
<VisualStateGroup x:Name="BusyStatusStates">
<VisualState x:Name="Idle">
<Storyboard>
</Storyboard>
</VisualState>
<VisualState x:Name="Busy">
<Storyboard RepeatBehavior="Forever">
<DoubleAnimation Duration="0:0:1.5" From="-180" To="180"
Storyboard.TargetProperty="(UIElement.RenderTransform).(CompositeTransform.Rotation)"
Storyboard.TargetName="LoadingIcon"/>
</Storyboard>
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>
<Rectangle x:Name="overlay" Style="{TemplateBinding OverlayStyle}" />
<ContentPresenter x:Name="busycontent">
<Grid HorizontalAlignment="Center" VerticalAlignment="Center">
<Grid.Effect>
<DropShadowEffect ShadowDepth="0" BlurRadius="4"/>
</Grid.Effect>
<Image x:Name="LoadingIcon"
Source="/MyProject;component/Assets/Images/refresh-yellow.png" Stretch="None"
RenderTransformOrigin="0.5,0.5" Margin="0,2,10,0" HorizontalAlignment="Right"
VerticalAlignment="Center">
<Image.RenderTransform>
<CompositeTransform/>
</Image.RenderTransform>
</Image>
</Grid>
</ContentPresenter>
</Grid>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
[/xml]

The goal here is to have the LoadingIcon image spin while the VisualState is set to Busy, and we want that animation to be processed on the composition thread. The first step toward this goal is to set the CacheMode property on the desired element to "BitmapCache". This tells the framework that the element (and its subtree of elements, if any) should be cached on the GPU. The first thing I tried was setting the CacheMode on the LoadingIcon element, since that was the animation that I wanted to be as consistent and smooth as possible. To verify whether or not it worked, I used a very archaic approach: set IsBusy on the control to true, and then immediately invoke Thread.Sleep. In theory, if the animation is running on the composition thread, then a Thread.Sleep on the primary UI thread will not freeze the animation. Much to my dismay, the animation froze as soon as Thread.Sleep was executed. It turns out that the current Silverlight 5 beta has a bug in which an Image element will fail to cache on the GPU as expected. I received this answer by posting on the Silverlight 5 beta forum, which you can read about here.
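
For reference, here is a minimal sketch of that test; the button handler and the busyIndicator field are hypothetical stand-ins rather than code from the original project:

[csharp]
// Hypothetical test harness: if the spinner keeps animating while the UI thread
// sleeps, the animation is running on the composition thread.
// Requires: using System.Threading; using System.Windows;
private void TestCompositionThread_Click(object sender, RoutedEventArgs e)
{
    busyIndicator.IsBusy = true;

    // Queue the sleep so the VisualState transition gets a chance to start
    // before the UI thread is blocked.
    Dispatcher.BeginInvoke(() => Thread.Sleep(5000));
}
[/csharp]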

Along with this bug, there are some rules about CacheMode that must be followed for your animations to be eligible for GPU caching, and thus the composition thread. I had a quick correspondence with Gerhard Schneider from Microsoft today, and this is what he had to say:

BitmapCache is not [currently] working on Image elements. It’s a bug that we will still try to fix for the release. Other than that, here are roughly the rules for BitmapCache and independent animations (animations on the composition thread).

You can set BitmapCache on any UIElement (ignoring the bug on image element – and I believe media element). This will render that element’s subtree into a video memory off-screen surface that we can then compose with independent animations (independent animations = animations on composition thread). To make sure things are fast we also cache the tree behind and in front of the element that has BitmapCache set.
Note that BitmapCache has no additional benefit when being nested. Only the BitmapCache flag closest to the root is respected.

Regarding independent animations, we currently support transform, perspective, and opacity animations. However, under certain tree configurations, we sometimes have to disable them. For example if used under a complex clip (complex being non-rectangular), we disable independent animations and BitmapCache. The exact rules will be published when we release SL5 since some of this is still changing.

So I followed his advice, and made a few changes:

  • Moved the CacheMode property up to the parent ContentPresenter element.
  • Removed the DropShadow effect, because it is not eligible for caching.

Unfortunately, I still encountered a freezing animation on Thread.Sleep. After another email to Gerhard, he had the answer:

If the animation is targeting a property under the cached element, it has to invalidate the cache and you will not get an independent animation. You need to move the animation to the Grid element. This assumes that you are animating the CompositeTransform in the example below.

So the solution was to move the CompositeTransform to the ContentPresenter and point the VisualState animations at it instead. Finally, the animation continued to spin during Thread.Sleep! And this makes sense, because the entire element and its subtree are cached as a bitmap, so any animations or unsupported settings on children will invalidate the bitmap. I think this is a key point that other Silverlight 5 articles have failed to mention, and it can be a real gotcha if you're just starting out with this stuff.

So here are the modified parts of the BusyIndicator style that will run successfully on the composition thread. Notice that the DoubleAnimation now targets busycontent:

[xml]
<ControlTemplate TargetType="local:BusyIndicator">
<Grid>
<VisualStateManager.VisualStateGroups>
<VisualStateGroup x:Name="BusyStatusStates">
<!-- ... -->
<VisualState x:Name="Busy">
<Storyboard RepeatBehavior="Forever">
<DoubleAnimation Duration="0:0:1.5" From="-180" To="180"
Storyboard.TargetProperty="(UIElement.RenderTransform).(CompositeTransform.Rotation)"
Storyboard.TargetName="busycontent"/>
</Storyboard>
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>
<Rectangle x:Name="overlay" Style="{TemplateBinding OverlayStyle}" />
<ContentPresenter x:Name="busycontent" CacheMode="BitmapCache" RenderTransformOrigin="0.5,0.5">
<Grid Height="25" Width="25"
HorizontalAlignment="Center"
VerticalAlignment="Center">
<Image x:Name="LoadingIcon"
Source="/MyProject;component/assets/images/refresh-yellow.png"
Stretch="None" HorizontalAlignment="Right"
VerticalAlignment="Center">
</Image>
</Grid>
<ContentPresenter.RenderTransform>
<CompositeTransform />
</ContentPresenter.RenderTransform>
</ContentPresenter>
</Grid>
</ControlTemplate>
[/xml]

Taking full advantage of GPU Acceleration

Even though I was able to improve performance by moving some common animations to the composition thread, I am still wary of the work involved in taking advantage of this feature in a large application. I really think there should be an easier way to detect whether an element and its subtree are eligible for GPU caching, because there are quite a few limitations (not to mention bugs) that get in the way of implementation. Hopefully, by the time Silverlight 5 goes RTW, the bugs will be ironed out and all of the limitations will be fully documented.

Further Reading

There were quite a few articles and blog posts that helped me get started with GPU acceleration in Silverlight 5, and I strongly recommend reading through them if you are interested in this topic.

Silverlight 5: The Undisputed Champion for LOB Applications

In a previous post, I blogged about the struggles we’ve had with Silverlight that led me to consider WPF as an alternative for line of business applications. With the announcements made at today’s Silverlight Firestarter event about all the Silverlight 5 features, it seems like any incentive to make that switch to WPF has been eradicated. This post aims to take a look at some of those new features, and show how the gap is quickly closing between Silverlight 5 and WPF. Additionally, I’ll briefly dive into Silverlight’s exclusive features that ease development for LOB applications, yet are not present in WPF.

Performance!

Based on the Firestarter event, SL5 is cooking up even more improvements to the rendering pipeline that should alleviate scenarios where the visual tree gets overloaded. While exact details have yet to emerge, it seems that Silverlight will support an immediate graphics mode that will run rendering through the GPU. This was one of our largest blockers with Silverlight going forward, and it is a relief to hear that they have brought this WPF capability into the Silverlight realm.

DataBinding

If you've been working with Silverlight for a while, you probably know that certain databinding scenarios can be a real pain. You probably also know that WPF has databinding facilities that solve these problems with ease. Well, the pain ends soon, because Silverlight 5 now supports nearly all the features of its dying father, WPF.

One of my favorites is the Ancestor RelativeSource addition. As John Papa demonstrated at Firestarter today, there is a common scenario where bindings inside a DataTemplate don't match your current DataContext. More often than not, you are forced to replicate properties/commands/collections on your child view models in order to satisfy your binding requirements, which can lead to redundant and confusing code. With Ancestor RelativeSource, you can find the DataContext of a parent element that is higher up in the visual tree hierarchy, and bind directly to it.
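
To make that concrete, here is a hypothetical sketch of wiring such a binding up in code, assuming the Silverlight 5 binding API exposes FindAncestor, AncestorType, and AncestorLevel in the shape announced at the Firestarter; the deleteButton and DeleteCommand names are made up:

[csharp]
// deleteButton lives inside a DataTemplate, so its own DataContext is the row item;
// the binding below reaches the DataGrid's DataContext (the parent view model) instead.
var binding = new Binding("DataContext.DeleteCommand")
{
    RelativeSource = new RelativeSource
    {
        Mode = RelativeSourceMode.FindAncestor,
        AncestorType = typeof(DataGrid),
        AncestorLevel = 1
    }
};

deleteButton.SetBinding(Button.CommandProperty, binding);
[/csharp]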

Another great feature is the ability to bind within style setters. Oftentimes there is a requirement to change the visual properties of controls, but binding a style setter in Silverlight hasn't been possible without a heaping portion of hacky magic.

Finally, implicit data templates will make your annoying-converters-library much smaller. Oftentimes when binding to a collection of dissimilar items, you are forced to use converters to get your data templates to render the intended layout. Of course, the more dissimilar your bound collection is, the more your library of one-off converters will grow. With implicit data templates, you have the option to bind to a list of different types and let Silverlight dynamically determine which data template to use. This way your presentation logic can stay in xaml, and not in converter code.

Windows Integration

Another selling point for WPF is its ability to interact with the Windows environment from within your application, such as calling unmanaged libraries and Win32 APIs. A typical scenario is the process of exporting data to Excel. In Silverlight right now, this workflow consists of the following steps:

  1. Prepare the xlsx filestream
  2. Offer the user a SaveFileDialog to persist it to the local file system.
  3. The user types out an entire filename (since there is no way to specify a default in SL4)
  4. The user manually opens the file, either from Explorer/the desktop or through Excel.

With Silverlight 5, you can skip the last three steps and open the file directly in Excel automatically! This is a huge time saver when your users constantly want to view their data in Excel. And of course, there are so many more possibilities; in the Firestarter keynote, they demonstrated a Silverlight app that connected to a Windows program to automatically import data off of a USB device. Rich OS integration used to be a major selling point for WPF, but with the advent of SL5, that advantage has evaporated.
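
For a rough idea of what "directly into Excel" might look like, here is a hedged sketch based on the AutomationFactory API available to elevated-trust Silverlight applications; the worksheet contents are made up for illustration:

[csharp]
// Requires an elevated-trust application and a reference to Microsoft.CSharp
// for the dynamic keyword; namespace: System.Runtime.InteropServices.Automation.
if (AutomationFactory.IsAvailable)
{
    dynamic excel = AutomationFactory.CreateObject("Excel.Application");
    excel.Visible = true;
    excel.Workbooks.Add();

    dynamic sheet = excel.ActiveSheet;

    // Push a couple of illustrative values straight into cells.
    dynamic cell = sheet.Cells[1, 1];
    cell.Value = "Employee";
    cell = sheet.Cells[2, 1];
    cell.Value = "Sam";
}
[/csharp]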

So far I’ve only touched on features that serve to even the playing field between Silverlight and WPF, but what puts SL5 over the top are the exclusive technologies and support that are actively being built around it:

  • WCF RIA Services – As I mentioned previously, there is no support in WPF for this great technology, and it isn’t coming anytime soon (if ever)
  • Fluid UI – Silverlight continues to build on the ability to easily create more natural applications. We see the beginnings of this with SL4's ListBox support for Fluid UI, where you can effortlessly create transitions when adding and removing items. SL5 goes deeper and adds LoadTransitions. From the Firestarter event, it looks like the capabilities of the SL Toolkit's TransitioningContentControl have been integrated into the VSM and animation system.
  • SL Toolkit – The state of WPF's toolkit is a sad, sad thing. With no release in the last 10 months (not even for the .NET 4 release), it looks to be nearly abandoned. Comparatively, there has been a .NET 4 release of the Silverlight Toolkit, and it is much more feature-complete and stable than its WPF counterpart.
  • Theming – This goes hand in hand with the previous bullet point, but there are a whole bunch of great themes continuing to be pumped out of Redmond. The WPF community has had to rely on rogue developers gracious enough to port the themes over.

Summary

There are many other improvements scheduled for Silverlight 5 that can help with LOB applications (Out of Browser, testing tools, text, printing), but I'll let the big dawgs like Scott Gu cover those details. For now, I think it's safe to say that WPF is dead. But don't fret; this just means that all of its most advantageous features are being reincarnated into future versions of Silverlight.

After last month's scare that Microsoft might abandon Silverlight, I think it is safe to say that the speculation could not be further from the truth; Silverlight is here to stay. It continues to get faster, leaner, stronger, and there is no better technology in the present or foreseeable future that can be used to develop amazing line of business applications. With Silverlight 5's release next summer and the beta still a few months out, there is going to be a swarm of developers clamoring to get their paws on these features (myself included). Until then, happy coding :)

WPF 4 vs. Silverlight 4: Which Do You Choose?

For the past year, I have led an initiative at my company to use Silverlight 4 and WCF RIA Services on the majority of our user interface projects. While these projects have been largely successful, we began running into serious performance problems when trying to squeeze large amounts of data onto our views. The problem wasn’t fetching the data, but rather scrolling and viewing the data in our DataGrids.

One of the largest optimizations we made was to set the windowless parameter back to false, but the root cause pointed to an overload of the Silverlight visual tree. Simply put, there's only so much you can show on the screen at once, even with virtualization turned on. With a giant Excel-like editable datagrid that sprawls across the screen, there's no getting around visual tree overload (especially when scrolling). We evaluated every commercial SL datagrid on the market, and chose the default SDK DataGrid from MS because it fared really well in our scenario.

In the end, we performed many optimizations to get the product to "acceptable" performance, but this motivated me to begin researching WPF as an alternative. I did a massive comparison of all the different WPF/Silverlight datagrids, and one theme remained the same: WPF has much more visual rendering power than Silverlight. These findings made me dead-set on seeing whether WPF could be the platform of choice for major apps going forward, but it turns out that the performance honeymoon was short-lived.

As I began to build a prototype in WPF, giant glaring gaps quickly began to emerge. The first was that WPF doesn't support RIA Services. This is a huge negative, and unless I find a hack to somehow get client support for WPF, it will force us back to Silverlight. The second big one was the validation story. Notice those "free", beautifully animated popout error messages in the DataGrid and DataForm controls in Silverlight? These are nowhere to be found in WPF. I have yet to find anyone who has replicated them in WPF, and I haven't had time to try myself. Also, WPF does not have INotifyDataErrorInfo, so any async validation is going to be far less elegant.
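
To make the async validation point concrete, here is a minimal sketch of the INotifyDataErrorInfo pattern that Silverlight supports; the view model and its SetErrors helper are hypothetical:

[csharp]
using System;
using System.Collections;
using System.Collections.Generic;
using System.ComponentModel;

// Hypothetical view model: the UI re-queries GetErrors whenever ErrorsChanged fires,
// so errors that arrive from an async service call simply get pushed in via SetErrors.
public class CustomerViewModel : INotifyDataErrorInfo
{
    private readonly Dictionary<string, List<string>> errors =
        new Dictionary<string, List<string>>();

    public event EventHandler<DataErrorsChangedEventArgs> ErrorsChanged;

    public bool HasErrors
    {
        get { return errors.Count > 0; }
    }

    public IEnumerable GetErrors(string propertyName)
    {
        List<string> propertyErrors;
        return errors.TryGetValue(propertyName ?? string.Empty, out propertyErrors)
            ? propertyErrors
            : null;
    }

    // Imagine this being called from the completed callback of an async validation call.
    public void SetErrors(string propertyName, List<string> propertyErrors)
    {
        if (propertyErrors == null || propertyErrors.Count == 0)
            errors.Remove(propertyName);
        else
            errors[propertyName] = propertyErrors;

        var handler = ErrorsChanged;
        if (handler != null)
            handler(this, new DataErrorsChangedEventArgs(propertyName));
    }
}
[/csharp]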

And that isn't the end. There are other, smaller issues that deter me. For instance, WPF does not have Fluid UI like SL4, so there is no clean approach to adding natural animations to your data collections. I was hopeful that Blend's FluidMoveBehavior would fill the gap, but I haven't been able to get it working even in the most simple scenarios. Also, the WPF Toolkit is in a sad state right now, with no updates in the last nine months (aside from the Ribbon control). Silverlight is definitely getting more attention in this arena.

I really want to harness the power of WPF, but at this point it feels like Silverlight makes for a much easier experience when developing LOB applications. With that said, I am still going to forge ahead and give it my best shot at making WPF work. But if I had to make a recommendation now, I would say that developers in similar circumstances should go for Silverlight first, while always keeping a hawk's eye on performance. If your application isn't doing a ton of CRUD on a remote data source, and thus validation and RIA Services aren't necessary, then WPF becomes an easier sell.

In the event that I do make some breakthroughs with WPF, I’ll be sure to update this post. Stay tuned…

Using Google Closure Templates with ASP.NET MVC in Visual Studio 2010

Client-side templates have become a vital component of AJAX-driven websites. The web is undoubtedly trending towards sites that load content dynamically after the page loads, rather than all at once in the initial page request. This pattern allows web pages to become lighter and more responsive, which translates to a better experience for the user. However, the conventional server-side templating and databinding techniques that web developers typically use aren't as effective anymore. That is why so many JavaScript template solutions have popped up in recent years.

Google's Closure Templates is the new kid on the block, and the subject of this article. One might wonder: why do we need yet another JavaScript templating solution? The main advantage that sets Closure Templates apart from the other libraries is the included compiler. Other templating solutions either parse a string of special template syntax, or traverse actual HTML elements with special markup or extra classes. When making light use of client-side templates, these solutions can work very well. But as you start to build applications with extremely large datasets and complex templating, performance starts to become an issue. These scenarios are where Closure Templates begin to shine. Instead of directly using the template that you write, you run it through the compiler to output JavaScript functions that you can use in your code. This process is advantageous on two levels: the first is that the JavaScript functions output by the compiler are optimized and extremely fast. Secondly, the compiler can also create server-side-compatible templates, so you can write your templates once and use them either in JavaScript or in server-side code. Unfortunately, the current version of the compiler can only generate Java code, and there is no option for .NET languages such as C#. This is a bummer for .NET developers, but even without C# templates, there is still great value in lightning-fast, compiled JavaScript templates.

Creating your first Closure Template in Visual Studio 2010

Let’s start with a new ASP.NET MVC Project:

vs2010-new-mvc-project

You can create a Web Application project if you are more comfortable with that type of project structure. Because we will be dealing with just JavaScript, it shouldn’t matter.

After creating a project, download the compiler and JavaScript utility library. Extract the zip file, and copy both SoyToJsSrcCompiler.jar and soyutils.js to your templates directory:

soy-compiler

You will see in the screenshot above that I have changed the Build Action of SoyToJsSrcCompiler.jar to “None”, and Copy to Output Directory is set to “Do not copy”. This ensures that the compiler is not compiled into the dll, and the file is not copied to the output folder. The latter is especially useful when using Visual Studio’s publish feature, because the compiler is not necessary when deploying your website.

Now that we have the required files in place, let's go ahead and create a soy template file, example.soy. This is the file that will contain one or more templates using Closure's template syntax. After creating a template in this file, we will then use the compiler to generate a JavaScript representation of the template that we will reference in our HTML page. Every soy file should have the following three components, in order:

  • A namespace declaration.
  • One or more template definitions.
  • A newline at the end of the file.

Go ahead and enter the following example template in your example.soy file:

{namespace closure.examples}

/**
* Says hello to a person.
* @param name The name of the person to say hello to.
*/
{template .helloName}
Hello {$name}!
{/template}

Make sure that your soy file is encoded as ANSI, rather than UTF-8. Even though Google says UTF-8 should be supported, in Windows 7 x64 (and maybe other Windows operating systems and versions) this is currently not the case. And if you create a file within Visual Studio, it will encode the file as UTF-8 and you will get the following exception when compiling a template:

Exception in thread "main" com.google.template.soy.base.SoySyntaxException: In file example.soy: Tag 'namespace' not at start of line.

closure-compiler-encoding-exception

If you are unsure that your file is saved with the proper encoding, just open it up in Notepad and select File > Save As to see or change the encoding.
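
If you'd rather not round-trip through Notepad, a tiny throwaway C# snippet (my own convenience helper, not part of the original workflow) can re-save the file as ANSI:

[csharp]
using System.IO;
using System.Text;

class SoyEncodingFixer
{
    static void Main(string[] args)
    {
        string path = args[0]; // e.g. "example.soy"

        // ReadAllText detects and strips the UTF-8 byte-order mark;
        // Encoding.Default writes the file back out using the system ANSI code page.
        string contents = File.ReadAllText(path);
        File.WriteAllText(path, contents, Encoding.Default);
    }
}
[/csharp]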

After we've included the necessary files in our project, it's time to make compiling templates a little more user-friendly. The jar file is pretty annoying to call from the command line, manually changing the parameters for each of your different soy files. Visual Studio has a perfect solution for this, and it's called External Tools. This feature allows you to set up a VS menu item and seamlessly run commands from the IDE with just the press of a button. To do this, first click on Tools > External Tools from the menu:

external-tools

From here, you can create a new menu item with the following parameters:
Title: Compile Closure Template
Command: C:\Program Files (x86)\Java\jre6\bin\java.exe
Arguments: -jar "$(ItemDir)SoyToJsSrcCompiler.jar" --outputPathFormat $(ItemFileName).js $(ItemFileName)$(ItemExt)
Initial Directory: $(ItemDir)

(Thanks to Tj Stewart for this tip)

Note that your Command entry may differ, depending on where your Java installation resides on your computer. Here is what it looks like:

closure-template-tool

Once you have this set up, compiling your soy templates is as simple as selecting the soy file in Solution Explorer and pressing the "Compile Closure Template" button in the Tools menu:

invoke-closure-template-tool

When you attempt to compile a template, make sure to have your Output window open. If any exceptions occur during the compilation, this is where they will be displayed. If the compilation was successful, you should now have a new example.js file in the Templates directory. Note that it won't be added to the project automatically, so you'll have to add it yourself, but you only need to do this once; subsequent compiles of the same soy file simply overwrite the existing example.js.

Now let’s create a page to utilize the new template, and call it example.html. We could create an MVC view, but it really isn’t necessary since we are only dealing with JavaScript:

[sourcecode language="xml"]
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title></title>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>
<script type="text/javascript" src="Scripts/Templates/soyutils.js"></script>
<script type="text/javascript" src="Scripts/Templates/example.js"></script>
<script type="text/javascript">
$(function () {
$('#wrapper').html(closure.examples.helloName({ name: 'Sam' }));
});
</script>
</head>
<body>
<div id="wrapper"></div>
</body>
</html>

[/sourcecode]

I'm using a little jQuery to make my life easier here; notice that I am passing a JSON object whose properties are the Closure template's parameters. Your result should look something like this:

closure-result

In summary, Google Closure Templates offer a scalable solution to client-side templating: an understandable syntax in the form of soy files, and a compiler that turns those templates into fast and efficient JavaScript functions you can use in your pages. Even though there is no current support for reusing Closure templates in C#, there is still value in utilizing this solution in a Visual Studio project, especially when the External Tools feature makes compilation so easy and convenient.

I hope this article serves as a starting point to getting up and running in Visual Studio 2010. In future articles, I will be delving into more complex examples and explanations of Closure Templates in real-world scenarios.

Working with Projections and DTOs in WCF Data Services

For those of you who haven’t been following “Project Astoria”, you’ve been missing out on some pretty exciting technology. WCF Data Services (formerly known as ADO.NET Data Services) is a stack on top of WCF that enables the creation and consumption of REST-based data services for the web. There are several intriguing features in the upcoming release that coincides with .NET Framework 4, including:

  • Projections: Since CTP2, the Data Services URI API supports the ability to work with a subset of properties on an Entity.
  • Data Binding: The client library now supports two-way data binding for applications built with Silverlight or WPF technologies.
  • Row Count: You can now retrieve the total number of entities in a set, without having to fetch all of the entities within that set.

The ability to create projections and shape your data directly on the URL can be quite useful. Oftentimes, only a few properties of an entity are necessary, and WCF Data Services makes it easy to achieve an efficient query that returns precisely the data you need. For an example, take the following Entity Framework data model:

Example Entity Model

Let's say you only wanted to return a list of Employee names, along with the City they live in. If exposed via a WCF Data Service, you could get exactly this data by using the $select parameter in your querystring:

http://domain/data.svc/Employees?$select=Id,FirstName,LastName,Address/Id,Address/City&$expand=Address

This request will only return the Employee's Id, FirstName, and LastName, and the Address's Id and City. The $expand query parameter with the Address value tells the service to eager load the Address object, which is necessary to be able to return the projected properties that we need from it. Any developer who has been creating AJAX-intensive websites will tell you that the ability to achieve this granularity without any extra work is extremely useful.
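
As a rough illustration (and an assumption on my part, not something shown in the original service), the same shape could be requested from a .NET 4 client with the Data Services client library, assuming the generated context class is called EmployeeEntities and that the client's LINQ provider translates the anonymous-type projection into the $select/$expand URI above:

[csharp]
// Hypothetical client-side sketch; EmployeeEntities is the service reference's
// generated DataServiceContext.
var context = new EmployeeEntities(new Uri("http://domain/data.svc"));

var employees = (from e in context.Employees
                 select new
                 {
                     e.Id,
                     e.FirstName,
                     e.LastName,
                     AddressId = e.Address.Id,
                     City = e.Address.City
                 }).ToList();
[/csharp]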

More Complex Scenarios

The projections and expansion features are extremely useful, but at some point you will undoubtedly run into a scenario that isn't supported via the querystring API. WCF Data Services allows you to expose custom operations on your service to facilitate these scenarios. Using the example data model above, let's say we want the Employee's Id, Name, and the count of OptionsApprovals from the StockOptionApproval relationship. Since there is currently no way to project the count of a child relationship in the querystring API (don't misunderstand the $count parameter as I first did; it will not help here), you would need to expose a custom service operation. Intuitively, you might code something like this:
[csharp]
[WebGet]
public IEnumerable<EmployeeOptionApprovalCountDTO> EmployeesWithOptionApprovalCount()
{
var employees = CurrentDataSource.Employees
.Select(x => new EmployeeOptionApprovalCountDTO {
Id = x.Id,
Name = x.Name,
OptionsApprovalCount = x.OptionsApprovals.Count()
})
.ToList();

return employees;
}
[/csharp]

Unfortunately, this will not work, at least not when your data service is based on an Entity Framework ObjectContext:
[csharp]
public class EmployeeService : DataService<EntityContext>
{
public static void InitializeService(DataServiceConfiguration config)
{
config.UseVerboseErrors = true;
// more config options here…
}
}
[/csharp]

When the type your DataService wraps is an Entity Framework ObjectContext, WCF Data Services will use the ObjectContextDataProvider. A current limitation is that you can only return entities from the custom service operations that you expose. In other words, if you try to return a data transfer object (a regular CLR class), your service will fault with:

'Unable to load metadata for return type 'System.Collections.Generic.IEnumerable`1[EmployeeOptionApprovalCountDTO]' of method 'System.Collections.Generic.IEnumerable`1[EmployeeOptionApprovalCountDTO] GetEntityWithCount(Int32)'

Disappointingly, you can’t even use the config’s RegisterKnownType method during InitializeService, because it isn’t used when ObjectContextDataProvider is chosen as the Provider for your service. So what other options do you have for this seemingly straightforward use case?

The next thing I tried was to create an Entity in the model that isn’t mapped to a table in the database. This was a dead end, because when using the unmapped entity in the entity data model as the DTO to return from the service, I received this exception:

The server encountered an error processing the request. The exception message is 'Service operation 'UnmappedEntityDTO' produces instances of type 'UnmappedEntityDTO', but there are no visible entity sets for that type. The service operation should be hidden or a resource set for type 'UnmappedEntityDTO' should be made visible.'.

So what are we left with to try? Fortunately, the Entity Framework has the concept of complex types, which are technically meant to be used as properties of entities. However, the ObjectContextDataProvider will allow you to return a complex type defined in the entity data model from your service operation. You can create them in the Model Browser:

example-ef-complex-type

As if jumping through all those hoops weren't enough, there is one more gotcha to consider. When building a LINQ to Entities query, you cannot construct a complex type directly in the query! You will receive the following exception:

The entity or complex type 'ComplexTypeAsDTO' cannot be constructed in a LINQ to Entities query.

So instead, you must first project into an anonymous type, invoke ToList() to ensure your query gets executed, and then finally transform the anonymous type into your complex type so that your service is able to return your objects:
[csharp]
return CurrentDataSource.Employees
.Select(x => new{
Id = x.Id,
Name = x.Name,
OptionsApprovalCount = x.OptionsApprovals.Count
})
.ToList()
.Select(x => new EmployeeWithOptionsApprovalCount {
Id = x.Id,
Name = x.Name,
OptionsApprovalCount = x.OptionsApprovalCount
});
[/csharp]

Summary

WCF Data Services enables rapid development and the ability to easily expose your entity model. And when combined with the Entity Framework and its new POCO support, you don't have to sacrifice your n-tier architecture and proper separation of concerns. However, there are still some counterintuitive practices that must be used in order to handle seemingly basic scenarios. Hopefully, by the time the final product is delivered, there will be better support for these situations. But for now, I'm happy that I found a decent workaround!

WordPress: Blogging Platform of Choice

Back in 2008, I started this blog with BlogEngine.net as my platform of choice. Despite trying my hardest to stay in the world of .NET for my open source software needs, I succumbed to the allure of the almighty WordPress. Yes yes, I know I made every excuse in the world to justify my decision, but at some point I had to face the facts: WordPress is hands down the best blogging platform available.

So I’m not going to talk about all the great features out of the box, or how awesome the themes and plugins are, or how easy it is to upgrade to the latest version, or how php in general has the open source software realm on lock down. I’m just simply going to keep my head down, take my licks, and try to write my next blog without getting distracted by how awesome WordPress really is ;)

DynamicLoader Plugin – Dynamically Loading ASP.NET User Controls with jQuery

Live Demo | Download Sample Solution (52.77 kb)

ASP.NET User Controls are pretty useful. They allow functional modules of code and markup to be encapsulated in such a way that reuse is convenient and easy, without sacrificing the power or integration of the ASP.NET model. As we move into an era of AJAX-driven websites, this modularity is still very important. Can the user controls that we all know and (mostly) love still help with this encapsulation, despite being engineered before AJAX techniques emerged? I think they can. But at this point in the ASP.NET timeline, user controls are in need of some help.

The Fundamental Problem

With AJAX, more and more content is being dynamically loaded by the client on demand, rather than being included in the original HTTP response. This fundamental change conflicts with the user control's usage model of being attached to the control hierarchy during the page lifecycle on the server, either through markup or via the Page.LoadControl method in code. For user controls to be useful in the world of AJAX and demand loading, we need to find a way to load them outside of the normal page lifecycle, and use JavaScript to get the rendered HTML and inject it into our page. Luckily, this isn't too difficult to accomplish.

The following example illustrates a basic scenario in which we have a page that uses jQuery to load a user control when a button is clicked. The calling page is pretty simple:

jquery-load-user-control

As you can see, all I’ve done in jQuery’s ready event handler is wire up the click event of the button to make an ajax call to a web service. The data result that is returned from the ajax call is then added into the content div on the page. Let’s take a look at the web service that we are calling in that code:

renderuc-svc-operation
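
For reference, here is an approximation of that service operation; the attributes are the standard ones for an AJAX-enabled WCF service, and the class and operation names are my own stand-ins:

[csharp]
using System.ServiceModel;
using System.ServiceModel.Activation;
using System.ServiceModel.Web;

[ServiceContract(Namespace = "")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class RenderUserControlService
{
    // Returns the rendered HTML for the user control at the given virtual path.
    [OperationContract]
    [WebGet]
    public string RenderUserControl(string path)
    {
        return UserControlUtility.RenderAsString(path);
    }
}
[/csharp]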

This is a pretty standard AJAX-enabled WCF service. It uses a utility class called UserControlUtility, calling its RenderAsString method, which looks like this:

renderuc-utility

In the helper method above, I’m simply accepting a parameter called path, which allows us to use the LoadControl method in the usual way. If you are worried about the potential baggage of instantiating a Page object for every User Control that is rendered, don’t lose too much sleep over it. A page object that is instantiated like this is pretty lightweight, and doesn’t go through the heavy ASP.NET Page lifecycle that occurs on a normal page load.
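
Here is an approximation of that helper (reconstructed rather than copied verbatim): it loads the control onto a throwaway Page and captures the rendered markup with Server.Execute.

[csharp]
using System.IO;
using System.Web;
using System.Web.UI;

public static class UserControlUtility
{
    public static string RenderAsString(string path)
    {
        // Host the user control on a blank Page instance.
        var page = new Page();
        Control control = page.LoadControl(path);
        page.Controls.Add(control);

        // Execute the page and capture its output instead of writing it to the response.
        var output = new StringWriter();
        HttpContext.Current.Server.Execute(page, output, false);
        return output.ToString();
    }
}
[/csharp]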

This is pretty nifty for simple scenarios, but big challenges arise when the application gets more complicated. What happens when the user control has JavaScript of its own? Well, ordinarily you would have a few options. One option that I defaulted to when starting out with jQuery was to write all the JavaScript in the calling page, and just apply it to the user control's HTML once it has been loaded. This is not the best solution, because you lose the encapsulation that we were trying to maintain with user controls in the first place. The second solution is to include the JavaScript inside the user control, within another jQuery ready handler. This works out much better, because the client functionality gets bundled with the markup for clean encapsulation. Additionally, the included JavaScript will be executed when the control is rendered on the parent page, thanks to jQuery. But has this solved all of our problems? Not quite.

Mo Javascript, Mo Problems

To illustrate how problems can arise with that last solution, let me give an example. Say you are developing a real-time stock-screening application. In this application, you have a user control called StockItemRow.ascx that has quite a bit of JavaScript associated with it, along with a few nested user controls of its own (with their own JavaScript, of course). You also have a page called Screener.aspx that periodically polls a web service for matching stocks, and adds those stocks to the grid via a rendered instance of StockItemRow.ascx. What would happen if you dynamically added 50 or 60 rows over a few minutes? You may see what I am trying to get at here.

The problem is that the JavaScript is being loaded over and over on each successful new request for data, simply because it is bundled inside the rendered user control. As you load more and more data onto the page, this becomes a bigger and bigger waste. Plus, unless you write your JavaScript very carefully, each new dynamically loaded user control could end up applying its JavaScript to other user controls that have already been loaded. Yuck! In order to solve these problems, it is going to take a little more work.

The first issue we need to solve is the repetitious loading of unnecessary JavaScript. To do this, we need to separate it out from the user control into its own js file. Some may argue that we are losing encapsulation here, but I disagree. I think that just as an aspx page can have both a markup file and a codebehind file, a user control can have both a markup file and a js file (and its codebehind file, for that matter). After we have separated it out, we are free to load the JavaScript file once, while still rendering the user control multiple times.

But just separating the JavaScript out doesn't solve our problems. We need to somehow "register" a single instance of the JavaScript on the page, and have any dynamically loaded user controls use just that instance. Additionally, we need to make sure that the JavaScript is capable of being applied to individual user controls, without affecting other user controls that have already been wired up and loaded on the page.


Enter jQuery.DynamicLoader

jQuery.DynamicLoader is a simple jQuery plugin I wrote that allows a parent page to dynamically load user controls and their corresponding script files on demand. Here is the way it works:

  • Reference jQuery.DynamicLoader on your parent page.
  • Create an AJAX service that renders user controls, similar to the example I showed earlier.
  • Anytime you want to load a user control on that page, call $.dynamicLoader.loadUC() with the appropriate options. This will fetch the rendered user control and its corresponding JavaScript file. If the JavaScript is being loaded for the first time, DynamicLoader will register that instance as the singleton for all subsequent user controls of that same type.
  • The JavaScript instance is then invoked with the rendered user control as its UI context.

Let’s jump into the sample project I’ve created as an example:

DynamicLoader (52.77 kb)

The project contains a single page, Default.aspx, and two user controls, TableWidget.ascx and CellWidget.ascx. The purpose of the project is to demonstrate a page that initially has no content, and how we can dynamically load several tiers of user controls, each with their own scripts. We start from a single button on Default.aspx that will dynamically load a new TableWidget every time it is clicked. Inside each TableWidget is a button that gets wired up to load its own user controls, this time CellWidgets. Each CellWidget has its own JavaScript that needs to execute as well.

Here is how the first button is wired up with jQuery:

invoke-dynamic-loader

As you can see, it is calling DynamicLoader's loadUC function, which takes a few options: ucName is the path to the user control to be loaded, queryString allows you to pass parameters to your user control to help render it on the server, and eventBindings allows you to handle events that are fired within the user control.

As I mentioned earlier, the JavaScript in your user control needs to be registered before it can be used. Don't get scared off now; it's only two extra lines of code:

dynamic-loader-compatible-script

We have a standard jQuery ready handler, and inside that we call DynamicLoader's registerUC function. This script will only be loaded once, even if multiple TableWidgets are loaded afterwards. Also notice the event triggers. You can create as many different types of events as your heart desires, as long as the parent knows the name of the event (and references it in the eventBindings option). I've included ready, busy, unbusy, and finished in the default options. The ready event is one that I consider critical, because it is the event that the parent uses to attach the user control to the page.

Here is a screenshot of the demo:

dynamic-loader-demo

Live Demo | DynamicLoader.zip (52.77 kb)

You can see that there are buttons on the CellWidget that do some trivial JavaScript actions, and also a button that demonstrates an event being monitored by the parent user control.

Room for Improvement

DynamicLoader is more of a proof of concept than a full-fledged plugin, and there are several areas in which it needs to be improved:

  • The event chaining needs some work. I haven't really tested it with events that bubble more than two layers up.
  • Right now it doesn't look like jQuery's $.getScript is caching the scripts. I'd like to rewrite a version of getScript that does.
  • The registration system is very rigid at this point. It expects you to pass in a user control's path, and the script needs to register itself with that exact path as its key (without the extension).

So there you have it. This technique allows you to treat your user controls as neatly encapsulated modules that are loaded and configured on demand. Plus, there is no limit to nesting your user controls, and they will load efficiently and within their own context. Finally, you don't have to break communication with your user controls. The event binding allows a separation of concerns, while still being able to act on important things that happen within the user control.

I hope you find this technique useful, and please let me know if you have suggestions or improvements!

Client side templates using ASP.NET, JQuery, Chain.js, and TaffyDB

For those that know me, it goes without saying that I've fallen head over heels for jQuery. It just makes working with JavaScript much easier and cleaner than ever before, and opens the door to so many new possibilities that were just too cumbersome even a few years ago. Of course, with new possibilities always come new challenges. One of those important challenges is client-side templating.

For me, there was an evolutionary process before getting to the point where I realized client-side templating was important. It all started out in the olden days of using ASP.NET Web Controls. With Web Controls, the idea was pretty simple: you bind your control to a datasource, and any events you want to take action on in that control require a postback to the server. These postbacks were hugely interruptive, especially when performing lots of different manipulations on a single page. So when "Atlas" and the UpdatePanel came along, it looked to be exactly what we needed. We could build pages the exact same way we always did, slap an UpdatePanel around the whole thing, and magically all our problems would go away. Well, as we've all found out by now, it's not that simple. The heaviness and waste of an entire page lifecycle on each async postback, combined with the limitations of the postback model itself, made it useful only in certain situations (like removing the flicker from an existing website with heavy postbacks).

When I first started using jQuery, I quickly realized that I could circumvent the whole postback model in favor of AJAX and REST services, but I still wasn't ready to give up the WebControls like ListView and GridView that I'd used for so long. That was when I had the idea of calling a service to render a UserControl on the server, and pass back the HTML to insert onto my page. After googling "render user control service", I quickly found out that I wasn't the only one thinking of this idea, and off I went to get it working.

After using it in a few scenarios, the drawbacks of this approach started to become more apparent. The first drawback was getting access to the actual data. Sure, you have the rendered view, but what if you want to do more with a particular record in that view, like show it on a Google map? Do you try to extract what you need from the rendered HTML, or do you make a separate AJAX call to get just the data? Another challenging drawback was control state, like scroll location or pagination. For example, if you render a UserControl with a ListView and PagerControl on it, you get paging buttons that are absolutely useless. I got around this by using jQuery to intercept the click events of those page buttons to call the rendering service, but these issues had me feeling like I was hacking the solution just to get the control and accessibility I needed. Finally, there is the bloat factor. Yes, this solution is much, much lighter than the UpdatePanel + postback model, but not nearly as light as just passing the data down as JSON and rendering it on the client.

Which brings me to the second-to-last step in the journey to client templates: rendering the data yourself with jQuery. Rick Strahl has a good blog post on this technique, but I'll give an example of my own. You simply start with a block of HTML on your page that will serve as your template, like the following markup:

client-template

After you pull your JSON data from the web service, you clone the html for each record and inject the appropriate data, adding event handlers as you go:

manual-databinding

This approach has a lot of things going for it — bandwidth efficiency, lightweight processing, and templates that are understandable. However, once you’ve coded a couple of these scenarios, two things become obvious. The first is that you are writing a hell of a lot of repetitive code just to match the correct element in the template with its value. The second thing is that you have to write lots more code if you want to do any kind of synchronization between two views that share the same data. How can we solve these two problems?

Enter Chain.js

Chain.js is a jQuery plugin that aims to solve the templating and data synchronization shortcomings that I mentioned above. My favorite demo synchronizes two lists together, which really shows the simplicity and power of Chain.js in both templating and synchronizing two views dynamically. You start with the following markup:

chain-template

We have two lists here, each with a template called “item”. The aim of the demo is to populate the “persons” list with a dataset, and then link the “filtered” list to show the items that have been filtered out of the first list. Here is the code that makes it all happen:

chain-databinding

In the above code, the items plugin is initialized on the persons list with some data, and then .chain() is called to automatically bind the data to the template inside of the persons list. The default Chain.js data binder looks for class names that correspond to the properties on the JSON element. Chain.js automatically looks for the first item inside of the parent element to use as the template, but you can configure precisely where your template is with the anchor option. Next, a handler is set up on the input's keyup event so that we can use the filter function to filter the list of items on every keystroke. Finally, we take our second list and link it to the 'hidden' collection on our first list, and call chain to initiate the data binding.

You can see the final result on the Chain.js Demo Page.

Chain.js is a great concept and great code, and it works very well in these demos. But it is still a project in its infancy, and that is apparent when you try to scale up the amount of data you are binding to and the number of views that are linked to the data. Performance grinds to a halt, even on the fastest machines.

Solving the current limitations of Chain.js

When analyzing the source code for Chain.js, I realized I could do something fairly simple right off the bat to speed up data-binding: use innerHTML instead of jQuery’s clone() method to create each templated item. The internals of the binding function “$update” in Chain.js look something like this (as of version 0.1):

chain-mod1

Here is the modified code that uses innerHTML to create all the items ahead of time (without jQuery’s clone), and the loop picks each item out of the list for data binding:

chain-mod2-innerhtml

Just this change alone made data binding about twice as fast in my testing. But with large datasets, that wasn't enough of an increase. So I dug some more and identified another issue: when synchronizing a series of views, the collections that you subscribe to are filtered with jQuery. Therefore, your master view is always required to create a corresponding DOM element for each data item. This means that if you have a thousand data items, but you only want your views to render and show a subset of those thousand items, then you still have to create a master view, which in turn requires a DOM element for each data item. That is a ton of extra work! So I set out to modify the code so that a master object is created that doesn't have any DOM elements corresponding to the data. Then, all other views are linked to this object. But without the DOM, how are we supposed to use jQuery to filter our collections? The answer is, we don't.

Enter Taffy DB

Taffy DB is a lightweight JavaScript library that acts as a thin data layer on the client. This is exactly what is needed to quickly select the elements required by the subscribing views. By circumventing the creation of DOM elements and jQuery selectors to build the collections, performance increased approximately four-fold beyond the innerHTML modification! Unfortunately, the implementation of Taffy DB required changes in many areas of Chain.js, so I do not have any code samples. But if there is demand, I may either post my modified code, or coordinate with Rizqi Ahmad, the creator of Chain.js, to implement some of these ideas.

Other Client Template Libraries

It is pretty obvious that many developers see client templating as important going forward. This is evident in the number of client template libraries that are cropping up. Just recently, Microsoft released a preview of ASP.NET AJAX 4.0, which includes client-side template rendering. This seems promising, but I haven't had much time to play around with it. There's also jTemplates, PURE, and LightningDOM, although they are more geared towards just client-side templating, rather than trying to tackle synchronization (like Chain.js and ASP.NET AJAX 4.0). If I've missed any other libraries, please let me know!

Microsoft Data Access Components(MDAC) and .Net Framework 3.5 SP1

Recently I upgraded to .NET Framework 3.5 SP1. The service pack introduced a host of bug fixes, along with some pretty cool enhancements.

After installing the service pack, one of my web applications immediately started throwing the following exception:

The .Net Framework Data Providers require Microsoft Data Access Components(MDAC). Please install Microsoft Data Access Components(MDAC) version 2.6 or later.

The code that was throwing the exception tries to open an OleDb connection to read data from an Excel spreadsheet:

string conStr = String.Format("Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0};Extended Properties=Excel 8.0;", filePath);
OleDbConnection objConn = new OleDbConnection(conStr);
objConn.Open();
...

It seems to me that this component may have been included in previous versions of the framework, and has since been removed. Anyway, I downloaded and installed the latest MDAC, and sure enough, it fixed the problem!