I am speaking at the MSEvent – Windows Developer Event to be held at Navy Pier in Chicago, IL. Here is some more information. I hope to see you there! Register here: Windows 8 Developer Event.

Windows 8 Developer Event in Chicago
Date April 26, 2012
Time 9:00AM – 5:00PM

Navy Pier
600 East Grand Avenue

Chicago, IL

FREE Event

Seating is limited,
so register today.


Windows reimagined. Learn everything you need to start building Metro-style apps for Windows today at our free, full-day Windows Developer Event. We’ll show you how to use Visual Studio to code fast, fluid, immersive and beautiful Metro-style applications in HTML5/JavaScript, XAML/C# and C/C++.

Your investments in these languages carry forward, making Windows a no-compromise platform for developers. Whatever language you choose, your app gets deep integration with the Windows shell, including notifications, live tiles, deep links, and contracts with other apps. And now you can build once and support all Windows customers, no matter what type of PC they have—from tablets to laptops to convertibles to desktops. Seating is limited and registration is not guaranteed. Secure your spot today!

Notes

This free event is brought to you by Microsoft. However, you are responsible for booking and funding your own travel and accommodations. Please note that there is limited space available for this event, so be sure to register early.


I will be speaking on April 18th at the Chicago .NET Users Group (CNUG) meeting. Scott Seely will be speaking for the Chicago Azure Cloud Users Group just before me at the same location. Come see us both! My topic and abstract are below.

Get Yourself Kinect-ed! – Greg Levenhagen

Kinect development used to mean hacking without any support, but now that the Kinect SDK, Kinect for Windows hardware and commercial support for non-Xbox 360 applications have been released, the full power of the Kinect is unleashed. Come see how to start developing with the Kinect, using its hardware features and what the Kinect SDK provides.

I will be speaking the weekend of April 14th and 15th in Minneapolis at the Twin Cities Code Camp! The information about my session is below. If you’re interested in attending, please register now using EventBrite. For more event information, please go to the TCCC website. I hope to see you there.

Parallel Programming in .NET and Azure – Greg Levenhagen

Parallel programming remains a difficult task, but the .NET framework keeps making things easier for developers. With the various constructs available, like the addition of the Task Parallel Library in .NET 4, it is important to know what is appropriate for different situations. Devices continue to gain cores and the cloud offers easily distributed computing, so developers should understand how to utilize the environments. Come for a walk-through of how and when to use these constructs, whether that is in the cloud, a mobile device, desktop application or the web.

Concurrent programming aims to execute potentially different pieces of work at the same time and parallel computing aims to reduce a piece of work into multiple pieces to execute concurrently. Parallel computing has been around for decades, but it has remained a difficult problem. It aims to support multi-core, multi-CPU and distributed systems. The continued work for supporting these paradigms is great, because it has always been an issue to keep user interfaces responsive and handle operations quickly. In recent years, consumption of asynchronous services has exploded and parallel operations to some extent. As devices continue to grow and gain a significant amount of cores, expect parallel and asynchronous functionality to become more and more common.

With the release of .NET 4, Microsoft added a new namespace under the System.Threading namespace called System.Threading.Tasks. All of the previous threading abilities are still available, but the new additions provide a different way to work with multi-threaded constructs.

With the evolution of multi-threaded capabilities within the .NET framework, things can get a little confusing. Here is a brief history with some notes.

What are Threads?

When running an application or program, it is executing a process. Multiple processes can execute concurrently, as when an email client and a web browser are used at the same time. A look at what is going on inside of a process shows threads. A thread is to a process much as a process is to the operating system. The major difference is that processes do not share memory with one another, while threads within a process can, in a restricted fashion. Synchronizing access to mutable objects shared between threads is a common pitfall for developers.

Threads in .NET 1.0+

In the System.Threading namespace, the Thread class exists along with other classes to provide fine-grained multi-threading capabilities for the .NET framework. This means thread synchronization support and data access using classes like the following.

  • Mutex – Used for inter-process communication (IPC).
  • Monitor – Provides a mechanism that synchronizes access to objects.
  • Interlocked – Provides atomic operations for variables that are shared by multiple threads.
  • AutoResetEvent – Notifies a waiting thread that an event has occurred.
  • Semaphore – Limits the number of threads that can access a resource or pool of resources concurrently. (Added with .NET 2.0)
  • and many more.
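As a minimal, self-contained sketch of these fine-grained primitives (class and method names here are my own, not from any framework), the following starts two Threads, uses Interlocked for atomic increments, and Joins them:

```csharp
using System;
using System.Threading;

class CounterDemo
{
    private static int counter;

    public static int CountWithThreads()
    {
        // Each worker increments the shared counter atomically via Interlocked,
        // avoiding a lost-update race between the two threads.
        Thread t1 = new Thread(() => { for (int i = 0; i < 1000; i++) Interlocked.Increment(ref counter); });
        Thread t2 = new Thread(() => { for (int i = 0; i < 1000; i++) Interlocked.Increment(ref counter); });

        t1.Start();
        t2.Start();

        // Join blocks the calling thread until each worker finishes.
        t1.Join();
        t2.Join();

        return counter;
    }

    static void Main()
    {
        Console.WriteLine(CountWithThreads()); // 2000
    }
}
```

Replacing Interlocked.Increment with a plain `counter++` would make the result nondeterministic, which is exactly the synchronization pitfall mentioned above.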

The ThreadPool in .NET 1.0+

The System.Threading.ThreadPool class provides a group of system-managed threads to the application and is often a more efficient way to handle multi-threaded programming. This is because it helps the developer avoid creating threads that spend the majority of their time waiting on another thread or sitting in a sleep state. To execute a method on the ThreadPool, call QueueUserWorkItem, specifying the method to execute and an optional parameter for any data that is needed. You can also use an asynchronous delegate with its BeginInvoke and EndInvoke methods. The method specified will begin executing when a Thread in the ThreadPool becomes available.
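A small illustrative sketch of QueueUserWorkItem (the class and method names are mine; the wait event is there because pool threads are background threads, as noted below):

```csharp
using System;
using System.Threading;

class ThreadPoolDemo
{
    public static string RunWorkItem(string input)
    {
        string result = null;
        using (ManualResetEvent done = new ManualResetEvent(false))
        {
            // The delegate runs on a pool thread when one becomes available;
            // the second argument is the optional state object.
            ThreadPool.QueueUserWorkItem(state =>
            {
                result = "Processed: " + state;
                done.Set(); // signal completion
            }, input);

            // Wait so the background pool thread finishes before we return.
            done.WaitOne();
        }
        return result;
    }

    static void Main()
    {
        Console.WriteLine(RunWorkItem("some data")); // Processed: some data
    }
}
```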

Each process is limited to one system-level thread pool. The ThreadPool manages background threads, so if all foreground threads exit, the ThreadPool will not keep the application alive. In this case, finally and using blocks are not guaranteed to run, so a call to Join or Wait, ideally with a timeout, should be used to avoid this.

The default ThreadPool limits are:

  • .NET 2.0 – 25 threads per processor
  • .NET 3.5 – 250 threads per processor
  • .NET 4.0 (32-bit) – 1,023 threads
  • .NET 4.0 (64-bit) – 32,768 threads

The BackgroundWorker in .NET 2.0+

When there is a need to execute some non-UI work, the System.ComponentModel.BackgroundWorker will spawn a new thread and execute the operations. It offers a progress indicator to report back to the calling thread, forwarding of exceptions, and cancellation of the processing. If the situation warrants using multiple BackgroundWorkers, though, consideration should be given to the Task Parallel Library.

The BackgroundWorker class follows the event-based asynchronous pattern (EAP). The EAP abstracts and manages the multi-threading capabilities while allowing for basic interaction via events. When the words Async and Completed are appended to a class's method names, it may be implementing some form of the EAP. Another similar pattern is the asynchronous programming model (APM), which uses Begin and End method pairs. Both the EAP and APM work well with the new .NET 4.0 Task construct that is mentioned later in this post.

Besides directly using the BackgroundWorker implementation, it can also be subclassed. It would involve overriding the OnDoWork method and handling of the RunWorkerCompleted and ProgressChanged events in the consuming class. The subclass provides a better level of abstraction for a single asynchronously executing method.

BackgroundWorker uses the ThreadPool, so it benefits from improvements that have been made with later versions of the .NET framework. Using the ThreadPool also means that calling Abort should not be done. In a case where you want to wait for completion or cancellation of the BackgroundWorker, you may want to consider using the Task Parallel Library.
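Here is a hedged console sketch of the event-based pattern (names are mine; in a real WinForms/WPF app, RunWorkerCompleted would be raised on the UI thread rather than a pool thread, so the wait event below is only needed in a console host):

```csharp
using System;
using System.ComponentModel;
using System.Threading;

class WorkerDemo
{
    public static int RunWorker()
    {
        int result = 0;
        using (var done = new AutoResetEvent(false))
        using (var worker = new BackgroundWorker())
        {
            // DoWork runs on a background thread drawn from the ThreadPool.
            worker.DoWork += (s, e) => e.Result = (int)e.Argument * 2;

            // RunWorkerCompleted receives the result (or a forwarded exception).
            worker.RunWorkerCompleted += (s, e) =>
            {
                result = (int)e.Result;
                done.Set();
            };

            worker.RunWorkerAsync(21); // kick off the work with an argument
            done.WaitOne();
        }
        return result;
    }

    static void Main() { Console.WriteLine(RunWorker()); } // 42
}
```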

The Dispatcher in .NET 3.0+

The System.Windows.Threading.Dispatcher class is actually single-threaded in that it doesn’t spawn a new thread. Operations queued with BeginInvoke are executed on the thread the Dispatcher is associated with, not on the calling thread. The reason for the Dispatcher’s existence boils down to thread affinity: a user interface Control or DependencyObject strictly belongs to its instantiating thread. For example, in the case of Windows Presentation Foundation (WPF) and Silverlight, the Dispatcher class allows a non-UI thread to “update” a TextBox control’s Text property on the UI thread through marshaling.
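As a minimal sketch (assuming a WPF or Silverlight view with a TextBox named myTextBox — a hypothetical control name), marshaling an update from a worker thread looks like this:

```csharp
// Running on a background/worker thread; assigning myTextBox.Text here
// directly would throw, because the control belongs to the UI thread.
myTextBox.Dispatcher.BeginInvoke(new Action(() =>
{
    // This delegate is queued to run on the UI thread that owns myTextBox.
    myTextBox.Text = "Updated from a background thread";
}));
```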

Parallel LINQ (PLINQ) in .NET 4.0+

PLINQ is a parallel implementation of the Language-Integrated Query (LINQ) pattern. Just like LINQ to Objects and LINQ to XML, PLINQ can operate against any IEnumerable or IEnumerable<T>. PLINQ is exposed through the ParallelEnumerable class in the System.Linq namespace, but this implementation of LINQ doesn’t force parallel operations on everything. There are additional methods too, such as:

  • AsParallel – This is how to enable PLINQ. If the rest of the query can be parallelized, it will do so.
  • AsSequential – Will turn a previously parallelized query back into a sequential one.
  • AsOrdered – Preserve ordering until further instructed by something like an orderby clause or AsUnordered.
  • AsUnordered – No longer preserve ordering of the query.
  • ForAll – Allows for processing in parallel instead of requiring a merge back to the consuming thread.
  • Aggregate – Provides intermediate and final aggregation of results.
  • and a few more.

The AsParallel method is very straightforward to try, as the call is made directly on the data source within a LINQ query or foreach loop.

PLINQ does not guarantee that the query will be executed in parallel. It checks whether it is safe to parallelize and whether doing so is likely to provide an improvement; if those conditions are not satisfied, it will execute the query sequentially. Using the optional WithExecutionMode method with ParallelExecutionMode.ForceParallelism makes PLINQ parallelize regardless.
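A minimal PLINQ sketch (the class and method names are mine) showing AsParallel and the forced execution mode:

```csharp
using System;
using System.Linq;

class PlinqDemo
{
    public static int SumOfSquares()
    {
        // AsParallel opts the query into PLINQ; normally the runtime decides
        // whether running in parallel is actually worthwhile.
        return Enumerable.Range(1, 1000)
            .AsParallel()
            .WithExecutionMode(ParallelExecutionMode.ForceParallelism) // force it
            .Select(n => n * n)
            .Sum();
    }

    static void Main() { Console.WriteLine(SumOfSquares()); } // 333833500
}
```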

Exceptions are bundled up together from all the threads and placed into an AggregateException, which you can then iterate through to process each exception or flatten into a single exception. This special type of exception is used in other areas of .NET 4.0 multi-threading too.

Custom partitioning is offered for a way that a developer can specify how the data source should be parallelized. For instance, if the data source contains hundreds of thousands of rows and testing shows that some of the threads are only given a few hundred rows, a partition can be created on the data source accordingly. Custom partitioning is done to the data source before the query and the resulting object replaces the data source within the query.
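A sketch of that idea (names are mine): Partitioner.Create wraps the data source, and the resulting partitioner object replaces the source in the query, just as described above.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;

class PartitionDemo
{
    public static long SumWithPartitioner(int[] data)
    {
        // loadBalance: true hands out small chunks dynamically so no thread
        // is stuck with a disproportionate share of the rows.
        var partitioned = Partitioner.Create(data, true);

        // The partitioner replaces the array as the query's data source.
        return partitioned.AsParallel().Sum(x => (long)x);
    }

    static void Main()
    {
        int[] data = Enumerable.Range(1, 100000).ToArray();
        Console.WriteLine(SumWithPartitioner(data)); // 5000050000
    }
}
```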

The Task Parallel Library (TPL) in .NET 4.0+

The TPL is a collection of constructs in the System.Threading and System.Threading.Tasks namespaces. This post has split PLINQ out above because it resides in a different namespace, but some documentation refers to them together. Some of the same characteristics mentioned in PLINQ apply here too, since PLINQ actually reduces a query into Tasks (defined below).

As mentioned in the opening statement, all of the fine-grained multi-threading constructs are still available, so what is the need for the TPL? The goal is to make parallel programming easier. The TPL uses an algorithm that dynamically updates during execution for the most effective utilization of resources. Under PLINQ, there is a section on custom partitioning, which overrides the built-in partitioning. Collectively, the TPL handles the default partitioning of data, the ThreadPool, cancellations and state.

“The Task Parallel Library is the preferred way to write multi-threaded and parallel code.” – MSDN

The Parallel Class

The Parallel class provides the methods For, Invoke and ForEach to process operations in parallel.

  • For – parallel equivalent of the for keyword
  • Invoke – executes Action delegates in parallel
  • ForEach – parallel equivalent of the foreach keyword
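The three methods above can be sketched briefly (class and method names are my own):

```csharp
using System;
using System.Threading.Tasks;

class ParallelDemo
{
    public static int[] Squares(int count)
    {
        int[] results = new int[count];

        // Parallel.For partitions the index range across available cores;
        // each index is handled exactly once, so writing to results[i] is safe.
        Parallel.For(0, count, i => results[i] = i * i);
        return results;
    }

    static void Main()
    {
        // Parallel.Invoke runs independent Action delegates concurrently.
        Parallel.Invoke(
            () => Console.WriteLine("First action"),
            () => Console.WriteLine("Second action"));

        // Parallel.ForEach is the equivalent for arbitrary sequences.
        Parallel.ForEach(Squares(5), square => Console.WriteLine(square));
    }
}
```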
The Task Class

Tasks offer much of the same functionality as previous solutions like Thread, but also include continuations, cancellation tokens, forwarding and context synchronization. The Parallel class reduces its For, ForEach and Invoke methods into Tasks. A Task is semantically similar to a Thread, but does not require creating an operating system thread, because it is scheduled on the ThreadPool. Also, multiple Tasks may run on the same Thread. That can be confusing at first, but it offers a lot of flexibility.

Where direct use of the ThreadPool starts parallel execution of a method by calling QueueUserWorkItem, the Task class has a Factory property of type TaskFactory. Calling StartNew on the TaskFactory and passing in a lambda expression will queue up the work. By default, Tasks are placed in the ThreadPool. If the option for a long-running operation is specified, the Task will be created on a separate thread. Either way, execution happens on a background thread. The StartNew method returns a Task object, and with that reference, traditional functionality such as waiting is available. Tasks also support parent-child relationships, which can be very useful for wait and continuation operations.
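A brief sketch of both creation paths (names are mine):

```csharp
using System;
using System.Threading.Tasks;

class TaskDemo
{
    public static int ComputeOnTask()
    {
        // StartNew queues the lambda to the ThreadPool and returns a Task<int>.
        Task<int> task = Task.Factory.StartNew(() => 6 * 7);

        // LongRunning hints that this work should get a dedicated thread
        // instead of tying up a pool thread.
        Task longTask = Task.Factory.StartNew(
            () => { /* lengthy work would go here */ },
            TaskCreationOptions.LongRunning);

        longTask.Wait();    // traditional waiting via the returned reference
        return task.Result; // Result blocks until the task has finished
    }

    static void Main() { Console.WriteLine(ComputeOnTask()); } // 42
}
```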


Continuations provide a way to execute code after a Task completes, with the option of using the result from the Task. They offer a very nice fluent syntax that resembles having a Completed method tied to an event. The fluent syntax isn’t required; with a reference to the Task, you can decide later what to continue with. Multiple continuations can be specified to handle error conditions, cancellations and normal completion of a Task. One of the major goals of continuations is to allow waiting on a Thread or Task to complete without blocking.
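The fluent style can be sketched like this (names are my own; TaskContinuationOptions such as OnlyOnFaulted are how the error-path continuations mentioned above are selected):

```csharp
using System;
using System.Threading.Tasks;

class ContinuationDemo
{
    public static string RunPipeline()
    {
        // The continuation is scheduled only after the first task completes;
        // the thread that set this up is never blocked in the meantime.
        Task<string> pipeline = Task.Factory
            .StartNew(() => 21)
            .ContinueWith(t => "Answer: " + (t.Result * 2));

        return pipeline.Result; // block here only to observe the final value
    }

    static void Main() { Console.WriteLine(RunPipeline()); } // Answer: 42
}
```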

Parallel Primitives and Data Structures in .NET 4.0+

Thread Safe Collections

In .NET 1.0, the System.Collections namespace provided some built-in support for thread safety with the Synchronized property. Microsoft states that the implementation is not scalable and is not completely protected from race conditions.

With .NET 2.0, the System.Collections.Generic namespace brought generic collections, but removed any thread safe capabilities. This means the consumer needs to handle all synchronization, but the type safety, improved performance, and scalability are significant.

Bring in .NET 4.0 and the addition of System.Collections.Concurrent. This provides even better performance than the .NET 2.0 collections and provides a more complete implementation of thread safety than .NET 1.0. This namespace includes:

  • BlockingCollection<T>
  • ConcurrentBag<T>
  • ConcurrentDictionary<TKey, TValue>
  • ConcurrentQueue<T>
  • ConcurrentStack<T>
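As a small sketch of why these help (names are mine): ConcurrentDictionary's AddOrUpdate is atomic, so parallel writers need no explicit locking.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ConcurrentDemo
{
    public static int CountLetterA()
    {
        var counts = new ConcurrentDictionary<string, int>();
        string[] words = { "a", "b", "a", "c", "a", "b" };

        // AddOrUpdate inserts 1 for a new key, or atomically applies the
        // update delegate to the current value for an existing key.
        Parallel.ForEach(words, word =>
            counts.AddOrUpdate(word, 1, (key, current) => current + 1));

        return counts["a"];
    }

    static void Main() { Console.WriteLine(CountLetterA()); } // 3
}
```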
Lazy Initialization

Lazy initialization of objects comes into play when those operations are expensive. The application may not require the expensive objects, so using these new constructs can have a significant impact on performance.

  • Lazy<T> – Thread-safe lazy initialization.
  • ThreadLocal<T> – Lazy initialization specific to each thread.
  • LazyInitializer – Alternative to Lazy<T> by using static methods.
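A minimal Lazy<T> sketch (names are mine): the factory delegate runs once, on first access, and Lazy<T> is thread-safe by default.

```csharp
using System;

class LazyDemo
{
    // The delegate does not run at construction time; it runs on the first
    // access to .Value, and only once even with concurrent callers.
    private static readonly Lazy<int[]> ExpensiveData =
        new Lazy<int[]>(() =>
        {
            Console.WriteLine("Initializing...");
            return new int[] { 1, 2, 3 };
        });

    public static int FirstValue()
    {
        return ExpensiveData.Value[0]; // triggers initialization if needed
    }

    static void Main()
    {
        Console.WriteLine(ExpensiveData.IsValueCreated); // False
        Console.WriteLine(FirstValue());
    }
}
```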

The Barrier Class

The Barrier class is interesting because it allows Threads to have checkpoints. Each Barrier represents the end of some block or phase of work. At a checkpoint, a single thread can be designated to do some post-phase work before the others continue. Microsoft recommends using Tasks with implicit joins if the Barriers would only coordinate one or two blocks of work.
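A hedged sketch of those checkpoints (names are mine): two participants signal the Barrier at the end of each phase, and the post-phase action runs on a single thread before anyone proceeds.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class BarrierDemo
{
    public static int RunPhases()
    {
        int phasesCompleted = 0;

        // The second constructor argument is the post-phase action; it runs
        // once per phase, after both participants have signaled.
        using (var barrier = new Barrier(2, b => phasesCompleted++))
        {
            Action work = () =>
            {
                barrier.SignalAndWait(); // checkpoint at end of phase 1
                barrier.SignalAndWait(); // checkpoint at end of phase 2
            };

            Task t1 = Task.Factory.StartNew(work);
            Task t2 = Task.Factory.StartNew(work);
            Task.WaitAll(t1, t2);
        }

        return phasesCompleted;
    }

    static void Main() { Console.WriteLine(RunPhases()); } // 2
}
```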

SpinLock and SpinWait

The SpinLock and SpinWait structs were added because sometimes it’s more efficient to spin than to block. That may seem counter-intuitive, but if the spin is relatively quick, it can produce major benefits in a highly parallelized application by avoiding context switches.
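A brief SpinLock sketch (names are mine), using the standard Enter/Exit pattern around a very short critical section — exactly the case where spinning beats blocking:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class SpinLockDemo
{
    public static int Count()
    {
        var spinLock = new SpinLock();
        int counter = 0;

        Parallel.For(0, 1000, i =>
        {
            bool lockTaken = false;
            try
            {
                // Spins briefly instead of blocking; cheap because the
                // protected section is just one increment.
                spinLock.Enter(ref lockTaken);
                counter++;
            }
            finally
            {
                if (lockTaken) spinLock.Exit();
            }
        });

        return counter;
    }

    static void Main() { Console.WriteLine(Count()); } // 1000
}
```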

Miscellaneous Notes

Deadlock Handling

In the case of a deadlock, SQL Server will pick one of the offending sessions as the victim and terminate it. Nothing like this happens within .NET. A developer must take careful consideration to avoid deadlocks and should use timeouts to help mitigate the situation.

Forcing Processor Affinity

In some cases, running in parallel can be problematic. One way to avoid such complications is to set the processor affinity through the Process class. Call the GetCurrentProcess method and then use the ProcessorAffinity property to get or set the affinity as needed.
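As a small sketch (names are mine): ProcessorAffinity is a bitmask, so (IntPtr)1 means "CPU 0 only".

```csharp
using System;
using System.Diagnostics;

class AffinityDemo
{
    public static long GetAffinityMask()
    {
        // Bit 0 = CPU 0, bit 1 = CPU 1, and so on.
        return (long)Process.GetCurrentProcess().ProcessorAffinity;
    }

    static void Main()
    {
        Console.WriteLine("Current affinity mask: " + GetAffinityMask());

        // Restrict this process to the first processor only.
        Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)1;
    }
}
```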

Debugging Parallel Applications in Visual Studio 2010+

There are two additional debugging windows added with Visual Studio 2010: the Parallel Stacks window and the Parallel Tasks window. The Parallel Stacks window provides a diagram layout based on either Tasks or Threads and lets the developer see the call stack for each construct. The Parallel Tasks window resembles the Threads window, with a grid of all Tasks.

Task Parallel Library (TPL) in .NET 4.5

The most notable changes in .NET 4.5 will most likely be the async and await keywords. There is a major focus on making continuations as fast as possible, and the await keyword will hopefully simplify writing continuations.
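A minimal sketch of the new keywords (names are mine): the compiler turns everything after an await into a continuation, so no thread blocks while the awaited task runs.

```csharp
using System;
using System.Threading.Tasks;

class AsyncDemo
{
    // "async" marks the method; "await" yields until the task completes,
    // then resumes the rest of the method as a compiler-generated continuation.
    public static async Task<int> GetAnswerAsync()
    {
        int half = await Task.Run(() => 21);
        return half * 2;
    }

    static void Main()
    {
        // Blocking on .Result is fine in a console host for demonstration.
        Console.WriteLine(GetAnswerAsync().Result); // 42
    }
}
```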


There is a lot of support for multi-threaded, parallel and asynchronous programming within the .NET framework. Hopefully you now have a better understanding of what each construct does. The latest addition, the TPL, has some major improvements and should be added to your toolbox. Pay attention to what .NET 4.5 will provide as it aims to make things even easier.


Although the default compare tool built into Visual Studio/TFS is adequate, I have run into cases where third-party tools automatically resolved conflicts that the default tool did not. I have used a variety of tools and prefer KDiff3, but WinMerge, Beyond Compare, and others are all great options.

After setting up Visual Studio to use a custom tool, it will automatically use it for both comparisons and merges without prompting you. Conflict detection on check-in is still done by TFS, so it may list a certain number of conflicts, but when you open them locally with your custom tool, they may all be resolved automatically. It may seem like an extra step, but with just the built-in tool you would have to handle all of those conflicts yourself.

Go to Tools -> Options from the main menu.

Then go to Source Control -> Visual Studio Team Foundation Server.

From here click on Configure User Tools button, which will bring up the following popup.

After clicking Add, you will be given the opportunity to provide information for using a custom merge and compare tool.

You can see from the drop down there are two choices that correspond to our tools. Third party tools use different command line arguments for specifying compare and merge operations, so we need to specify them separately.

  • For the extension textbox, enter “.*” without the quotes. This will use the tool you specify for all of the file types.
  • Select the operation for Compare or Merge.
  • In the Command textbox, click the ellipsis button and browse to the executable of the third-party utility you have installed.

You can see the right arrow is clicked in the figure above, which expands a help menu. This menu specifies how Visual Studio will provide the files, options and information to the utility.

For this example I’m using KDiff3. The arguments I use for compare are:

    %1 --fname %6 %2 --fname %7

For the merge arguments, I use:

    %3 --fname %8 %2 --fname %7 %1 --fname %6 -o %4

Tools Used:

  • KDiff3 0.9.95
  • Visual Studio 2010 SP1

Setting Up Additional Configurations and the Files

When creating a new web application project in ASP.NET 4, the web.config is included as expected, but so are two additional files as seen below.


If you don’t see them initially, expand the collapsed entries by clicking the little black arrow to the left of the Web.config filename.

What each file does will be discussed later on, but first let’s see how to add more files. If you right-click on the web project, go through the menu to add a new item, and select Web Configuration File, you will not get a file automatically associated like the Debug and Release files seen above. It will look like the following.


To have things work nicely, the build configurations should be set up first. Go through the toolbar or whatever process you like best to edit the build configurations.


This will provide us with the popup to create new build configurations.


In the next window, fill in the settings that are appropriate for your new configuration. For example, Testing, Staging, etc.


After doing this and reloading the project file, the Web.Testing.config still doesn’t fit into the collapsible Web.config area. This is because it was added before the build configuration, so make sure to add the build configurations first! If you find yourself in this situation, you can manually edit the project file to create the association.

After opening up the project file for editing and searching for Web.config, we find the following.

  <Content Include="Web.config" />
  <Content Include="Web.Debug.config">
    <DependentUpon>Web.config</DependentUpon>
  </Content>
  <Content Include="Web.Release.config">
    <DependentUpon>Web.config</DependentUpon>
  </Content>

Notice the difference for the Debug and Release files? Where is the Testing entry? Searching for it in the project file, it’s found as a normal file entry.

    <Content Include="Web.Testing.config" />

You can manually remove the plain Content entry for the Testing file and create one that mimics the Debug and Release entries.

  <Content Include="Web.config" />
  <Content Include="Web.Debug.config">
    <DependentUpon>Web.config</DependentUpon>
  </Content>
  <Content Include="Web.Release.config">
    <DependentUpon>Web.config</DependentUpon>
  </Content>
  <Content Include="Web.Testing.config">
    <DependentUpon>Web.config</DependentUpon>
  </Content>

After saving the changes and reloading the project file, the association for Testing is correct.


Generating the Transformed Configuration Files

At this point, it’s easy to see that the middle portion of the filename corresponds to the build configuration. What does it actually do? By default, deploying will produce a transformed configuration file. This doesn’t happen for normal build operations and debugging, like hitting F5. Take note here that the Web.Debug.config entry will not be transformed into your debugging web.config file when running in Visual Studio; without the extension mentioned below, transformation would only happen when deploying the application in Debug mode. After setting up a Publish entry in One-Click Publish and deploying it to a local file system folder, the following can be seen when Show All Files is selected for the project.


Notice the path obj\Testing\TransformWebConfig and then the original and transformed directories. Comparing the two Web.config entries at this point will show the differences, if any.

Using the Transformation Syntax to Produce Custom Settings per Build Configuration

There are a variety of ways to apply transformation, but the two I find myself using most often are Replace and SetAttributes. Here are some examples:

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">

  <elmah>
    <errorLog xdt:Transform="SetAttributes(connectionString)"
      connectionString="Data Source=dbserver;Initial Catalog=dbname;User id=username;Password=password;" />
    <!-- The mail settings are transformed the same way, e.g. with
         xdt:Transform="SetAttributes(to)" on the errorMail element. -->
  </elmah>

  <appSettings xdt:Transform="Replace">
    <add key="RemoteServerIP" value="" />
    <add key="RemoteServerPath" value="." />
  </appSettings>

  <connectionStrings xdt:Transform="Replace">
    <add name="MyConnectionString" connectionString="Data Source=dbserver;Initial Catalog=dbname;User id=username;Password=password;" />
    <add name="CsvConnectionString" connectionString="Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties='text;HDR=Yes;FMT=Delimited';Data Source="/>
  </connectionStrings>

</configuration>

The ELMAH connection string and mail settings use the SetAttributes transform, matching on the name of the attribute. The result of these operations will change the attribute values for connectionString and to, respectively. For the appSettings, the Replace transform is used to swap out the whole appSettings section. You could handle these in different ways, but I find that usually all or most of the appSettings values change per build configuration, so I simply replace the whole section rather than adding transform syntax to each line.

What this provides is a way to set any number of configuration changes based on the build configuration. As shown above, the connection strings don’t have to be worried about and changed when doing different deployments. You can set it and forget it, for the most part.

Please reference MSDN for the full documentation on transformation syntax: http://msdn.microsoft.com/en-us/library/ie/dd465326.aspx

That Works for ASP.NET web.config files, But What About the app.config files?

Unfortunately, it’s not directly built into Visual Studio/MSBuild, but there is an excellent free extension called SlowCheetah – XML Transforms. This extension not only supports these same types of operations on app.config files, it also transforms config files during Visual Studio debugging. Bring on F5 integration! It even works for any XML file within your projects. For example, I often have a logging.debug.config and a logging.release.config to keep my web.config or app.config clean. This extension transforms those files perfectly and outputs them into the bin directory.

It also allows for previewing of the transformations when you right-click on one of the transformation XML files.


The built-in functionality is long overdue and a much nicer implementation than using the Enterprise Library dynamic configuration feature they added around version 3. There are some other tools available as well, but having it built in reduces the guesswork and cross-training. Throw in the SlowCheetah extension and it’s pretty feature-complete. Hopefully the Visual Studio team incorporates SlowCheetah’s features in vNext.

Happy Transformations!

ASP.NET MVC3 has built-in mechanisms to support an Inversion of Control container for the framework. Let us look at how to tie StructureMap into MVC3’s framework so that it provides Controller instances, as well as how to use it in the usual way to provide our custom classes.

If you’re using NuGet, you’ll find it automatically includes a reference to WebActivator, which is something that allows for calling startup code without having to edit the global.asax file. This post is for those that don’t want to use WebActivator for whatever reason.

Inside of the global.asax file add a method to perform the container initialization.

		private static void InitializeContainer()
		{
			// Configure IoC
			DependencyRegistrar.EnsureDependenciesRegistered();

			StructureMapDependencyResolver structureMapDependencyResolver = new StructureMapDependencyResolver();
			DependencyResolver.SetResolver(structureMapDependencyResolver);
		}

In the first bit of code, the DependencyRegistrar class configures StructureMap once and only once.

		public static void EnsureDependenciesRegistered()
		{
			if (alreadyRegistered)
			{
				return;
			}

			lock (SyncronizationLock)
			{
				if (alreadyRegistered)
				{
					return;
				}

				RegisterDependencies();
				alreadyRegistered = true;
			}
		}

		private static void RegisterDependencies()
		{
			// The scanning conventions here are illustrative; adjust to your solution.
			ObjectFactory.Initialize(
				x => x.Scan(
					scan =>
						{
							scan.TheCallingAssembly();
							scan.WithDefaultConventions();
						}));

			// Place a breakpoint on the line and see the configuration of StructureMap.
			string configuration = ObjectFactory.WhatDoIHave();
		}

The line in the InitializeContainer method instantiates an instance of the StructureMapDependencyResolver class. This class implements the MVC framework's IDependencyResolver interface, which is how StructureMap will interact with the MVC DependencyResolver.

	public class StructureMapDependencyResolver : IDependencyResolver
	{
		#region Implementation of IDependencyResolver

		/// <summary>
		/// Resolves singly registered services that support arbitrary object creation.
		/// </summary>
		/// <returns>
		/// The requested service or object.
		/// </returns>
		/// <param name="serviceType">The type of the requested service or object.</param>
		public object GetService(Type serviceType)
		{
			if (serviceType == null)
			{
				return null;
			}

			try
			{
				return ObjectFactory.GetInstance(serviceType);
			}
			catch
			{
				return null;
			}
		}

		/// <summary>
		/// Resolves multiply registered services.
		/// </summary>
		/// <returns>
		/// The requested services.
		/// </returns>
		/// <param name="serviceType">The type of the requested services.</param>
		public IEnumerable<object> GetServices(Type serviceType)
		{
			return ObjectFactory.GetAllInstances(serviceType).Cast<object>();
		}

		#endregion
	}


At this point, your Controller classes will be provided by StructureMap. Not too bad! To use it for other types of injection, just go about it in the same way as usual. You can see below the injection of an ILog implementation. In the source code you can see that I use log4net and a StructureMap Registry class.

    public class DemoController : Controller
    {
        private static ILog log;

        /// <summary>
        /// Initializes a new instance of the <see cref="DemoController"/> class. 
        /// </summary>
        /// <param name="injectedLog">
        /// ILog implementation injected from the IoC container
        /// </param>
        public DemoController(ILog injectedLog)
        {
            log = injectedLog;
        }

        public ActionResult Index()
        {
            log.Debug("The default page has been requested!");
            return View();
        }
    }

I wanted to make sure that my HttpScoped objects are disposed of when the request ends, so I also added a call to the built in StructureMap method ReleaseAndDisposeAllHttpScopedObjects.

		protected void Application_EndRequest(object sender, EventArgs e)
		{
			ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
		}

Not too much code in order to get the full power out of StructureMap with ASP.NET MVC.

Tools Used:

  • Visual Studio 2010
  • StructureMap
  • NuGet
  • log4Net

Download the Source:

StructureMap MVC3 Demo

Follow-Up Reading on the Service Locator Anti-Pattern/Code Smell

I have found that having all projects expanded by default can be annoying, as I tend to open solution files when working in Microsoft Expression Blend.  This often leaves me having to collapse each project individually.  Within Visual Studio, I use PowerCommands for VS2010 and PowerCommands for VS2008 to provide the collapse-all functionality and it works great.

Since Blend 4 uses MEF, I set out on writing an extension to provide this functionality.  I learned how to begin with How to Hack Expression Blend.  The most helpful article I found was Building Extensions for Expression Blend 4 Using MEF by Timmy Kokke.  Following his startup example, I was able to use the debugger and figure out how to interact with Blend’s various parts.  I put forth some effort to have a dropdown menu or button added to the right of the Help entry, but I haven’t finished that exercise yet.  Ideally, I would like to have an entry added to the context menu of the Projects window when right-clicking on a solution or project file.  In the mean time, I have it working with a docking window.

Step 1: Installation

Download (DLL only):

Extract the extension DLL to the folder location “…\Program Files (x86)\Microsoft Expression\Blend 4\Extensions”.  You may have to right-click on the DLL, open Properties, and click the Unblock button.

Step 2: Using the Extension

After a successful installation, the Window dropdown menu should contain an entry for Collapse All Projects.  As you can see from the screenshot, I have configured the extension to use Ctrl+Shift+C as a shortcut.

Collapse All Projects Menu Entry

Once the menu item is selected, a popup window should appear.  This window is like the Projects, Properties, Assets, etc. windows within Blend, which means you can dock it.  I’ve chosen to dock it to the bottom as shown below.

Collapse All Project Window Docked

Hovering over the window shows the contents.  Simply click the button for the collapse all to be applied.

Collapse All Project Window Expanded

That is it!  If you find any bugs or issues with this, please let me know.  If you get around to making it a context menu item or as a main entry of the top dropdown menu, please share.

Tools Used:

  • Reflector
  • Snoop
  • I wanted to use Mole, but was developing in VS2010.  It doesn’t appear there is a compatible version yet.  Speaking of which, I would really like to see Mole for Silverlight.

Download the Source:

In order to build a solution using WCF RIA Services on a build server, a little tweak may be needed to the configuration depending on how your solution has evolved.

Using an example of a Silverlight client project and a server side web application using WCF RIA Services, we quickly identify the problem.  Whenever a change is made to the WCF RIA Services project, Visual Studio will update the Silverlight project and development continues.  On the build machine this isn’t the case, as Visual Studio isn’t used to initiate the build.  This means that the Silverlight project would fail to build.

Even though there is no need to specify the build order when using Visual Studio, the problem with WCF RIA Services on a build server goes away if the Silverlight project is forced to build after the WCF RIA Services project.

Right-click on the solution and select Project Dependencies. 

Solution Context Menu

The following window should appear.  Verify that the Silverlight project has a dependency on the WCF RIA Services project.

 Project Dependencies Window