I have the wonderful opportunity to present 5 sessions at DevLink 2012. If you’re going, I would like to meet up and chat. I will also attend (travel plans permitting) the Windows 8 Developer Camp hosted by Jennifer Marsman the day before. Registration for that event is separate from DevLink and can be found at http://win8devcampchat.eventbrite.com/.

Using Contracts to Integrate with the Windows 8 Experience

Contracts are a new addition with Windows 8 Metro apps that provide a great user experience. For example, users want to Share information in a variety of ways and Windows 8 Metro Contracts allow for that. Come learn about how these Contracts work and how to implement built-in Contracts like Search, Share, PlayTo and Settings.

Using Azure with Windows Phone and Windows 8!

With phones, tablets and other devices exploding in market share, it’s important to know what technologies and tools will help you develop better applications. These devices are often short on processing power and storage, which is where Azure can really help out. Come see what it’s like to use Azure with Windows Phone and Windows 8, including examples with push notifications, storage and authentication for both platforms and a Metro application using the Azure Service Bus.

Parallel Programming in .NET and WinRT

Parallel programming remains a difficult task, but Microsoft keeps making things easier for developers. With the various constructs available, like the addition of the Task Parallel Library in .NET 4, it is important to know what is appropriate for different situations. Devices continue to gain cores and the cloud offers easily distributed computing, so developers should understand how to utilize the environments. Come for a walk-through of how and when to use these constructs, whether that is a mobile device, desktop application or the web. The examples will be C# focused, with JavaScript and F# discussed too.

Node.js, Java, PHP and Python with Azure? Why yes!

New languages and technologies keep finding their way to Azure. Need a Node.js web application? Want to use Eclipse and Java? Have an existing PHP application and want to move it to the cloud? All of these are possible and more! Come see how you can accomplish amazing things with Azure!

How to Ride the Service Bus with Azure

Do you like a loosely coupled architecture? Are you considering a hybrid application between the cloud and on-premise solutions? Are you building mobile applications with notifications and events? The Azure Service Bus can make your life much easier!

I have been picked to speak at That Conference! The two sessions that I’m lucky enough to present on are below.

Parallel Programming in .NET and WinRT

Parallel programming remains a difficult task, but Microsoft keeps making things easier for developers. With the various constructs available, like the additions of the Task Parallel Library in .NET 4 and async/await in .NET 4.5, it is important to know what is appropriate for different situations. Devices continue to gain cores and the cloud offers easily distributed computing, so developers should understand how to utilize the environments. Come for a walk-through of how and when to use these constructs, whether that is a mobile device, desktop application or the web. The examples will be C# focused, with JavaScript and F# discussed too.

Automation with the Azure Management API

Developers don’t want to repeat tasks! Take out the mundane work of managing the cloud manually and remove the chance for human error. Learn how the Azure Management REST API can be used for automating deployment changes, monitoring your application and more.

I will be speaking on April 18th at the Chicago .NET Users Group (CNUG) meeting. Scott Seely will be speaking for the Chicago Azure Cloud Users Group just before me at the same location. Come see us both! My topic and abstract is below.

Get Yourself Kinect-ed! – Greg Levenhagen

Kinect development used to mean hacking without any support, but now that the Kinect SDK, Kinect for Windows hardware and commercial support for non-XBOX 360 applications have been released, the full power of the Kinect is unleashed. Come see how to start developing with the Kinect, using its hardware features and what the Kinect SDK provides.

Concurrent programming aims to execute potentially different pieces of work at the same time, while parallel computing aims to split a piece of work into multiple pieces that execute concurrently. Parallel computing has been around for decades, but it remains a difficult problem. It aims to support multi-core, multi-CPU and distributed systems. The continued work on supporting these paradigms is great, because keeping user interfaces responsive and handling operations quickly has always been an issue. In recent years, consumption of asynchronous services has exploded, and parallel operations have grown to some extent as well. As devices continue to gain a significant number of cores, expect parallel and asynchronous functionality to become more and more common.

With the release of .NET 4, Microsoft added a new namespace under the System.Threading namespace called System.Threading.Tasks. All of the previous threading abilities are still available, but the new additions provide a different way to work with multi-threaded constructs.

With the evolution of multi-threaded capabilities within the .NET framework, things can get a little confusing. Here is a brief history with some notes.

What are Threads?

When an application or program runs, it executes as a process. Multiple processes can execute concurrently, like when an email client and a web browser are used at the same time. A look at what is going on inside of a process shows threads. A thread is to a process much as a process is to an operating system. The major difference is that processes do not share any memory between them, while threads can in a restricted fashion. Synchronizing mutable objects between threads is often a pitfall for developers.

Threads in .NET 1.0+

In the System.Threading namespace, the Thread class exists along with other classes to provide fine-grained multi-threading capabilities for the .NET framework. This means thread synchronization support and data access using classes like the following (a short sketch follows the list).

  • Mutex – Used for inter-process communication (IPC).
  • Monitor – Provides a mechanism that synchronizes access to objects.
  • Interlocked – Provides atomic operations for variables that are shared by multiple threads.
  • AutoResetEvent – Notifies a waiting thread that an event has occurred.
  • Semaphore – Limits the number of threads that can access a resource or pool of resources concurrently. (Added with .NET 2.0)
  • and many more.
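
To make the list concrete, here is a minimal sketch that creates threads directly and uses Interlocked to keep a shared counter consistent (the counter and loop counts are illustrative):

    using System;
    using System.Threading;

    class CounterDemo
    {
        private static int counter; // shared state touched by multiple threads

        static void Main()
        {
            var threads = new Thread[4];
            for (int i = 0; i < threads.Length; i++)
            {
                threads[i] = new Thread(() =>
                {
                    for (int j = 0; j < 100000; j++)
                    {
                        Interlocked.Increment(ref counter); // atomic, no lock needed
                    }
                });
                threads[i].Start();
            }

            foreach (var thread in threads)
            {
                thread.Join(); // block until each thread finishes
            }

            Console.WriteLine(counter); // always 400000
        }
    }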

The ThreadPool in .NET 1.0+

The System.Threading.ThreadPool class provides a group of system-managed threads to the application and is often a more efficient way to handle multi-threaded programming. This is because it helps the developer avoid having threads spend a majority of their time waiting on another thread or sitting in a sleep state. To execute a method in the ThreadPool, call QueueUserWorkItem, specifying the method to execute and an optional parameter for any data that is needed. An asynchronous delegate with its BeginInvoke and EndInvoke methods can also be used. The method specified will begin executing when a Thread in the ThreadPool becomes available.
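
Here is a minimal sketch of queuing work to the ThreadPool (the ProcessOrder method and its state argument are illustrative):

    using System;
    using System.Threading;

    class ThreadPoolDemo
    {
        static void Main()
        {
            // The optional second argument is passed to the WaitCallback as its state.
            ThreadPool.QueueUserWorkItem(ProcessOrder, 42);

            Console.ReadLine(); // keep the foreground thread alive; pool threads are background
        }

        private static void ProcessOrder(object state)
        {
            Console.WriteLine("Processing order {0} on a pool thread.", state);
        }
    }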

Each process is limited to one system-level thread pool. The ThreadPool manages background threads, so if all foreground threads exit, the ThreadPool will not keep the application alive. In that case, finally and using blocks are not guaranteed to run, so call Join or Wait (with a timeout where appropriate) on any work that must complete.

The default ThreadPool limits are:

  • .NET 2.0 – 25 threads per core
  • .NET 3.5 – 250 threads per core
  • .NET 4.0 (32-bit) – 1,023 threads
  • .NET 4.0 (64-bit) – 32,768 threads

The BackgroundWorker in .NET 2.0+

When there is a need to execute some non-UI process, the System.ComponentModel.BackgroundWorker will spawn a new thread and execute the operations. It offers progress reporting back to the calling thread, forwarding of exceptions and cancellation of the processing. If the situation warrants using multiple BackgroundWorkers, though, consideration should be given to the Task Parallel Library.

The BackgroundWorker class follows the event-based asynchronous pattern (EAP). The EAP means it abstracts and manages the multi-threading capabilities while allowing for basic interaction via events. When a class exposes methods ending in Async and events ending in Completed, it may be implementing some form of the EAP. Another similar pattern is the asynchronous programming model (APM), which uses Begin and End method pairs. Both the EAP and APM work well with the new .NET 4.0 Task construct that is mentioned later in this post.

Besides using the BackgroundWorker implementation directly, it can also be subclassed. That involves overriding the OnDoWork method and handling the RunWorkerCompleted and ProgressChanged events in the consuming class. The subclass provides a better level of abstraction for a single asynchronously executing method.

BackgroundWorker uses the ThreadPool, so it benefits from improvements that have been made with later versions of the .NET framework. Using the ThreadPool also means that calling Abort should not be done. In a case where you want to wait for completion or cancellation of the BackgroundWorker, you may want to consider using the Task Parallel Library.
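
As a minimal sketch of the EAP style, here is a BackgroundWorker wired up for progress, cancellation and completion. The simulated work is illustrative, and note that ProgressChanged and RunWorkerCompleted are only marshaled back to the calling thread when a SynchronizationContext is present, as in WinForms or WPF:

    using System;
    using System.ComponentModel;
    using System.Threading;

    var worker = new BackgroundWorker
    {
        WorkerReportsProgress = true,
        WorkerSupportsCancellation = true
    };

    worker.DoWork += (sender, e) =>
    {
        var w = (BackgroundWorker)sender;
        for (int i = 1; i <= 10; i++)
        {
            if (w.CancellationPending) { e.Cancel = true; return; }
            Thread.Sleep(100);        // simulated work
            w.ReportProgress(i * 10); // raises the ProgressChanged event
        }
    };

    worker.ProgressChanged += (sender, e) =>
        Console.WriteLine("{0}% complete", e.ProgressPercentage);

    worker.RunWorkerCompleted += (sender, e) =>
        Console.WriteLine(e.Cancelled ? "Cancelled" : "Done");

    worker.RunWorkerAsync(); // spawn the background operation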

The Dispatcher in .NET 3.0+

The System.Windows.Threading.Dispatcher class is actually single-threaded in the sense that it doesn’t spawn a new thread: an operation passed to BeginInvoke is queued to execute on the thread the Dispatcher is associated with. The reason for the Dispatcher’s existence boils down to thread affinity. A user interface Control or DependencyObject is forced to strictly belong to its instantiating thread. For example, in the case of Windows Presentation Foundation (WPF) and Silverlight, the Dispatcher class allows a non-UI thread to “update” a TextBox control’s Text property by marshaling the call onto the UI thread.
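
For example, a minimal sketch of marshaling a UI update from a pool thread in WPF (statusTextBox and LoadData are illustrative names):

    using System;
    using System.Threading;

    ThreadPool.QueueUserWorkItem(_ =>
    {
        string result = LoadData(); // long-running work off the UI thread

        // Queue the property update onto the UI thread that owns the control.
        statusTextBox.Dispatcher.BeginInvoke(
            new Action(() => statusTextBox.Text = result));
    });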

Parallel LINQ (PLINQ) in .NET 4.0+

PLINQ is a parallel implementation of the Language-Integrated Query (LINQ) pattern. Just like LINQ to Objects and LINQ to XML, PLINQ can operate against any IEnumerable or IEnumerable<T>. PLINQ’s extension methods live on the System.Linq.ParallelEnumerable class, and this implementation of LINQ doesn’t force parallel operations on everything. There are additional methods too, such as:

  • AsParallel – This is how to enable PLINQ. If the rest of the query can be parallelized, it will do so.
  • AsSequential<T> – Will turn a previously parallelized query back into a sequential one.
  • AsOrdered – Preserve ordering until further instructed by something like an orderby clause or AsUnordered<T>.
  • AsUnordered<T> – No longer preserve ordering of the query.
  • ForAll<T> – Allows for processing in parallel instead of requiring a merge back to the consuming thread.
  • Aggregate – Provides intermediate and final aggregation of results.
  • and a few more.

The AsParallel method is very straightforward to try, as the call is made directly on the data source within a LINQ query or foreach loop.

PLINQ does not guarantee that the query will be executed in parallel. It checks whether it is safe to parallelize and whether doing so will likely provide an improvement. If those conditions are not satisfied, it will execute the query sequentially. Parallel execution can be forced by calling WithExecutionMode with ParallelExecutionMode.ForceParallelism.
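
A minimal sketch showing both the straightforward AsParallel call and the forced mode (the query itself is illustrative):

    using System.Linq;

    int[] numbers = Enumerable.Range(0, 1000000).ToArray();

    var evenSquares = numbers
        .AsParallel()
        .WithExecutionMode(ParallelExecutionMode.ForceParallelism) // skip the "worth it?" analysis
        .Where(n => n % 2 == 0)
        .Select(n => (long)n * n)
        .ToArray();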

Exceptions are bundled up together from all the threads and placed into an AggregateException, which you can then iterate through to process each exception, or call Flatten on to collapse nested aggregates first. This special type of exception is used in other areas of .NET 4.0 multi-threading too.
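
A sketch of handling the bundled exceptions (source and Process are illustrative):

    try
    {
        var results = source.AsParallel().Select(item => Process(item)).ToArray();
    }
    catch (AggregateException ex)
    {
        // Flatten collapses nested AggregateExceptions before iterating.
        foreach (var inner in ex.Flatten().InnerExceptions)
        {
            Console.WriteLine(inner.Message);
        }
    }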

Custom partitioning is offered for a way that a developer can specify how the data source should be parallelized. For instance, if the data source contains hundreds of thousands of rows and testing shows that some of the threads are only given a few hundred rows, a partition can be created on the data source accordingly. Custom partitioning is done to the data source before the query and the resulting object replaces the data source within the query.
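
A sketch of supplying a custom partitioner, assuming an in-memory list as the data source (rows and Transform are illustrative):

    using System.Collections.Concurrent;
    using System.Linq;

    // A load-balancing partitioner hands out chunks dynamically instead of
    // statically splitting the list up front.
    var partitioner = Partitioner.Create(rows, loadBalance: true);

    var processed = partitioner
        .AsParallel()
        .Select(row => Transform(row))
        .ToArray();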

The Task Parallel Library (TPL) in .NET 4.0+

The TPL is a collection of constructs in the System.Threading and System.Threading.Tasks namespaces. This post has split PLINQ out above because it resides in a different namespace, but some documentation refers to them together. Some of the same characteristics mentioned in PLINQ apply here too, since PLINQ actually reduces a query into Tasks (defined below).

As mentioned in the opening statement, all of the fine-grained multi-threading constructs are still available, so what is the need for the TPL? The goal is to make parallel programming easier. The TPL uses an algorithm that dynamically updates during execution for the most effective utilization of resources. The custom partitioning covered under PLINQ above is a way to override this built-in partitioning. Collectively, the TPL handles the default partitioning of data, the ThreadPool, cancellations and state.

“The Task Parallel Library is the preferred way to write multi-threaded and parallel code.” – MSDN

The Parallel Class

The Parallel class provides the methods For, Invoke and ForEach to process operations in parallel, as sketched after the list below.

  • For – parallel equivalent of the for keyword
  • Invoke – executes Action delegates in parallel
  • ForEach – parallel equivalent of the foreach keyword
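
Minimal sketches of each method (the collection and method names are illustrative):

    using System;
    using System.Threading.Tasks;

    // For: parallel loop over an index range.
    Parallel.For(0, 100, i => Console.WriteLine(i));

    // ForEach: parallel loop over any IEnumerable<T>.
    Parallel.ForEach(customers, customer => SendInvoice(customer));

    // Invoke: run several independent actions in parallel.
    Parallel.Invoke(
        () => LoadProducts(),
        () => LoadOrders());
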
The Task Class

Tasks offer much of the same functionality as previous solutions like Thread, but also include continuations, cancellation tokens, forwarding and context synchronization. The Parallel class reduces its For, ForEach and Invoke methods into Tasks. A Task is semantically the same as a Thread, but does not require creating an operating system thread, because it is put into the ThreadPool. Also, multiple Tasks may run on the same Thread. That can be confusing at first, but it offers a lot of flexibility.

In comparison to starting parallel execution of a method by calling QueueUserWorkItem directly on the ThreadPool, the Task class has a Factory property of type TaskFactory. Calling StartNew on the TaskFactory and passing in a lambda expression will queue up the work. By default, Tasks are placed in the ThreadPool. If the option for a long-running operation is specified, the Task will be created on a separate thread. Either way, these ways of creating a Task mean that execution happens on a background thread. If you want a reference to the Task created, the StartNew method returns a Task object. Using that object, traditional functionality is available for things like waiting. Tasks also support setting up parent-child relationships, which can be very useful for wait and continuation operations.
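
A minimal sketch of starting a Task, reading its result and applying the long-running option (PollForChanges is illustrative):

    using System;
    using System.Threading.Tasks;

    // Queued to the ThreadPool by default.
    Task<long> sumTask = Task.Factory.StartNew(() =>
    {
        long sum = 0;
        for (int i = 0; i < 1000000; i++) sum += i;
        return sum;
    });

    Console.WriteLine(sumTask.Result); // Result blocks until the Task completes

    // LongRunning hints that a dedicated thread should be used instead of the pool.
    Task pollTask = Task.Factory.StartNew(
        () => PollForChanges(),
        TaskCreationOptions.LongRunning);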

Continuations

Continuations provide a way to execute code after a Task completes, with the option of using the result from the Task. They offer a very nice fluent syntax that resembles having a Completed method tied to an event. The fluent syntax isn’t required; with a reference to the Task, you can decide separately what to continue with. Multiple continuations can be specified to handle error conditions, cancellations and normal completion of a Task. One of the major goals of continuations is to avoid blocking while waiting for a Thread or Task to complete.
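
A sketch of the fluent syntax with separate continuations for success and failure (DownloadReport is illustrative and returns a byte array here):

    using System;
    using System.Threading.Tasks;

    var download = Task.Factory.StartNew(() => DownloadReport());

    // Runs only if the antecedent completed normally; t.Result is safe to read.
    download.ContinueWith(
        t => Console.WriteLine("Got {0} bytes", t.Result.Length),
        TaskContinuationOptions.OnlyOnRanToCompletion);

    // Runs only if the antecedent threw; observing t.Exception keeps it from going unhandled.
    download.ContinueWith(
        t => Console.WriteLine(t.Exception.Flatten().InnerException.Message),
        TaskContinuationOptions.OnlyOnFaulted);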

Parallel Primitives and Data Structures in .NET 4.0+

Thread Safe Collections

In .NET 1.0, the System.Collections namespace provides some built-in support for thread safety through its Synchronized wrappers. Microsoft states that the implementation is not scalable and is not completely protected from race conditions.

With .NET 2.0, the System.Collections.Generic namespace brought generic collections, but removed any thread safe capabilities. This means the consumer needs to handle all synchronization, but the type safety, improved performance, and scalability are significant.

Bring in .NET 4.0 and the addition of System.Collections.Concurrent. This provides even better performance than the .NET 2.0 collections and a more complete implementation of thread safety than .NET 1.0. This namespace includes the following (a short sketch follows the list):

  • BlockingCollection<T>
  • ConcurrentBag<T>
  • ConcurrentDictionary<TKey, TValue>
  • ConcurrentQueue<T>
  • ConcurrentStack<T>
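
A couple of minimal sketches (the keys and values are illustrative):

    using System;
    using System.Collections.Concurrent;

    var hits = new ConcurrentDictionary<string, int>();
    // Atomically insert 1 or increment the existing value, with no explicit lock.
    hits.AddOrUpdate("home", 1, (key, current) => current + 1);

    var queue = new ConcurrentQueue<string>();
    queue.Enqueue("message");
    string message;
    if (queue.TryDequeue(out message))
    {
        Console.WriteLine(message);
    }
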
Lazy Initialization

Lazy initialization of objects comes into play when constructing them is expensive. The application may never need the expensive objects, so using these new constructs (sketched after the list) can have a significant impact on performance.

  • Lazy<T> – Thread-safe lazy initialization.
  • ThreadLocal<T> – Lazy initialization specific to each thread.
  • LazyInitializer – Alternative to Lazy<T> by using static methods.
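
A minimal sketch of Lazy<T>, assuming an expensive Report object (Report and BuildReport are illustrative):

    using System;

    // BuildReport does not run until Value is first read, and the thread-safe
    // mode guarantees it runs exactly once even under concurrent access.
    var report = new Lazy<Report>(() => BuildReport(), isThreadSafe: true);

    // Later, possibly from multiple threads:
    Console.WriteLine(report.Value.Title);
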
Barrier

The Barrier class is interesting because it allows Threads to have checkpoints. Each Barrier represents the end of some block or phase of work. At a checkpoint, it allows for specifying a single thread to do some post-block work before the others continue. Microsoft recommends using Tasks with implicit joins if the Barriers are only doing one or two blocks of work.
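
A minimal sketch with three participants and a post-phase action (the phase work methods are illustrative):

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    // The post-phase delegate runs once per phase, on a single thread,
    // after all participants have signaled.
    var barrier = new Barrier(3, b =>
        Console.WriteLine("Phase {0} complete", b.CurrentPhaseNumber));

    Action worker = () =>
    {
        DoPhaseOneWork();
        barrier.SignalAndWait(); // checkpoint: wait for the other participants
        DoPhaseTwoWork();
        barrier.SignalAndWait();
    };

    Parallel.Invoke(worker, worker, worker);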

SpinLock and SpinWait

The SpinLock and SpinWait structs were added because sometimes it’s more efficient to spin than block. That may seem counter-intuitive, but if the spin will be relatively quick, it can produce major benefits in a highly parallelized application by avoiding context switches.
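
A minimal sketch of the required lockTaken pattern (the Tally class is illustrative):

    using System.Threading;

    public class Tally
    {
        private static SpinLock spinLock = new SpinLock(); // a struct; never copy it
        private static int total;

        public static void Add(int value)
        {
            bool lockTaken = false;
            try
            {
                spinLock.Enter(ref lockTaken);
                total += value; // keep the critical section very short
            }
            finally
            {
                if (lockTaken)
                {
                    spinLock.Exit();
                }
            }
        }
    }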

Miscellaneous Notes

Deadlock Handling

In the case of a deadlock, SQL Server will pick one of the offending processes as the victim and terminate it. This doesn’t happen within .NET. A developer must take care to avoid deadlocks and should use timeouts to help avoid this situation.
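
A sketch of taking a lock with a timeout rather than blocking indefinitely (accountLock and Transfer are illustrative):

    using System;
    using System.Threading;

    private static readonly object accountLock = new object();

    // ...

    if (Monitor.TryEnter(accountLock, TimeSpan.FromSeconds(5)))
    {
        try
        {
            Transfer(); // work performed under the lock
        }
        finally
        {
            Monitor.Exit(accountLock);
        }
    }
    else
    {
        // Could not acquire the lock in time; back off instead of deadlocking.
        Console.WriteLine("Lock acquisition timed out.");
    }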

Forcing Processor Affinity

In some cases, running in parallel can be problematic. One way to avoid such complications is to set the processor affinity through the Process class. Call the GetCurrentProcess method and then use the ProcessorAffinity property to get or set the affinity as needed.
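
A minimal sketch pinning the current process to the first two cores (the mask value is illustrative):

    using System;
    using System.Diagnostics;

    Process process = Process.GetCurrentProcess();

    // The affinity value is a bitmask: bit 0 = core 0, bit 1 = core 1, and so on.
    process.ProcessorAffinity = (IntPtr)0x3; // cores 0 and 1 only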

Debugging Parallel Applications in Visual Studio 2010+

There are two new debugging windows added with Visual Studio 2010: the Parallel Stacks window and the Parallel Tasks window. The Parallel Stacks window provides a diagram layout based on either Tasks or Threads and lets the developer see the call stack for each construct. The Parallel Tasks window resembles the Threads window, with a grid of all Tasks.

Task Parallel Library (TPL) in .NET 4.5

The most notable changes in .NET 4.5 will most likely be the async and await keywords. There is a major focus on making continuations as fast as possible, and the await keyword will hopefully simplify writing continuations.
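
As a minimal sketch of what the new keywords look like (the URL is illustrative):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static async Task<int> GetPageLengthAsync()
    {
        using (var client = new HttpClient())
        {
            // Control returns to the caller here; the rest of the method
            // runs as a continuation once the download completes.
            string content = await client.GetStringAsync("http://example.com/");
            return content.Length;
        }
    }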

Conclusion

There is a lot of support for multi-threaded, parallel and asynchronous programming within the .NET framework. Hopefully you now have a better understanding of what each construct does. The latest addition, the TPL, has some major improvements and should be added to your toolbox. Pay attention to what .NET 4.5 will provide as it aims to make things even easier.

Setting Up Additional Configurations and the Files

When creating a new web application project in ASP.NET 4, the web.config is included as expected, but so are two additional files as seen below.

If you don’t see them initially, expand the collapsed entries by clicking on the little black arrow to the left of the Web.config filename.

What each file does will be discussed later on, but first let’s see how to add more files. If you right-click on the web project and go through the menu to add a new item and select Web Configuration File, you will not get a file automatically associated like the Debug and Release files seen above. It will look like the following.

To have things work nicely, the build configurations should be set up first. Go through the toolbar or whatever process you like best to edit the build configurations.

This will provide us with the popup to create new build configurations.

In the next window, fill in the settings that are appropriate for your new configuration. For example, Testing, Staging, etc.

After doing this and reloading the project file, the Web.Testing.config still doesn’t fit into the collapsible Web.config area. This is because it was added before the build configuration, so make sure to add the build configurations first! If you find yourself in this situation, you can manually edit the project file to create the association.

After opening up the project file for editing and searching for Web.config, we find the following.

  <Content Include="Web.config">
    <TransformOnBuild>true</TransformOnBuild>
  </Content>
  <Content Include="Web.Debug.config">
    <DependentUpon>Web.config</DependentUpon>
    <IsTransformFile>True</IsTransformFile>
  </Content>
  <Content Include="Web.Release.config">
    <DependentUpon>Web.config</DependentUpon>
    <IsTransformFile>True</IsTransformFile>
  </Content>

Notice the difference for the Debug and Release files? Where is the Testing entry? Searching for it in the project file, it’s found as a normal file entry.

  <ItemGroup>
    <Content Include="Web.Testing.config" />
  </ItemGroup>

You can manually remove the ItemGroup entry for the Testing file and create a Content entry that mimics the Debug and Release entries.

  <Content Include="Web.config">
    <TransformOnBuild>true</TransformOnBuild>
  </Content>
  <Content Include="Web.Debug.config">
    <DependentUpon>Web.config</DependentUpon>
    <IsTransformFile>True</IsTransformFile>
  </Content>
  <Content Include="Web.Release.config">
    <DependentUpon>Web.config</DependentUpon>
    <IsTransformFile>True</IsTransformFile>
  </Content>
  <Content Include="Web.Testing.config">
    <DependentUpon>Web.config</DependentUpon>
    <IsTransformFile>True</IsTransformFile>
  </Content>

After saving the changes and reloading the project file, the association for Testing is correct.

Generating the Transformed Configuration Files

At this point, it’s easy to see that the middle portion of the filename corresponds to the build configuration. What does it actually do? By default, deploying will produce a transformed configuration file. This doesn’t happen for normal build operations and debugging, like hitting F5. Note that the Web.Debug.config entry will not be transformed into your debugging web.config file when running in Visual Studio; without the extension mentioned below, the Debug transform only applies when deploying the application in debug mode. After setting up a Publish entry in One-Click Publish and deploying it to a local file system folder, the following can be seen when Show All Files is selected for the project.

Notice the path obj\Testing\TransformWebConfig and then the original and transformed directories. Comparing the two Web.config entries at this point will show the differences, if any.

Using the Transformation Syntax to Produce Custom Settings per Build Configuration

There are a variety of ways to apply transformation, but the two I find myself using most often are Replace and SetAttributes. Here are some examples:

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <elmah>
    <errorLog
      name="ElmahConnectionString"
      connectionString="Data Source=dbserver;Initial Catalog=dbname;User id=username;Password=password;"
      xdt:Transform="SetAttributes"
      xdt:Locator="Match(name)"
      />
    <errorMail
      name="ElmahMailSettings"
      to="no-reply@devtreats.com"
      xdt:Transform="SetAttributes"
      xdt:Locator="Match(name)"
      />
    </elmah>

  <appSettings xdt:Transform="Replace">
    <add key="RemoteServerIP" value="127.0.0.1" />
    <add key="RemoteServerPath" value="." />
  </appSettings>

  <connectionStrings xdt:Transform="Replace">
    <add name="MyConnectionString" connectionString="Data Source=dbserver;Initial Catalog=dbname;User id=username;Password=password;" />
    <add name="CsvConnectionString" connectionString="Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties='text;HDR=Yes;FMT=Delimited';Data Source="/>
  </connectionStrings>
</configuration>

The ELMAH connection string and mail settings are using the SetAttributes transform by matching on the name of the attribute. The result of these operations will change the attribute values for connectionString and to, respectively. For the appSettings, the Replace transform type is used to swap out the whole appSettings section. You could handle these in different ways, but I find that usually all or most of the appSettings values change per build configuration type, so I simply replace the whole thing rather than adding transform syntax to each line.

What this provides is a way to set any number of configuration changes based on the build configuration. As shown above, the connection strings don’t have to be worried about and changed when doing different deployments. You can set it and forget it, for the most part.

Please reference MSDN for the full documentation on transformation syntax: http://msdn.microsoft.com/en-us/library/ie/dd465326.aspx

That Works for ASP.NET web.config files, But What About the app.config files?

Unfortunately, it’s not directly built into Visual Studio/MSBuild, but there is an excellent extension available for free called SlowCheetah – XML Transforms. This extension not only allows these same types of operations on app.config files, it also allows transformations of the config files during Visual Studio debugging. Bring on F5 integration! It even works for any XML file within your projects. For example, I often have a logging.debug.config and a logging.release.config to keep my web.config or app.config clean. This extension transforms those files perfectly and outputs them into the bin directory.

It also allows for previewing of the transformations when you right-click on one of the transformation XML files.

Conclusion

The built in functionality is long overdue and a much nicer implementation than using the Enterprise Library dynamic changes feature they added around version 3. There are some other tools available as well, but having it built in reduces the guess work and cross-training. Throw in the SlowCheetah extension and it’s pretty feature complete. Hopefully the Visual Studio team incorporates SlowCheetah’s features in vNext.

Happy Transformations!

ASP.NET MVC3 has built in mechanisms to support an Inversion of Control container for the framework. Let us look at how to use StructureMap and tie it into MVC3’s framework for use with providing Controller classes as well as how we would normally use it to provide our custom classes.

If you’re using NuGet, you’ll find it automatically includes a reference to WebActivator, which is something that allows for calling startup code without having to edit the global.asax file. This post is for those that don’t want to use WebActivator for whatever reason.

Inside of the global.asax file add a method to perform the container initialization.

		private static void InitializeContainer()
		{
			// Configure IoC 
			DependencyRegistrar.EnsureDependenciesRegistered();

			StructureMapDependencyResolver structureMapDependencyResolver = new StructureMapDependencyResolver();
			DependencyResolver.SetResolver(structureMapDependencyResolver);
		}

In the first bit of code, the DependencyRegistrar class configures StructureMap once and only once.

		// Backing fields for the double-checked locking below (volatile so the
		// unlocked first check reads a fresh value).
		private static volatile bool alreadyRegistered;
		private static readonly object SyncronizationLock = new object();

		public static void EnsureDependenciesRegistered()
		{
			if (alreadyRegistered)
			{
				return;
			}

			lock (SyncronizationLock)
			{
				if (alreadyRegistered)
				{
					return;
				}

				RegisterDependencies();
				alreadyRegistered = true;
			}
		}

		private static void RegisterDependencies()
		{
			ObjectFactory.Initialize(
				x => x.Scan(
					scan =>
						{
							scan.TheCallingAssembly();
							scan.WithDefaultConventions();
							scan.LookForRegistries();
						}));
#if DEBUG
			// Place a breakpoint on the line and see the configuration of StructureMap.
			string configuration = ObjectFactory.WhatDoIHave();
#endif
		}

The line in the InitializeContainer method instantiates an instance of the StructureMapDependencyResolver class.  This class implements the MVC framework’s IDependencyResolver interface, which is how StructureMap will interact with the MVC DependencyResolver.

	public class StructureMapDependencyResolver : IDependencyResolver
	{
		#region Implementation of IDependencyResolver

		/// <summary>
		/// Resolves singly registered services that support arbitrary object creation.
		/// </summary>
		/// <returns>
		/// The requested service or object.
		/// </returns>
		/// <param name="serviceType">The type of the requested service or object.</param>
		public object GetService(Type serviceType)
		{
			if (serviceType == null)
			{
				return null;
			}

			try
			{
				return ObjectFactory.GetInstance(serviceType);
			}
			catch
			{
				return null;
			}
		}

		/// <summary>
		/// Resolves multiply registered services.
		/// </summary>
		/// <returns>
		/// The requested services.
		/// </returns>
		/// <param name="serviceType">The type of the requested services.</param>
		public IEnumerable<object> GetServices(Type serviceType)
		{
			return ObjectFactory.GetAllInstances(serviceType).Cast<object>();
		}

		#endregion
	}

At this point, your Controller classes will be provided using StructureMap. Not too bad! In order to use it for other types of injection, just go about it in the same process as usual. You can see below the use of injecting an ILog implementation. In the source code you can see that I use log4Net and a StructureMap Registry class.

	[HandleError]
	public class DemoController : Controller
	{
		private readonly ILog log;

		/// <summary>
		/// Initializes a new instance of the <see cref="DemoController"/> class. 
		/// </summary>
		/// <param name="injectedLog">
		/// ILog implementation injected from the IoC container
		/// </param>
		public DemoController(ILog injectedLog)
		{
			log = injectedLog;
		}

		public ActionResult Index()
		{
			log.Debug("The default page has been requested!");
			return View();
		}
	}

I wanted to make sure that my HttpScoped objects are disposed of when the request ends, so I also added a call to the built in StructureMap method ReleaseAndDisposeAllHttpScopedObjects.

		protected void Application_EndRequest(object sender, EventArgs e)
		{
			ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
		}

Not too much code in order to get the full power out of StructureMap with ASP.NET MVC.

Tools Used:

  • Visual Studio 2010
  • StructureMap 2.6.3.0
  • ASP.NET MVC3
  • NuGet
  • log4Net 1.2.11.0

Download the Source:

StructureMap MVC3 Demo

Follow-up Reading on the Service Locator Anti-Pattern/Code Smell

I have found that having all projects expanded by default can be annoying, as I tend to open solution files when working in Microsoft Expression Blend.  This often leaves me having to collapse each project individually.  Within Visual Studio, I use PowerCommands for VS2010 and PowerCommands for VS2008 to provide the collapse-all functionality and it works great.

Since Blend 4 uses MEF, I set out on writing an extension to provide this functionality.  I learned how to begin with How to Hack Expression Blend.  The most helpful article I found was Building Extensions for Expression Blend 4 Using MEF by Timmy Kokke.  Following his startup example, I was able to use the debugger and figure out how to interact with Blend’s various parts.  I put forth some effort to have a dropdown menu or button added to the right of the Help entry, but I haven’t finished that exercise yet.  Ideally, I would like to have an entry added to the context menu of the Projects window when right-clicking on a solution or project file.  In the meantime, I have it working with a docking window.

Step 1: Installation

Download (DLL only):

Extract the extension DLL to the folder location “…\Program Files (x86)\Microsoft Expression\Blend 4\Extensions”.  You may have to right-click on the DLL and click the Unblock button.

Step 2: Using the Extension

After a successful installation, the Window dropdown menu should contain an entry for Collapse All Projects.  As you can see from the screenshot, I have configured the extension to use Ctrl+Shift+C as a shortcut.

Collapse All Projects Menu Entry

Once the menu item is selected, a popup window should appear.  This window is like the Projects, Properties, Assets, etc. windows within Blend, which means you can dock it.  I’ve chosen to dock it to the bottom as shown below.

Collapse All Project Window Docked

Hovering over the window shows the contents.  Simply click the button for the collapse all to be applied.

Collapse All Project Window Expanded

That is it!  If you find any bugs or issues with this, please let me know.  If you get around to making it a context menu item or as a main entry of the top dropdown menu, please share.

Tools Used:

  • Reflector
  • Snoop
  • I wanted to use Mole, but was developing in VS2010.  It doesn’t appear there is a compatible version yet.  Speaking of which, I would really like to see Mole for Silverlight.

In order to build a solution using WCF RIA Services on a build server, a little tweak may be needed to the configuration, depending on how your solution has evolved.

Using an example of a Silverlight client project and a server side web application using WCF RIA Services, we quickly identify the problem.  Whenever a change is made to the WCF RIA Services project, Visual Studio will update the Silverlight project and development continues.  On the build machine this isn’t the case, as Visual Studio isn’t used to initiate the build.  This means that the Silverlight project would fail to build.

Even though there is no need to specify the build order when using Visual Studio, the problem with WCF RIA Services on a build server goes away if the Silverlight project is forced to build after the WCF RIA Services project.

Right-click on the solution and select Project Dependencies. 

Solution Context Menu

The following window should appear.  Verify that the Silverlight project has a dependency on the WCF RIA Services project.

 Project Dependencies Window

Requirements:

A Visual Studio 2008 edition that supports Team Foundation Server.

Creating a Build Agent:

Start out by opening the Team Explorer window, which can be done by navigating through the menu View –> Team Explorer.  A list of solutions may appear, but if not, click on the icon highlighted in the image below and select the Team Portal project as appropriate.

Team Explorer Builds

Expand the Builds folder that is one level deep within the Team Portal project.  Upon right clicking on the Builds folder, the following menu will be displayed.  The starting point is Build Agents, so click on Manage Build Agents.

Right Click on Builds Folder

A new window will appear that allows the creation of new and editing of existing build agents.

Manage Build Agents Window

After clicking New, a popup window displays asking for the properties and configuration to create the new build agent.  The default values are shown below.

  1. Fill out a display name for the build agent.  This build agent can be used with multiple build definitions, as will be shown below.
  2. Enter the name of the build server.
  3. Optionally change the working directory to be used on the build server.  Using a shared location on the build server can be useful for team members to troubleshoot.

Build Agent Properties

After clicking okay, the newly created build agent should appear in the create and edit screen.

Manage Build Agents Part 2

Creating a Build Definition:

Now that a build agent is created, a build definition needs to be created and associated with the build agent.  In Team Explorer, under the Team Portal directory, right click the builds directory again and click on New Build Definition.

New Build Definition Context Menu

The initial screen shown below allows for entering all of the required information for a build definition.

Build Definition Creation

Enter a descriptive name for the build definition.  For the continuous integration build definitions, appending something like “_CI” may be helpful.  If you would like multiple branches to be built separately using continuous integration, appending something like “_Trunk_CI” may be helpful.

Build Definition Name

On the next tab, one or more Source Control Folders may be set up.  The only need for multiple would be in the case that solutions reference other solutions within the TFS structure.  If branching is used, be sure to narrow down the Source Control Folder to the appropriate level.  It may be useful to have all branches handled through a single build definition, but I prefer to have them individually set up.

If you actually select the text within the Source Control Folder, an ellipsis button will appear allowing you to select the appropriate location through a select folder popup.

Build Definition Source Control Folder

On the project file tab, you will see something similar to the following.  It should automatically populate the version control folder using the information previously provided, but it will warn about needing to actually create the file.  Click the create button, which will add the folder TeamBuildTypes at the root level of Team Portal and create the necessary MSBuild files.  Note that these files and folders are added to source control.  The next few screen shots will walk through the creation process for the MSBuild project.

Build Definition Project File

If the Source Control Folder you entered contains more than one solution file, you will see a list of all available solutions to build.  Select all that you want to build as part of this build definition.

MSBuild Project File Solutions

Under the configurations tab, select all of the target configurations you want to build during this build definition.  For example, Release, Debug, Staging, etc. and any Platform combinations that may be appropriate.

MSBuild Project Configurations

Within the Options tab, you can set up which automated tests are to be executed and if code analysis should be run.  Note that in order for these to run on the build server, Visual Studio must be installed, as the MSTest framework and Code Analysis settings are not part of TFS.

MSBuild Project Options

After clicking Finish, we are brought back to the build definition window and the warning should be replaced with the message “Found MSBuild project file”.

Build Definition Version Control Folder

In the Source Control Explorer window, you should now see the newly created TeamBuildTypes directory beneath the top-level Team Portal folder.

Source Control Explorer TeamBuildTypes

A list of files within the directory shows that the wizard created two files.  The *.rsp file is an MSBuild response file containing extra command-line arguments, and the *.proj file is the MSBuild XML file that may be of interest for customization at a later point.

Detailed TeamBuildTypes

The next tab in the build definition creation is for setting up how long each type of build result should be kept.  Kept means storing the full source, test results, code analysis output and anything else that may be part of the build process.  The only real concern here is disk space on the build server.  Remember that if you have many solutions using continuous integration on the build server, this may become an issue.  It may take an extra step or two, but since the code is always stored in TFS, you can rebuild from any point in history.

Build Definition Retentions

The next tab is how the build definition and previously created build agent are tied together.  Select the build agent from the drop down menu.  The new button will allow you to create a build agent as part of the build definition creation process, but for the sake of this post, I’ve separated the two.  The text entry area is asking for the place to copy the output from the build process.  Generally, I leave it on the build server, but any network location will work.  If you refer back to the folder location I used for the build definition’s working folder, you can see I’ve created a share on the build server called “builds”.  Within the builds directory are two sub-directories, “completed” and “working”.  The working folder is where each build agent executes the assigned build definitions.  The completed folder is where the output from the build definition is copied.  Within each of the sub-directories, I have folders that match the build definition.  The output created from the build definition automatically is contained in a generated folder name that includes a timestamp, so there is no worry of things getting overwritten.

Build Definition Defaults

The last tab in creating a build definition is to specify how the build will be triggered.  You can have it as a manual process, after each check-in or accumulated check-ins, or even specify specific recurrence patterns like a nightly build.

Build Definitions Triggers

Managing Alerts:

From the Team drop down menu, select Project Alerts.

Manage Alerts Team DropDown

The following window will then allow selecting which types of alerts you would like to receive.  By default, the email address and HTML format are already populated.

Project Alerts

For full control over the alerts, go to the Team drop down menu and select Alerts Editor.

Alerts Editor Context Menu

The following tabbed window will open and allow for full customization and creation of alerts.  As shown, you can create combinations using AND and OR criteria in the alerts definition.

Alerts Editor

Testing the new Build Definition and Build Agent:

Going back to the Team Explorer window and within the Team Portal –> Builds directory, right click on the newly created build definition.  Then click Queue New Build.

Queue Build Context Menu

The Queue Build window should appear.  No changes should be required.  Just click “Queue”.

Queue Build Window

After clicking “Queue”, the Build Explorer tabbed window should be open.  This window allows for filtering by build definition, status and agent.

  • Red ‘X’ = Failed
  • White circle with a Green Arrow = In Progress
  • Three white overlapping squares = Queued
  • Green Check = Success

The Build Explorer window has two tabs that can be navigated to at the bottom.  Once builds are finished, they are automatically removed from the Queued tab and moved to the Completed tab.

Build Explorer

By double-clicking on the build line entry, the details will be opened in a new tabbed window.  From here, access to the log can be found through the linked file located in the targeted drop location.  The full BuildLog.txt can be quite large.

Build Details

The Release.txt is usually much smaller and can be found by expanding the Result details and clicking the Release.txt link.

Build Details Part 2

Thoughts:

A lot of customization can be applied to the build process.  I’ve found the following book very helpful.

Book: Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build (PRO-Developer)

If you’re wondering why your Visual Studio may have less options than in some of my screen shots, it may be because I have the Visual Studio Team System 2008 Team Foundation Server Power Tools – October 2008 Release installed.  That install is not required for the purpose of this post.

Requirements:

Microsoft® Team Foundation Server 2008 install bits.

The Process:

Upon starting the setup process, the following screen is shown with a list of options.  For the build server, select Team Foundation Build and click Install.

TFS Install Wizard Start

The next screen is Microsoft asking to record and report any issue with the install experience.  Pick your preference and click Next.

TFS Install Wizard Feedback

Of course, thoroughly read the EULA and if you accept, check the box and click Next.

TFS Install Wizard EULA

Next up is the System Health Check.  If you don’t meet any of the prerequisites, follow the instructions provided.

TFS Install Wizard Progress

The default folder is shown below (on Windows 7 64-bit).

TFS Install Wizard Destination Folder

The Visual Studio Team Foundation Build service will run as a typical Windows Service, which can be found through Control Panel –> Administrative Tools –> Services.  As noted in the screen shot, this should not be a user account.  Create an account specifically for Team Foundation Build and set your password policies as appropriate.  Remember, if the password expires on the account, the service Logon property will need to be updated.  If the account’s password is invalid, all builds will fail.

TFS Install Wizard Service Account

A confirmation screen will show before proceeding.

TFS Install Wizard Summary

After the typical progress bar screen and upon successful installation, the following will be shown.  Be sure to check for any updates and install them as appropriate.

TFS Install Wizard Completion

By going to Control Panel –> Administrative Tools –> Services, the newly installed Visual Studio Team Foundation Build service can be seen.  The default values after install are shown, which have the service start automatically.

Windows Services

This service operates using HTTP, which means it’s dependent on the HTTP service.

Windows Service Dependencies

If your build server will be working with solutions created using the .NET v4 Framework, the following adjustments need to be made.  This also requires installing the .NET v4 Framework on the build server.  Within the file C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\tfsbuildservice.exe.config, adjust the following setting.

MSBuildPath XML Entry
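
To the best of my knowledge the entry looks like the following; treat the exact key and path as assumptions and verify them against the framework install on your build server:

    <appSettings>
      <!-- Assumed value: point the build service at the .NET 4 MSBuild directory. -->
      <add key="MSBuildPath" value="C:\Windows\Microsoft.NET\Framework\v4.0.30319\" />
    </appSettings>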

Thoughts:

Remember to keep the build server clean.  The definition of clean should be that only the absolutely required software be installed.  Third party tools should be included in the solution, if possible.  One of the goals of continuous integration is to allow a new team member to join, get latest from source control and start working.  The build server gets a fresh copy from source control every time a build is done to help simulate this process.  I would imagine most people are familiar with the phrase “But it works on my machine” and keeping the build server clean is a great step toward eliminating that issue.

What should be installed then?  Typically, Visual Studio if you’re going to take advantage of Automated Unit Testing and Code Analysis.  Things like the Silverlight tools may be required too, depending on your application.
