I have found that having all projects expanded by default can be annoying, as I tend to open solution files when working in Microsoft Expression Blend.  This often leaves me collapsing each project individually.  Within Visual Studio, I use PowerCommands for VS2010 and PowerCommands for VS2008 to provide the collapse-all functionality, and they work great.

Since Blend 4 uses MEF, I set out to write an extension that provides this functionality.  I learned how to get started from How to Hack Expression Blend.  The most helpful article I found was Building Extensions for Expression Blend 4 Using MEF by Timmy Kokke.  Following his startup example, I was able to use the debugger and figure out how to interact with Blend's various parts.  I put forth some effort to add a dropdown menu or button to the right of the Help entry, but I haven't finished that exercise yet.  Ideally, I would like to add an entry to the context menu of the Projects window when right-clicking on a solution or project file.  In the meantime, I have it working as a docking window.
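For anyone curious what the MEF plumbing looks like, below is a rough sketch of a Blend 4 extension package based on the IPackage/IServices approach from Timmy Kokke's article.  The class name, palette id, and the use of a plain Button are my own placeholders, and the actual collapse logic (which pokes at Blend's Projects pane internals) is omitted.

```csharp
using System.ComponentModel.Composition;
using System.Windows.Controls;
using Microsoft.Expression.Extensibility;

// Blend 4 discovers extensions through MEF: export IPackage and drop the
// compiled DLL into the Extensions folder.
[Export(typeof(IPackage))]
public class CollapseAllProjectsPackage : IPackage
{
    public void Load(IServices services)
    {
        // Host the extension UI in a dockable palette, like the built-in
        // Projects/Properties/Assets windows.
        var button = new Button { Content = "Collapse All Projects" };
        // button.Click += ... collapse logic against Blend's project tree would go here.

        var windowService = services.GetService<IWindowService>();
        windowService.RegisterPalette("CollapseAllProjects", button, "Collapse All Projects");
    }

    public void Unload()
    {
        // Nothing to clean up in this sketch.
    }
}
```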

Step 1: Installation

Download (DLL only):

Extract the extension DLL to the folder location “…\Program Files (x86)\Microsoft Expression\Blend 4\Extensions”.  You may have to right-click on the DLL, open Properties, and click the Unblock button.

Step 2: Using the Extension

After a successful installation, the Window dropdown menu should contain an entry for Collapse All Projects.  As you can see from the screenshot, I have configured the extension to use Ctrl+Shift+C as a shortcut.

Collapse All Projects Menu Entry

Once the menu item is selected, a popup window should appear.  This window is like the Projects, Properties, Assets, etc. windows within Blend, which means you can dock it.  I’ve chosen to dock it to the bottom as shown below.

Collapse All Project Window Docked

Hovering over the window shows its contents.  Simply click the button to collapse all projects.

Collapse All Project Window Expanded

That is it!  If you find any bugs or issues with this, please let me know.  If you get around to making it a context menu item or a main entry in the top dropdown menu, please share.

Tools Used:

  • Reflector
  • Snoop
  • I wanted to use Mole, but I was developing in VS2010 and it doesn’t appear there is a compatible version yet.  Speaking of which, I would really like to see Mole for Silverlight.

Download the Source:

In order to build a solution that uses WCF RIA Services on a build server, a little tweak to the configuration may be needed, depending on how your solution has evolved.

Using the example of a Silverlight client project and a server-side web application using WCF RIA Services, we can quickly identify the problem.  Whenever a change is made to the WCF RIA Services project, Visual Studio updates the Silverlight project and development continues.  On the build machine this isn’t the case, as Visual Studio isn’t used to initiate the build, which means the Silverlight project can fail to build.

Even though there is no need to specify the build order when using Visual Studio, the problem with WCF RIA Services on a build server goes away if the Silverlight project is forced to build after the WCF RIA Services project.

Right-click on the solution and select Project Dependencies. 

Solution Context Menu

The following window should appear.  Verify that the Silverlight project has a dependency on the WCF RIA Services project.

 Project Dependencies Window

Requirements:

A Visual Studio 2008 edition that supports Team Foundation Server.

Creating a Build Agent:

Start out by opening the Team Explorer window, which can be done by navigating through the menu View –> Team Explorer.  A list of team projects may appear, but if not, click on the icon highlighted in the image below and select the Team Portal project as appropriate.

Team Explorer Builds

Expand the Builds folder that is one level deep within the Team Portal project.  Right-clicking the Builds folder displays the following menu.  The starting point is build agents, so click Manage Build Agents.

Right Click on Builds Folder

A new window will appear that allows you to create new build agents and edit existing ones.

Manage Build Agents Window

After clicking New, a popup window appears asking for the properties and configuration needed to create the new build agent.  The default values are shown below.

  1. Fill out a display name for the build agent.  This build agent can be used with multiple build definitions, as will be shown below.
  2. Enter the name of the build server.
  3. Optionally change the working directory to be used on the build server.  Using a shared location on the build server can be useful for team members when troubleshooting.

Build Agent Properties

After clicking OK, the newly created build agent should appear in the Manage Build Agents window.

Manage Build Agents Part 2

Creating a Build Definition:

Now that a build agent has been created, a build definition needs to be created and associated with it.  In Team Explorer, under the Team Portal project, right-click the Builds folder again and click New Build Definition.

New Build Definition Context Menu

The initial screen, shown below, allows you to enter all of the required information for a build definition.

Build Definition Creation

Enter a descriptive name for the build definition.  For continuous integration build definitions, appending something like “_CI” may be helpful.  If you would like multiple branches to be built separately using continuous integration, appending something like “_Trunk_CI” may be helpful.

Build Definition Name

On the next tab, one or more Source Control Folders may be set up.  The only need for multiple folders is when solutions reference other solutions within the TFS structure.  If branching is used, be sure to narrow the Source Control Folder down to the appropriate level.  It may be useful to have all branches handled through a single build definition, but I prefer to set them up individually.

If you actually select the text within the Source Control Folder, an ellipsis button will appear allowing you to select the appropriate location through a select folder popup.

Build Definition Source Control Folder

On the Project File tab, you will see something similar to the following.  It should automatically populate the version control folder using the information previously provided, but it will warn that the file still needs to be created.  Click the Create button, which will add the folder TeamBuildTypes at the root level of Team Portal and create the necessary MSBuild files.  Note that these files and folders are added to source control.  The next few screenshots walk through the creation process for the MSBuild project.

Build Definition Project File

If the Source Control Folder you entered contains more than one solution file, you will see a list of all available solutions to build.  Select all that you want to build as part of this build definition.

MSBuild Project File Solutions

Under the Configurations tab, select all of the target configurations you want to build during this build definition, for example Release, Debug, Staging, etc., along with any Platform combinations that may be appropriate.

MSBuild Project Configurations

Within the Options tab, you can set up which automated tests are to be executed and if code analysis should be run.  Note that in order for these to run on the build server, Visual Studio must be installed, as the MSTest framework and Code Analysis settings are not part of TFS.

MSBuild Project Options

After clicking Finish, we are brought back to the build definition window and the warning should be replaced with the message “Found MSBuild project file”.

Build Definition Version Control Folder

In the Source Control Explorer window, you should now see the newly created TeamBuildTypes folder beneath the top-level Team Portal folder.

Source Control Explorer TeamBuildTypes

A list of the files within the directory shows that the wizard created two files.  The *.rsp file is an MSBuild response file, while the *.proj file is the MSBuild project file that may be of interest for customization at a later point.

Detailed TeamBuildTypes

The next tab in the build definition creation is for setting up how long each type of build result should be kept.  By kept, it means storing the full source, test results, code analysis output and anything else that may be part of the build process.  The only real concern here is disk space on the build server; remember that if many solutions use continuous integration on the build server, this can become an issue.  It may take an extra step or two, but since the code is always stored in TFS, you can rebuild from any point in history.

Build Definition Retentions

The next tab is where the build definition and the previously created build agent are tied together.  Select the build agent from the drop-down menu.  The New button lets you create a build agent as part of the build definition creation process, but for the sake of this post I’ve separated the two.  The text entry area asks where to copy the output from the build process.  Generally I leave it on the build server, but any network location will work.

If you refer back to the working folder location I used earlier, you can see I’ve created a share on the build server called “builds”.  Within the builds directory are two sub-directories, “completed” and “working”.  The working folder is where each build agent executes its assigned build definitions, and the completed folder is where the output from the build definition is copied.  Within each of the sub-directories I have folders that match the build definitions.  The output from each build is placed in a generated folder whose name includes a timestamp, so there is no worry about things getting overwritten.

Build Definition Defaults

The last tab in creating a build definition specifies how the build will be triggered.  You can make it a manual process, build after each check-in or after accumulated check-ins, or specify a recurrence pattern such as a nightly build.

Build Definitions Triggers

Managing Alerts:

From the Team drop down menu, select Project Alerts.

Manage Alerts Team DropDown

The following window will then allow selecting which types of alerts you would like to receive.  By default, the email address and HTML format are already populated.

Project Alerts

For full control over the alerts, go to the Team drop down menu and select Alerts Editor.

Alerts Editor Context Menu

The following tabbed window will open and allow for full customization and creation of alerts.  As shown, you can create combinations using AND and OR criteria in the alerts definition.

Alerts Editor

Testing the new Build Definition and Build Agent:

Going back to the Team Explorer window, within the Team Portal –> Builds directory, right-click on the newly created build definition and click Queue New Build.

Queue Build Context Menu

The Queue Build window should appear.  No changes should be required.  Just click “Queue”.

Queue Build Window

After clicking “Queue”, the Build Explorer tabbed window should open.  This window allows filtering by build definition, status and agent, and uses the following status icons:

  • Red ‘X’ = Failed
  • White circle with a Green Arrow = In Progress
  • Three white overlapping squares = Queued
  • Green Check = Success

The Build Explorer window has two tabs that can be navigated to at the bottom.  Once builds are finished, they are automatically removed from the Queued tab and moved to the Completed tab.

Build Explorer

Double-clicking on a build entry opens the details in a new tabbed window.  From here, the build log can be accessed through the linked file located in the targeted drop location.  The full BuildLog.txt can be quite large.

Build Details

The Release.txt is usually much smaller and can be found by expanding the Result details and clicking the Release.txt link.

Build Details Part 2 build_34

Thoughts:

A lot of customization can be applied to the build process.  I’ve found the following book very helpful.

Book: Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build (PRO-Developer)

If you’re wondering why your Visual Studio may have fewer options than shown in some of my screenshots, it may be because I have the Visual Studio Team System 2008 Team Foundation Server Power Tools – October 2008 Release installed.  That install is not required for the purposes of this post.

Requirements:

Microsoft Team Foundation Server 2008 install bits.

The Process:

Upon starting the setup process, the following screen is shown with a list of options.  For the build server, select Team Foundation Build and click Install.

TFS Install Wizard Start

The next screen is Microsoft asking to record and report any issue with the install experience.  Pick your preference and click Next.

TFS Install Wizard Feedback

Of course, thoroughly read the EULA and if you accept, check the box and click Next.

TFS Install Wizard EULA

Next up is the System Health Check.  If you don’t meet all of the prerequisites, follow the instructions provided.

TFS Install Wizard Progress

The default folder is shown below (on Windows 7 64-bit).

TFS Install Wizard Destination Folder

The Visual Studio Team Foundation Build service runs as a typical Windows service, which can be found through Control Panel –> Administrative Tools –> Services.  As noted in the screenshot, this should not be a personal user account.  Create an account specifically for Team Foundation Build and set your password policies as appropriate.  Remember, if the password on the account expires, the service’s Log On property will need to be updated; if the account’s password is invalid, all builds will fail.

TFS Install Wizard Service Account

A confirmation screen will show before proceeding.

TFS Install Wizard Summary

After the typical progress bar screen and upon successful installation, the following will be shown.  Be sure to check for any updates and install them as appropriate.

TFS Install Wizard Completion

By going to Control Panel –> Administrative Tools –> Services, the newly installed Visual Studio Team Foundation Build service can be seen.  The default values after install are shown, which have the service start automatically.

Windows Services

This service operates using HTTP, which means it’s dependent on the HTTP service.

Windows Service Dependencies

If your build server will be working with solutions created using the .NET v4 Framework, the following adjustments need to be made.  This also requires installing the .NET v4 Framework on the build server.  Within the file C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\tfsbuildservice.exe.config, adjust the following setting.

MSBuildPath XML Entry

Thoughts:

Remember to keep the build server clean.  The definition of clean should be that only the absolutely required software is installed.  Third-party tools should be included in the solution, if possible.  One of the goals of continuous integration is to allow a new team member to join, get latest from source control and start working.  The build server gets a fresh copy from source control every time a build is done, which helps simulate this process.  I would imagine most people are familiar with the phrase “But it works on my machine”, and keeping the build server clean is a great step toward eliminating that issue.

What should be installed then?  Typically, Visual Studio if you’re going to take advantage of Automated Unit Testing and Code Analysis.  Things like the Silverlight tools may be required too, depending on your application.

Useful Links:

In order to share code, such as Data Transfer Objects (DTOs), between a Silverlight client and the server, there are a number of approaches.  First, I’ll describe a way that only works in C#, and then a method that works with both C# and VB.NET.  WCF RIA Services has a goal of sharing code between the client and server, but this post is for those situations when WCF RIA Services isn’t used.

The scenario being addressed has a Silverlight application and an ASP.NET web application, and the issue is sharing objects between them.  Using Windows Communication Foundation (WCF) for the services that the Silverlight application will call, objects are exposed through the service contract, but I would like to use the Message Gateway pattern to abstract away the service calls within the client.

Using a single class library to share code (C# only)

If you create a new project using the Silverlight Class Library template, you will be able to reference that class library in both the Silverlight application and the ASP.NET web application.  From within the service layer in the web application, you can create and use any objects in the Silverlight class library that cross the application boundary.

When creating a service reference in the Silverlight application, point it at the WCF service you have just created and make sure to select “Reuse All Application Assemblies” or select the individual namespaces that you have created within the Silverlight class library.  You will then need to reference the Silverlight class library assembly in the Silverlight application.

Silverlight Class Library
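To make the first approach concrete, here is a rough sketch of a WCF service in the web application returning a type that lives in the shared Silverlight class library.  The contract, class, and namespace names (ICustomerService, CustomerDto, MyApp.Shared) are placeholders of my own, not anything from an actual project.

```csharp
using System.ServiceModel;
using MyApp.Shared;   // the shared Silverlight class library containing CustomerDto

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    CustomerDto GetCustomer(int id);
}

public class CustomerService : ICustomerService
{
    public CustomerDto GetCustomer(int id)
    {
        // CustomerDto crosses the application boundary; because the Silverlight
        // client references the same class library and reuses types when the
        // service reference is added, no proxy class is generated for it.
        return new CustomerDto { Id = id, Name = "Example", Email = "example@example.com" };
    }
}
```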

Using two class libraries to share code (C# and VB.NET)

If you’re using VB.NET, the method above does not work, as the compiler will complain about different runtime versions.  The cause may not be the compiler itself, but rather something in the VB.NET runtime.  Regardless, the next method works; it just takes a little more effort.

Given the Silverlight application and the ASP.NET web application, you will need to create two more projects: one a Silverlight Class Library and the other a traditional .NET Class Library.  Create the objects within only one of the class libraries, then use the “Add As Link” option to include the same files in the other class library.  This provides two assemblies with the same code, compiled for different runtime versions.  You are not duplicating code in the copy-and-paste sense, but you are producing two compiled copies from a single source.

Add As Link
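As a small illustration of what gets shared, here is a hypothetical DTO file; the names are placeholders.  The file physically lives in the full .NET class library and is included in the Silverlight class library with Add As Link, so one source file produces two runtime-specific assemblies.

```csharp
// CustomerDto.cs - defined once, compiled twice.
// The file exists in the .NET class library project and is included in the
// Silverlight class library via "Add As Link".
namespace MyApp.Shared
{
    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
    }
}
```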

Model-View-ViewModel

I like to use the Model-View-ViewModel (MVVM) pattern in Silverlight development, so how it fits with these approaches should be addressed.

The first method, which works only in C#, does not work as well with MVVM because the Silverlight and Windows Presentation Foundation (WPF) versions of INotifyPropertyChanged live in different runtime assemblies.  The non-Silverlight class library requires the WPF definition of INotifyPropertyChanged, and the different runtimes do not allow this to compile correctly.

When using the second method, the two resulting projects are compiled separately for the different runtimes and therefore work in both Silverlight and WPF.

A third scenario works when using MVVM and the first method, but the implementation of INotifyPropertyChanged needs to move from the Model to the ViewModel.  This works well for every scenario I’ve seen except an ObservableCollection bound to an editable DataGrid.  That is because the collection change events fire when the list structure changes, not when an item within the list changes.  Since the items within the list, which come directly from the Model, don’t implement INotifyPropertyChanged, handling row saves becomes a dilemma; reloading the whole grid after a single row save is not an ideal user experience (UX).  When I’ve come across this problem, I’ve used the second method.  There may be some way around it by creating a new list type that implements INotifyPropertyChanged and INotifyCollectionChanged to handle the special case, but that seems like reinventing the wheel, the snowball effect of taking the quick way of sharing data; in short, the code smells!
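As a minimal sketch of the third scenario, assuming hypothetical Customer and CustomerViewModel classes: the Model stays a plain object and the ViewModel raises the change notifications for XAML binding.

```csharp
using System.ComponentModel;

// Plain model shared across the boundary; it does not implement INotifyPropertyChanged.
public class Customer
{
    public string Name { get; set; }
}

// Hypothetical ViewModel that wraps the model and raises change notifications,
// so XAML bindings update while the model stays runtime-agnostic.
public class CustomerViewModel : INotifyPropertyChanged
{
    private readonly Customer _model;

    public CustomerViewModel(Customer model)
    {
        _model = model;
    }

    public string Name
    {
        get { return _model.Name; }
        set
        {
            if (_model.Name == value) return;
            _model.Name = value;
            OnPropertyChanged("Name");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```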

Keeping the models separate

I should outline a more proper approach to using Silverlight, services and MVVM.  Let us start with the WCF service layer and below.  This area of the overall application should have full access to the domain objects, which will be referred to as Model-A.  Model-A would typically be created using ADO.NET Entity Framework, LINQ to SQL, or whatever Object-Relational Mapping (ORM) tool you prefer.  As part of handling service calls, your service layer would also perform mapping to DTOs, which will be referred to as Model-B.  The DTOs, by definition, are the objects that are transferred over the application boundary.

Now for the client side (Silverlight) that will consume the services.  Silverlight will have service references and will receive DTOs as part of the communication with the services.  If the Message Gateway pattern is used, that is an ideal place to convert the DTOs to a model that can be used in the MVVM pattern.  This client-side-only model will be referred to as Model-C.  Model-C is the place to implement INotifyPropertyChanged so that you can take advantage of the advanced XAML binding.

Using these three models, you are clearly separating concerns and keeping each model pertinent to its application module.  Specifically, you are not letting user interface code influence anything outside of the UI, you are not restricting consumers of the services by adding UI or persistence code to the DTOs, and you are keeping the persistence code in the same place that queries are performed.

Three Models
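As a rough sketch of where the Model-B to Model-C conversion could live, here is a hypothetical Silverlight-side message gateway; CustomerServiceClient stands in for the generated service proxy, and CustomerDto/CustomerModel are placeholder Model-B/Model-C types.

```csharp
using System;

// Hypothetical message gateway on the Silverlight side: it owns the service
// calls and converts DTOs (Model-B) into client-side models (Model-C) that
// implement INotifyPropertyChanged for the ViewModels to consume.
public class CustomerGateway
{
    private readonly CustomerServiceClient _client = new CustomerServiceClient();

    public void GetCustomer(int id, Action<CustomerModel> callback)
    {
        // Silverlight service proxies are asynchronous; map the DTO when the call completes.
        _client.GetCustomerCompleted += (sender, e) =>
        {
            CustomerDto dto = e.Result;        // Model-B received from the service
            var model = new CustomerModel      // Model-C used by the ViewModels
            {
                Id = dto.Id,
                Name = dto.Name,
                Email = dto.Email
            };
            callback(model);
        };
        _client.GetCustomerAsync(id);
    }
}
```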

What about WCF RIA Services?

I mentioned that these approaches are for when WCF RIA Services isn’t being used, but what does WCF RIA Services do anyway?  Currently, in short, it creates a client-side set of classes based on the server-side set of classes and adds them to the Silverlight project during compilation.  It offers some great things, so consider using it before trying to reinvent the wheel.  At the time of writing, WCF RIA Services is in beta for Visual Studio 2008 and preview for Visual Studio 2010, so it’s subject to change.

If you have found other approaches or complications with the methods above, please let me know!