Android Plugin Application

For one of the projects I am working on at the moment I need to implement a plug-in architecture where the main application is just a “view holder” and all the behavior of the application is provided by a set of plug-ins.

The advantage of this approach is that in the future I will not need to redistribute my entire framework through the marketplace; I can simply release new plug-ins that the customer can add, or updates to the existing ones.

Note: the code presented in this article is not optimized; its only purpose is to explain one possible way to implement a custom plug-in architecture in Android. The methods shown do not use an asynchronous pattern, so I suggest refactoring them before adopting this code in a real application.

What is a plug-in architecture?

Before diving into the code I want to take a moment to explain what I personally mean by a plug-in architecture, using the following diagram:

The picture above describes the JPF (Java Plugin Framework), a solution similar to the one we are going to implement. The architecture is primarily composed of two components.

The first component is the main application, an agnostic framework capable of loading plug-ins. The second is the plug-in registry, a repository that informs the system about the plug-ins installed on it.

In Android there are plenty of ways to implement a plug-in architecture. For example, a plug-in could be a .jar package that contains the activities and code related to it. However, I found that in Android the best way to package things is the .apk format. With an .apk I can include Activities, Fragments, resources and layouts just like a standalone application, with the advantage of being able to use some sort of contract to force the code into a certain shape.
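To give an idea of what such a contract could look like, here is a purely hypothetical sketch; the interface name and methods are my own assumptions and not part of the article's code, which will define the real contract when plug-in execution is covered:

public interface PluginContract {

    /** Human readable name shown by the host application. */
    String getPluginName();

    /** Entry point invoked by the host once the plug-in has been loaded. */
    void onPluginLoaded();
}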

Retrieve available packages

The basic project is a simple Android application with a main activity. The main activity contains a ListView that will display all the available .apk packages that we can consider plug-ins for our application.

First of all, let's see how we can retrieve the list of installed .apk packages using some basic Android APIs. I start by creating a custom ListView item class that holds the information about a package:

package ltd.netarchitectures.na_plugins;

import android.graphics.drawable.Drawable;

public class ApplicationDetail {

    private CharSequence label;
    private CharSequence name;
    private Drawable icon;

    public ApplicationDetail(CharSequence label, CharSequence name, Drawable icon) {
        this.label = label;
        this.name = name;
        this.icon = icon;
    }

    public CharSequence getLabel() {
        return label;
    }

    public CharSequence getName() {
        return name;
    }

    public Drawable getIcon() {
        return icon;
    }
}

With this custom class we can represent an available package. For now the information exposed is enough, but we could retrieve more, such as the company that made the package, its size, its version and so on. We could also inspect the package to see how many Fragments or Activities it exposes. Here the limit is your imagination.

Now we need to fetch the available packages. Because we do not have any plug-ins yet, let's start by fetching all the installed applications, just to get a first look at the Android API.

    private void loadApplication(){
        // package manager is used to retrieve the system's packages 
        packageManager = getPackageManager();
        applications = new ArrayList<ApplicationDetail>();
        // we need an intent that will be used to load the packages
        Intent intent = new Intent(Intent.ACTION_MAIN, null);
        // in this case we want to load all packages available in the launcher
        intent.addCategory(Intent.CATEGORY_LAUNCHER);
        List<ResolveInfo> availableActivities = packageManager.queryIntentActivities(intent,0);
        // for each one we create a custom list view item
        for(ResolveInfo resolveInfo:availableActivities){
            ApplicationDetail applicationDetail = new ApplicationDetail(
                    resolveInfo.loadLabel(packageManager),
                    resolveInfo.activityInfo.packageName,
                    resolveInfo.activityInfo.loadIcon(packageManager));
            applications.add(applicationDetail);
        }
    }

At the end we will have a list view populated with all the available applications (I skip the code for the custom list view adapter because it is quite easy to implement; you can find it in the source code of this article, and a minimal sketch is shown right after the screenshot). In my case they appear in alphabetical order:

image
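Since the adapter code is skipped above, here is a minimal sketch of what it might look like. The row layout list_item_application and the ids item_icon and item_label are my own assumptions, not names from the original source:

package ltd.netarchitectures.na_plugins;

import android.content.Context;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ArrayAdapter;
import android.widget.ImageView;
import android.widget.TextView;

import java.util.List;

public class ApplicationAdapter extends ArrayAdapter<ApplicationDetail> {

    public ApplicationAdapter(Context context, List<ApplicationDetail> applications) {
        // hypothetical row layout with an ImageView (item_icon) and a TextView (item_label)
        super(context, R.layout.list_item_application, applications);
    }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        if (convertView == null) {
            convertView = LayoutInflater.from(getContext())
                    .inflate(R.layout.list_item_application, parent, false);
        }
        ApplicationDetail item = getItem(position);
        // bind the icon and the label of the package to the row views
        ((ImageView) convertView.findViewById(R.id.item_icon)).setImageDrawable(item.getIcon());
        ((TextView) convertView.findViewById(R.id.item_label)).setText(item.getLabel());
        return convertView;
    }
}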

Create a plug-in

In order not to lose any of the advantages of the Android application package, we want to distribute our plug-ins as standalone packages, but we do not want the user to be able to launch them as standalone applications.

First, a short explanation of how Android works and how each package is treated by the underlying Linux system:

  • The Android operating system is a multi-user Linux system in which each app is a different user.
  • By default, the system assigns each app a unique Linux user ID (the ID is used only by the system and is unknown to the app). The system sets permissions for all the files in an app so that only the user ID assigned to that app can access them.
  • Each process has its own virtual machine (VM), so an app’s code runs in isolation from other apps.
  • By default, every app runs in its own Linux process. Android starts the process when any of the app’s components need to be executed, then shuts down the process when it’s no longer needed or when the system must recover memory for other apps.

All right, so first of all we need to create a new Android project and configure it so that it does not appear in the launcher. The project will have its own icon, activities and so on, but it cannot be started from the home launcher.

       <activity
            android:name=".MainActivity"
            android:label="@string/app_name" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <!--
                <category android:name="android.intent.category.LAUNCHER" />
                -->
            </intent-filter>
        </activity>

The application is then installed on our system, but it is not visible in the launcher, as in the following screenshot:

image

The next step is to share a custom intent category between my plug-in application and my main application and assign that category to the plug-in. In this way my list view will be populated only with the available plug-ins.

If you look at the Android manifest, the category is nothing more than a custom string, so in my plug-in manifest I replace the launcher category with my own:

<activity
    android:name=".MainActivity"
    android:label="@string/app_name" >
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="ltd.netarchitectures.PLUGIN" />
    </intent-filter>
</activity>

And when building my ListView I load only the activities that declare my custom category:

packageManager = getPackageManager();
applications = new ArrayList<>();
Intent intent = new Intent(Intent.ACTION_MAIN, null);
intent.addCategory("ltd.netarchitectures.PLUGIN");

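Putting the pieces together, the plug-in loader ends up almost identical to the loadApplication() method shown earlier, just with the custom category instead of CATEGORY_LAUNCHER; a sketch:

    private void loadPlugins(){
        packageManager = getPackageManager();
        applications = new ArrayList<ApplicationDetail>();
        // only activities that declare our custom category are returned
        Intent intent = new Intent(Intent.ACTION_MAIN, null);
        intent.addCategory("ltd.netarchitectures.PLUGIN");
        List<ResolveInfo> plugins = packageManager.queryIntentActivities(intent, 0);
        for (ResolveInfo resolveInfo : plugins) {
            applications.add(new ApplicationDetail(
                    resolveInfo.loadLabel(packageManager),
                    resolveInfo.activityInfo.packageName,
                    resolveInfo.activityInfo.loadIcon(packageManager)));
        }
    }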
And now when I start my main application only my plugins will be loaded:

image

Easy, isn’t it? In the next article I will explain how we can execute a plug-in within the same process as the main activity and how we can detect when a new plug-in is installed on the system.

Android and the transparent status bar

With the introduction of Google's Material Design we also got a new status bar design, and we can choose between three different layouts:

  • Leave the status bar as is (usually black background and white foreground)
  • Change the color of the status bar using a different tone
  • Make the status bar transparent

The picture below shows the three different solutions provided with the Material Design guidelines:

A) Classic Status Bar

B) Colored Status Bar

C) Transparent Status Bar

In this post I want to give a working solution that lets you achieve all these variations of the status bar, except for the first one, which is Android's default layout: if you don't want to comply with the Material Design guidelines, just leave the status bar black.

Change the Color of the StatusBar

The first solution we want to try is changing the color of the status bar. I have a main layout with a Toolbar component in it, and the Toolbar has a background color like the following:

image

According to Material Design, my status bar should be colored using the corresponding 700 tone variation:

image

If you are working with Material Design and Android Lollipop only, this is quite easy to accomplish: just set the proper attributes inside your Material theme styles (v21) XML file as follows:

<!-- This is the color of the Toolbar -->
<item name="colorPrimary">@color/primary</item>
<!-- Darker variant of the primary color, used by default for the status bar -->
<item name="colorPrimaryDark">@color/primary_dark</item>
<!-- The color of the status bar (Lollipop only) -->
<item name="statusBarColor">@color/primary_dark</item>

Unfortunately this solution does not make your status bar transparent, so if you have a navigation drawer the final result will look a bit odd compared to the real Material Design guidelines, like the following:

image

As you can see, the status bar simply covers the navigation drawer, which looks odd. Still, with this simple solution you can change your status bar color, although only on Lollipop.

On Android KitKat you cannot change the color of the status bar directly, because Google introduced the statusBarColor attribute only in Lollipop; you have to use the approach described below.

Make the StatusBar transparent

The second solution is to make the status bar transparent. This is easy to achieve by using the following XML attribute in your styles.xml and styles (v21).xml:

    <!-- Make the status bar translucent -->
    <style name="AppTheme" parent="AppTheme.Base">
        <item name="android:windowTranslucentStatus">true</item>
    </style>

But with only this in place we get another odd result: the Toolbar moves behind the status bar and gets cut off, as in the following screenshot:

image

So we need to add some top padding to our Toolbar, and the padding should match the status bar height, which differs from one device to another. Achieving that is quite simple. First we get the status bar height with this method:

// A method to find height of the status bar
public int getStatusBarHeight() {
    int result = 0;
    int resourceId = getResources().getIdentifier("status_bar_height", "dimen", "android");
    if (resourceId > 0) {
        result = getResources().getDimensionPixelSize(resourceId);
    }
    return result;
}

Then in our onCreate method we set the padding of the Toolbar with the following code:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_drawer);

    // Retrieve the AppCompat Toolbar
    Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
    setSupportActionBar(toolbar);

    // Set the padding to match the Status Bar height
    toolbar.setPadding(0, getStatusBarHeight(), 0, 0);
}
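One possible refinement, not part of the original snippet: if your minSdkVersion is lower than 19, the translucent flag is ignored on older devices and the extra padding would leave an empty gap, so you may want to guard the call on the API level. A sketch:

// android:windowTranslucentStatus only exists from API 19 (KitKat) onwards,
// so the extra padding is only needed on those versions
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT) {
    toolbar.setPadding(0, getStatusBarHeight(), 0, 0);
}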

And finally the status bar is transparent and our Toolbar has the right padding. Unfortunately the behavior of Lollipop and KitKat is totally different: on Lollipop the system draws a 20% translucent black overlay over the status bar, while KitKat does not draw anything, so the final result on the two systems does not match:

image

So, in order to get the same final result on both systems, we need to use a nice library called SystemBarTint, available on GitHub: https://github.com/jgilfelt/SystemBarTint. This library can re-tint the status bar with the color we need, and we can also specify a level of transparency. Because the default Material Design status bar should be 20% darker than the Toolbar color, we can simply say that the status bar should be 20% black, which we can express with the ARGB color #20000000. (You can also provide a darker color and play with the transparency; this is really up to you.)

Going back to our onCreate method, after setting the top padding of the Toolbar we can tint the status bar with the following code:

// create our manager instance after the content view is set
SystemBarTintManager tintManager = new SystemBarTintManager(this);
// enable status bar tint
tintManager.setStatusBarTintEnabled(true);
// enable navigation bar tint
tintManager.setNavigationBarTintEnabled(true);
// set the transparent color of the status bar, 20% darker
tintManager.setTintColor(Color.parseColor("#20000000"));

If we test the application again, the final result is pretty nice, and the overlap with the navigation drawer is exactly as it is supposed to be in the Material Design guidelines:

image

The Next Video shows the final results running on KitKat and Lollipop device emulators using Genymotion.

The Final result on Lollipop and KitKat

Understand Density Independent Pixels (DPI)

If you are working on a mobile application (using mobile CSS, the native Android SDK or the native iOS SDK), the first problem you are going to face is the difference between the various screen sizes. For example, if you work with Android you will notice that different devices have different screen resolutions. Android categorizes these devices into four main buckets called MDPI, HDPI, XHDPI and XXHDPI.

As I usually say, a picture is worth a thousand words:

Figure 1. As you can see, in this case we have four devices with four different pixel resolutions and four different DPI classifications.

What is DPI and why we should care?

DPI stands for dots per inch: how many pixels can be drawn in one inch of screen space.

This measure is independent of both the screen size and the pixel resolution, so screens of different sizes and resolutions may fall into the same DPI category, while screens of the same size but different resolutions may fall into different DPI categories.

Assuming we load on our phone a raster picture XX px wide, this is the result we will see at different DPIs if we keep the image at the same physical size:

image

The blurring effect is caused by the fact that on a 165 dpi screen far fewer pixels are drawn per inch (165) than on a 450 dpi screen, so the first thing we lose is the sharpness of the image.

How does Android work with DPI?

Android classifies device screens into four or more DPI buckets, based on the screen's DPI rather than its pixel resolution or physical size. The picture below shows the available DPI classifications with a sample device for each category. You can find all the available classifications on the lovely website DPI Love.

image

For Android specifically, a 160 DPI (MDPI) device has a 1:1 ratio between dp and screen pixels, while a 480 DPI (XXHDPI) device has a 1:3 ratio: the same design uses three times as many pixels as it does on a 160 DPI screen.

Based on this classification we can easily derive the following formula, which converts a pixel measure into a density-independent one using the device's DPI classification:

image

The formula can be written as dp = px * 160 / dpi. For example, 200 px on a 240 dpi (HDPI) screen corresponds to 200 * 160 / 240 ≈ 133 dp. Let's make a couple of examples.

We want to draw on the screen a rectangle that should be 200 px * 50 px on the MDPI screen we are using to mock the UI (this is what I call the default viewport).

Note: in the Android SDK you use dp to define a density-independent measure, not dpi; this is why the formula above relates px (pixels) to dp (density-independent pixels).

Considering the previous list of devices (Figure 1), these are the values I come up with in order to keep the same proportions for my rectangle across multiple devices, starting from an MDPI viewport:

    DEVICE       DPI      RATIO   SIZE (PX)      SIZE (DP)
    GALAXY ACE   MDPI     1:1     200 * 50 px    200 * 50 dp
    HTC DESIRE   HDPI     1:1.5   200 * 50 px    133 * 33 dp
    NEXUS 7      XHDPI    1:2     200 * 50 px    100 * 25 dp
    NEXUS 6      XXHDPI   1:3     200 * 50 px    67 * 16 dp
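As a quick sanity check of the table above, the formula can be turned into a few lines of Java; the bucket-to-DPI mapping (160/240/320/480) follows the standard Android values:

public class DpConverter {

    // reference DPI value for each density bucket
    public static final int MDPI = 160;
    public static final int HDPI = 240;
    public static final int XHDPI = 320;
    public static final int XXHDPI = 480;

    /** dp = px * 160 / dpi */
    public static int pxToDp(int px, int dpi) {
        return Math.round(px * 160f / dpi);
    }

    public static void main(String[] args) {
        // reproduces the width column of the table: a 200 px wide rectangle
        System.out.println(pxToDp(200, MDPI));   // 200 dp
        System.out.println(pxToDp(200, HDPI));   // 133 dp
        System.out.println(pxToDp(200, XHDPI));  // 100 dp
        System.out.println(pxToDp(200, XXHDPI)); // 67 dp
    }
}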

Regarding iOS the ratio is exactly the same except for XHDPI (retina) where the ratio is 1:2.25 and not 1:3 like in Android. iOS does not offer a classification for XXHDPI devices.

Entity Framework 6 and Collections With DDD

If you start to work with Entity Framework 6 and a real domain modeled following the SOLID principles and the most commonly known rules of DDD (Domain-Driven Design), you will quickly clash with some limits imposed by this ORM.

Let's start with a classic example of an entity, User, that we treat as a root aggregate. For this root aggregate we define the following business rules:

  1. A User entity is a root aggregate
  2. A User entity can hold zero or more UserSetting objects
  3. A UserSetting can be created only within the context of a User root aggregate
  4. A UserSetting can be modified or deleted only within the context of a User root aggregate
  5. A UserSetting holds a reference to its parent User

Based on these DDD principles I create the following classes:

A User Entity is a root aggregate

/// My Root Aggregate
public class User : IRootAggregate
{
   public Guid Id { get; set; }

   /// A root aggregate can be created
   public User() {  }
}

A User Entity can hold zero or more UserSettings

public class User : IRootAggregate
{
   public Guid Id { get; set; }
   
   public virtual ICollection<UserSetting> Settings { get; set; }

   public User()
   {
      this.Settings = new HashSet<UserSetting>();
   }
}

A UserSetting can be created or modified or deleted only within the context of a User root aggregate

    public class UserSetting
    {
       public Guid Id { get; set; }
       public string Value { get; set; }
       public User User { get; set; }
    
       internal UserSetting(User user, string value)
       {
          this.Value = value;
          this.User = user;
       }
    }
    
    /// inside the User class
    public void CreateSetting(string value)
    {
       var setting = new UserSetting(this, value);
       this.Settings.Add(setting);
    }
    
    public void ModifySetting(Guid id, string value)
    {
       var setting = this.Settings.First(x => x.Id == id);
       setting.Value = value;
    }
    
    public void DeleteSetting(Guid id)
    {
       var setting = this.Settings.First(x => x.Id == id);
       this.Settings.Remove(setting);
    }

So far so good. Now, considering that we have a foreign key between the UserSetting table and the User table, we can easily map the relationship with this class:

public class UserSettingMap : EntityTypeConfiguration<UserSetting>
{
   public UserSettingMap()
   {
       HasRequired(x => x.User)
           .WithMany(x => x.Settings)
           .Map(cfg => cfg.MapKey("UserID"))
           .WillCascadeOnDelete(true);
   }
}

Now below I want to show you the strange behavior of Entity Framework 6.

If you add a child object and save the context, Entity Framework properly generates the INSERT statement:

using (DbContext context = new DbContext())
{
   var user = context.Set<User>().First();
   user.CreateSetting("my value");

   context.SaveChanges();
}

If you update a child object, again EF is smart enough to issue the UPDATE statement you would expect:

using (DbContext context = new DbContext())
{
   var user = context.Set<User>()
                     .Include(x => x.Settings).First();
   var setting = user.Settings.First();
   setting.Value = "new value";

   context.SaveChanges();
}

The problem occurs with the DELETE. You would write the following C# statement and expect Entity Framework, like other ORMs already do, to be smart enough to issue the DELETE statement:

using (DbContext context = new DbContext())
{
   var user = context.Set<User>()
                     .Include(x => x.Settings).First();
   var setting = user.Settings.First();
   user.DeleteSetting(setting.Id);

   context.SaveChanges();
}

But you will get a nice exception like the one below:

    System.Data.Entity.Infrastructure.DbUpdateException: An error occurred while
    saving entities that do not expose foreign key properties for their
    relationships. The EntityEntries property will return null because a single
    entity cannot be identified as the source of the exception. Handling of
    exceptions while saving can be made easier by exposing foreign key properties
    in your entity types. See the InnerException for details. --->
    System.Data.Entity.Core.UpdateException: A relationship from the
    'UserSetting_User' AssociationSet is in the 'Deleted' state. Given
    multiplicity constraints, a corresponding 'UserSetting_User_Source' must
    also in the 'Deleted' state.

This means that EF does not understand that we want to delete the child object, so inside the scope of our database context we have to do this:

using (DbContext context = new DbContext())
{
   var user = context.Set<User>()
                     .Include(x => x.Settings).First();
   var setting = user.Settings.First();
   user.DeleteSetting(setting.Id);

   // explicitly inform EF that the orphan child must be deleted
   context.Entry(setting).State = EntityState.Deleted;

   context.SaveChanges();
}

I have searched a lot about this problem, and you can read from the Entity Framework team that this feature is still not available in the product:

http://data.uservoice.com/forums/72025-entity-framework-feature-suggestions/suggestions/1263145-delete-orphans-support

NHibernate Fetch strategies

In this blog post I want to illustrate how we can eager load child and parent objects into memory using NHibernate, and how to avoid the nasty problem of creating Cartesian products. I will show how this can be achieved using the three different query APIs implemented in NHibernate.

For this example I am using NHibernate 3.3 against a SQLite database so I can run some quick “in memory” tests.

The Domain Model

My model is quite straightforward: it is composed of a Person entity and two child collections, Address and Phone, as illustrated in the following picture:

image

For the Id I am using a System.Guid, for the collections I am using IList<T>, and the mapping is done with <bag> elements with the inverse="true" attribute. I omit the remaining mappings for simplicity.

<class name="Person" abstract="false" table="Person">
  <id name="Id">
    <generator class="guid.comb" />
  </id>

  <property name="FirstName" />
  <property name="LastName" />

  <bag name="Addresses" inverse="true" table="Address" cascade="all">
    <key column="PersonId" />
    <one-to-many class="Address"/>
  </bag>

  <bag name="Phones" inverse="true" table="Phone" cascade="all">
    <key column="PersonId" />
    <one-to-many class="Phone"/>
  </bag>
</class>

NHibernate Linq

With the Linq extension for NHibernate, I can easily eager load the two child collections using the following syntax:

Person expectedPerson = session.Query<Person>()
    .FetchMany(p => p.Phones)
        .ThenFetch(p => p.PhoneType)
    .FetchMany(p => p.Addresses)
    .Where(x => x.Id == person.Id)
    .ToList().First();

The problem with this query is that I get a nasty Cartesian product. Why? Let's have a look at the SQL generated by this LINQ query using NHibernate Profiler:

image

In my case I have 2 Phone records and 1 Address record that belong to the parent Person. If I have a look at the statistics I can see that the total number of rows is wrong:

image

Unfortunately, if I write the following test, it passes, which means that my Root Aggregate entity is wrongly loaded:

// wrong because address is only 1
expectedPerson.Addresses
   .Count.Should().Be(2, "There is only one address");
expectedPerson.Phones
   .Count.Should().Be(2, "There are two phones");

The solution is to split the collection fetches into separate queries without hurting database performance too much. To achieve this I use the Future syntax and tell NHibernate to build the root aggregate with three queries batched into a single round trip:

// create the first query
var query = session.Query<Person>()
      .Where(x => x.Id == person.Id);
// batch the collections
query
   .FetchMany(x => x.Addresses)
   .ToFuture();
query
   .FetchMany(x => x.Phones)
   .ThenFetch(p => p.PhoneType)
   .ToFuture();
// execute the queries in one roundtrip
Person expectedPerson = query.ToFuture().ToList().First();

Now if I profile the query, I can see that the entities are loaded using three SQL queries batched together into one single database call:

image

Regarding performance, this is the difference between the Cartesian product and the batched calls:

image

NHibernate QueryOver

The same mechanism is also available for the QueryOver<T> API. We can instruct NHibernate to create left outer joins, and get a Cartesian product, with the following statement:

Phone phone = null;
PhoneType phoneType = null;
// One query
Person expectedPerson = session.QueryOver<Person>()
    // Inner Join
    .Fetch(p => p.Addresses).Eager
    // left outer join
    .Left.JoinAlias(p => p.Phones, () => phone)
    .Left.JoinAlias(() => phone.PhoneType, () => phoneType)
    .Where(x => x.Id == person.Id)
    .TransformUsing(Transformers.DistinctRootEntity)
    .List().First();

As you can see, here I am applying the DistinctRootEntity transformer, but unfortunately the transformer does not work if you eager load more than one child collection, because the database returns more than one row for the same root aggregate.

Also in this case, the alternative is to batch the collections and send three queries to the database in one round trip:

Phone phone = null;
PhoneType phoneType = null;
// prepare the query
var query = session.QueryOver<Person>()
    .Where(x => x.Id == person.Id)
    .Future();
// eager load in one batch the first collection
session.QueryOver<Person>()
    .Fetch(x => x.Addresses).Eager
    .Future();
// second collection with grandchildren
session.QueryOver<Person>()
    .Left.JoinAlias(p => p.Phones, () => phone)
    .Left.JoinAlias(() => phone.PhoneType, () => phoneType)
    .Future();
// execute the query
Person expectedPerson = query.ToList().First();

Personally, the only thing I don't like about QueryOver<T> is the syntax: as you can see from my query, I need to create some empty alias variables for Phone and PhoneType. When I batch three or four collections I end up with three or four of these variables, which are ugly and otherwise useless.

NHibernate HQL

HQL is a great query language: it allows you to write almost any kind of complex query, and its biggest advantage compared to LINQ or QueryOver<T> is that it is fully supported by the framework.

The only downside is that it requires “magic strings”, so you must be very careful with the queries you write: it is very easy to get them wrong and receive a runtime exception.

So, also in this case, I can eager load everything in one shot, and get again a Cartesian product:

Person expectedPerson =
    session.CreateQuery(@"
    from Person p 
    left join fetch p.Addresses a 
    left join fetch p.Phones ph 
    left join fetch ph.PhoneType pt
    where p.Id = :id")
        .SetParameter("id", person.Id)
        .List<Person>().First();

Or batch three different HQL queries into one database call:

// prepare the query
var query = session.CreateQuery("from Person p where p.Id = :id")
        .SetParameter("id", person.Id)
        .Future<Person>();
// eager load the first collection
session.CreateQuery(@"from Person p
                     left join fetch p.Addresses a where p.Id = :id")
        .SetParameter("id", person.Id)
        .Future<Person>();
// eager load the second collection with its grandchildren
session.CreateQuery(@"from Person p
                     left join fetch p.Phones ph
                     left join fetch ph.PhoneType pt where p.Id = :id")
        .SetParameter("id", person.Id)
        .Future<Person>();
// execute the batch in one round trip
Person expectedPerson = query.ToList().First();

Eager Load vs Batch

I ran some tests to understand which approach performs better:

  • Running an eager query and manually cleaning up the duplicated records
  • Running a batched set of queries and getting a clean root aggregate

These are my results:

image

Surprisingly, the eager load plus C# cleanup is slower than the batched calls.

Configure MTM 2013 to run automated tests

The Scenario

I have an MTM 2013 installation that is configured in the following way:

image

This is the workflow that is triggered when a developer checks in something:

  1. The code is built by TFS 2013, using a TFS build agent
  2. The agent updates a NuGet package containing the application to deploy
  3. Octopus releases the package to our Staging environment
  4. MTM executes remote tests after the build is complete

Configuring MTM 2013

In order to have a successful and pleasant experience with MTM 2013 we need to pre-configure the test environment(s) properly. If you don't configure the test machines, the environments and/or the test cases correctly, you will end up with a lot of troubleshooting activities in your backlog: MTM is quite complex.

I am writing this article in April 2014 and MTM came out a while ago, so after you install it you may find some values missing in the operating system or browser lists. So, first of all, let's update these value lists.

Open MTM and Choose Testing Center>Test Configuration Manager>Manage configuration variables. In my case I extended the values in the following way:

image

You can also go directly to the source and change the XML entries. To find the correct file, I suggest visiting this useful MSDN page:
http://msdn.microsoft.com/en-us/library/ms243856.aspx

Now that I have my value lists updated I can start with the configuration process. I have highlighted below the steps you should follow in order to have a proper MTM configuration.

  1. Define the Environment
    http://msdn.microsoft.com/en-us/library/ee943321(v=vs.110).aspx
  2. Define the Test Configurations
    http://msdn.microsoft.com/en-us/library/dd286643.aspx
  3. Create or Import the Test Cases
    http://msdn.microsoft.com/en-us/library/dd380741.aspx
  4. Create a Test Plan for your backlog
    http://msdn.microsoft.com/en-us/library/dd380763.aspx
  5. Execute a Test Automation and Configure it
    http://msdn.microsoft.com/en-us/library/ee257067(v=vs.100).aspx
  6. Trigger automated tests after a build complete

Let’s have a look at each of these steps, or you can follow the MSDN link I have attached to each one of them.

#01 – Define your Environment

First of all you need to install a test controller. Usually I install it on the same machine as my main TFS 2013 instance (not on the build servers). After I have installed the controller I can start to register my agents.

For the controller and agent installations and configuration follow this link:
http://msdn.microsoft.com/en-us/library/hh546459.aspx

Note: if you don't have any agents registered with your controller you will not be able to configure the environments. I try to keep the machine classification identical across the build, deployment and test tools, so I have the following structure:

Staging > Production > Cloud

And this is the expected result in my MTM configuration.

image

After you install a new agent, remember to refresh the dashboard. Also, if you have trouble registering an agent, try rebooting the controller and the agent machines; sometimes that helped me move forward with the registration.

And this is my environment overview dashboard:

image

One final note: if you use an “external” virtualization mechanism and work without SCVMM (for example VMware), you will not have access to some functionality, such as rebooting, cloning and managing environments, because those operations are handled by SCVMM.

#02 – Create some configurations

Configurations are used by MTM to define different test scenarios. Let's assume MTM is testing a WPF client application: you probably want to know how it runs on multiple operating systems. For this and many other reasons, you can create multiple configurations inside MTM to test your application across environments, operating systems, browsers and/or SQL Server instances.

The picture below shows some of the configurations I use while testing a WPF client application. I use different operating systems, different languages and different browsers to download the ClickOnce application; it should work exactly the same in all these configurations.

image

When I am done with this part, before assigning test plans and machines to configurations, I need to complete the setup of my test harness.

#03 – Create or import the Test Cases

After you are done with the configuration of MTM, it's time to prepare the backlog so we can manage test execution. MTM requires each test to be identified by a test case work item. To do that you have two options:

  • Manually create your test cases and associate them with an automation if you need to automate them, or create manual tests and register them in your TFS backlog
  • Import your automations from an MSTest class library, using the tcm command:
    tcm testcase
      /collection:CollectionUrl
      /teamproject:MyProject
      /import
      /storage:MyAssembly.dll
      /category:"MyIntegrationTestCategory"

At the end you will have your test cases created automatically, as the following screenshot shows:

image

Now open MTM, go to Testing Center > Track > Queries and start searching for your test cases. In this phase you'll notice how important it is to keep a good, consistent naming convention for your tests and to work with categories:

image

Why? Because with a proper naming convention you can create a query and group your work items much more easily.

#04 – Create a Test Plan

There are multiple ways of creating a test plan. You can create a test plan manually and then add test cases one by one; this is quite useful if you are working on a new project and, sprint by sprint, you simply add the test cases as soon as you create them.

Another option, which I personally love, is to create a test suite composed of multiple test cases generated by a query. Why is this useful? First of all, you never have to touch the suite again, because every time you add a new test case it is automatically included. Second, it forces you and your team to use a proper test naming convention.

In my case, I know the Area of my tests, but I want to test only the PostSharp aspects, nothing else, so I can write a query like the following:

image

and associate the generated query suite with a parent one, as I did in my projects. After a while you will end up with a series of test suites (test harnesses) grouped by a certain logic. For example, you can have test suites generated by a DSL expression or by a test requirement created by a PO or QA:

image

#05 – Run your Automation

Before running the automation you need to tell MTM a few things. When you execute local tests you usually have a test settings file that informs MSTest about the assemblies to load, plug-ins and other test requirements.

Inside MTM, you can inspect the settings by opening the test plan properties window.

In this window you can choose settings for a local run but also for a remote run. In my case, when I run a remote test I need to be sure that a specific file is deployed, so this is what I did in my configuration:

image

And when I manually trigger a Test I just ensure that the right configuration is picked up, like here:

image

and that’s it. Now you know how to prepare MTM for automation, how to configure it and how to group and manage test suites. With this configuration in place you should be able to trigger tests in automation after a build is complete.

The last piece of the puzzle is “how do I trigger those tests after my build is complete?”, which brings us to the final part of this tutorial.

#06 – Trigger automated tests after a build complete

With TFS 2013 we got a new workflow template called LabDefaultTemplate. To use it you have to create a new build definition and select this template.

After you have set up the new build you can go to the Process tab and specify how you want to execute your automated tests.

For example, you can choose which environment will be used for your test harness:

image

You can choose which build output will be used for the tests: you can trigger a new build, take the assemblies from the latest successful build, or even trigger a customized workflow on the fly:

image

And what Test Plan you want to execute, where and how:

image

Conclusion

I hope you find this post useful; the configuration of MTM took me a while, and I struggled to find a decent, short post highlighting the steps needed to get MTM working properly.

TFS 2013 Create a local build

With TFS we can have two different types of build, local or remote. A remote build is triggered on a controller that does not reside on your local PC. A local build is triggered on your local dev agent and can also be “hidden” from the main build queue.

The scenario

My scenario is the following:

I have a code change and I want to run the CI build locally before checking in my changes and committing the code to the main repository. I don't want to work with shelvesets because I don't want to keep the main build controller busy.

image

So, for every build you queue (local or remote) the build agent will just create a new workspace and download the required files that need to be built.

So in my local PC I will end up with the following situation:

image

This is really inconvenient because it replicates my workspace for each build agent I run locally, and it won't include the changes I haven't committed to the repository.

So, first of all we want to instruct TFS to use a different strategy when running a local build than the strategy when running a remote build.

Second, we want to instruct the build agent to execute the build within the existing workspace directory, without creating a new workspace and without downloading the latest files from source control, because our local workspace is the source.

How does TFS get the latest sources?

In order to understand my solution, we need to look at how TFS builds the workspace and which workflow activities are in charge of that. If you open the default build workflow (please refer here if you don't know what I am talking about), you will find that it starts with the following activities:

image

Initialize environment

This activity sets up the initial values for the target folder, the bin folder and the test output folder. You want to get rid of this activity because it will override your workspace.

Get sources

This activity creates a new workspace locally and downloads the latest code. You can pass a name for the workspace, but unfortunately TFS will always drop the existing one and re-create it, so this activity should also be removed from your local build definition.

Convert the remote to local path

At this point we need to inform TFS about the project location. Because we didn't generate a workspace, when we ask TFS to build $/MyProject/MyFile.cs it will fail, saying that it doesn't know how to translate a server path into a local path. The real error is a bit misleading because it just says “I can't find the file …”.

This error can easily be fixed by converting the projects to build into local paths using the following TFS activities:

image

First I ask TFS for an instance of my workspace, which is the same one I use inside Visual Studio. Then, for each project/solution configured in my build definition, I update the path. The workspace name is a build parameter in my workflow.

Last piece: we still need to build against a workspace, but the existing one, so we need to change the build path of the local agent in the following way:

image

Now when you ask the workflow to convert server paths to local paths using your workspace name, it will return a path pointing to the local workspace, which is the same path configured in your build agents.

Note: multiple agents can run on the same workspace path in parallel, which means parallel builds.

Create new Octopus Release from TFS Build

In this article we will have a look at how we can automate Octopus deployments from the TFS build server. Every time a member of the team performs a check-in, I want to execute a continuous build with the following workflow:

image

The first step is to change the default build workflow in TFS. Usually I clone the default build workflow and work on the copy, because if something goes wrong I can easily roll back to the default one.

First of all we need to create a new version of our build workflow, so I clone my CI build and its own workflow:

#01 – Clone the CI build
image

#02 – Clone the Workflow

In order to clone the workflow you just have to press the NEW button and locate the original workflow, or DOWNLOAD an existing one into your workspace:
image

Now you need to locate a specific section of the workflow. We want to create a new release of our app only if everything went fine in the build, but before the gated check-in is committed, because if we can't publish to Octopus the build still has to fail.

image

In my case I want to obtain the following output on my build in case of success or failure, plus I don’t want to publish a release if something went wrong in the build:

#01 – Build log
image

#02 – Build summary
image

I also want to output a basic log so that I can debug my build just by reading the log.

Now the fun part: I need to execute the Octo.exe command from TFS in order to publish my projects. I need a few pieces of information that I will provide to my build workflow as parameters:

image

Finally, I have to create a new task in my workflow that will execute the command. How?

image

The trick is the InvokeProcess activity. In this activity I simply call Octo.exe, which uses the Octopus API to publish my project to the Staging environment, the environment where I will run my automated tests.

I configured the activity in the following way:

image

You can find more information on how to call the Octopus API using Octo.exe here:
https://github.com/OctopusDeploy/Octopus-Tools/blob/master/readme.md

Hope this helps.

Deploy Database Project using Octopus

Octopus is a deployment tool that uses the NuGet packaging mechanism to pack your application and deploy it over multiple environments.

Unfortunately it does not (yet) have native support for Visual Studio database projects, so I had to provide a workaround in my project structure to allow Octopus to also deploy a “database project NuGet package”.

Visual Studio .dacpac

A Visual Studio database project can generate diff scripts, a full schema deployment script and also a post-deployment script (in case you need to populate the database with some demo data, for example). When you compile a database project this is the outcome:

image

As you can see we have two different .dacpac files, one for the master database and one for my own database. A .dacpac file is what is called a “data-tier application”, and it is used by SQL Server to deploy a database schema.

Another interesting thing is the schema structure: every database project also has a second output folder with the following structure:

image

And in the obj folder we have an additional output:

image

which contains a Model.xml file. This file can be used to integrate Entity Framework with our database schema. The postdeploy.sql file is a custom script that we generate and execute after the database deployment.

Package everything with Nuget and OctoPack

So, what do we need in order to have a proper NuGet package of our database schema? First of all, let's see what we should carry in the package. Usually I create a package with the following structure:

image

The steps to obtain this structure are the following:

1 – Modify the database project to run OctoPack

  <Import 
        Project="$(SolutionDir)\.nuget\NuGet.targets" 
        Condition="Exists('$(SolutionDir)\.nuget\NuGet.targets')" />
  <Import 
        Project="$(SolutionDir)\.octopack\OctoPack.targets" />
</Project>

2 – Provide a .nuspec file with the following structure:

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <metadata>
    <!-- Your file specifications -->
  </metadata>
  <files>
    <!-- The Database Schema -->
    <file src="\dbo\**\*.sql" 
            target="Content\Schema"/>
    <!-- The deployment script -->
    <file src="\obj\**\*.sql" 
            target="Content\Deploy" />
    <file src="\obj\**\*.xml" 
            target="Content\Deploy" />
    <!-- Your .dacpac location -->
    <file src="..\..\..\..\..\..\bin\**\*.dacpac" 
            target="Content\Deploy" />
  </files>
</package>

And of course make sure your build server has the RunOctoPack variable enabled.

Install the package using Powershell

The final step is to make the package “digestible” by Octopus using PowerShell. In our specific case we need a PowerShell script that can deploy the .dacpac package and execute the post-deployment script. That's quite easy.

In order to install a .dacpac with PowerShell we can use this script:

# load Dac Pac
add-type -path "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\Microsoft.SqlServer.Dac.dll"

# make DacServices object, needs a connection string 
$d = new-object Microsoft.SqlServer.Dac.DacServices "server=(local)"

# Load the dacpac from file & deploy it to the target database
$dp = [Microsoft.SqlServer.Dac.DacPackage]::Load($DacPacFile) 
$d.Deploy($dp, $DatabaseName, $true)

In my case I set some variables in Octopus in order to be able to dynamically create the database and locate the .dacpac file.

image

The final result is visible in the Octopus deployment console, because I always pipe my PowerShell commands to | Write-Host at the end:

image

Final note: remember that the way to stop a deployment step in Octopus from PowerShell is to return –1. In my case I wrap the code in a try/catch and return –1 when I want to stop the deployment, but you can find a better explanation here.

Do you Test your applications?

When I talk about testing with a colleague, a tech friend or anybody involved in software development, the conversation always ends up as a comparison between unit tests and integration tests. As far as I can tell, there are four types of developers: those who practice unit testing, those who practice integration testing, those who practice both, and those who don't practice any testing at all.

In my opinion, the mere fact that we compare the two techniques means that we don't really apply the concept of testing in our applications. Why? To make it clearer, let's look at what a unit test, an integration test and a validation test are.

Unit Test
From Wikipedia: A unit test is a test of a specific unit of source code; it is in charge of proving that the unit is fit for use.

Integration Test
From Wikipedia: An integration test is a test that proves that multiple modules, covered by unit tests, can work together in integration.

Validation Test
From Wikipedia: A validation test is a test that proves that the entire application is able to satisfy the requirements and specifications provided by the entity who requested the application (customer).

So we have three different types of tests for an application. They are quite different and independent from each other, and in my opinion you can't say an application “is working” only because you achieved 100% code coverage or because you have a nice set of behavioral tests. You can say that your application is tested if:

the modules are created using TDD so that we know the code is 100% covered; those modules are then tested to verify that they work together in a “real” environment; and finally the result is verified to be the one expected by the customer.

An architecture to be tested

As a software architect I need to be sure that the entire architecture I am in charge of is tested. This means that, depending on the team and technology I am working with, I need to find a way to test (unit, integration and validation) the final result and feel confident about it.

What do I mean by feeling confident? I mean that when the build is green I should be able to go to my Product Owner or Product Manager and say “Hey, a new release is available!”.

So, as usual for my posts, we need a sample architecture to continue the conversation.

image 

Here we have a classic multi-layer architecture, with the data, data access, business logic and presentation layers logically separated. We can consider each layer a separate set of modules that need to talk to each other.

How do we test this?
In this case I use a specific approach that I developed over the years. First, each piece of code requires a unit test, so that I satisfy the first test requirement, the unit test. Then I need to create a test environment where I can run multiple modules together, to test the integration between them. When I am satisfied with these two types of tests, I need a QA who will verify the final result and achieve the validation test as well.

Below is an example of the different types of tests I may come up with in order to test the previously described architecture.

Unit Tests

First of all, the unit tests. Unit tests should be spread everywhere and always achieve 100% code coverage. Why? Very simple: the concept behind TDD is to “write a test before writing or modifying a piece of code”, so if you code this way you will always have 100% code coverage. Conversely, if you don't achieve 100% code coverage, you are probably doing something wrong.

Considering our data layer, for example, I should have the following structure in my application: a module per context with a corresponding test module:

image 

This is enough to achieve 100% code coverage and comply with the first test category: unit tests.

Integration Tests

Now that the functionality is implemented and covered by unit tests, I need to design and implement a test environment where I can deploy my code and test it in integration.

The schema below shows how I achieve this using Microsoft .NET and the Windows platform. I am quite sure you can achieve the same result on a Unix/Linux platform and on other systems too.

image

In my case this is the sequence of actions triggered by a check-in:

  • The code is sent to a repository (TFS, Git, …)
  • The code is built and:
    • Unit tested
    • Code coverage analysis
    • Code quality and style analysis
  • Then the code is deployed to a test machine
    • The database is deployed for test
    • The web tier is deployed for test
    • The integration tests are executed and analyzed

This set of actions can make my build green, red or partially green. From here I can decide whether the code needs additional review or can be deployed to a staging environment. This approach proves that the integration between my modules works properly.

Finally, validate everything with someone else

If I am lucky enough I should now have a green build which, in my case, is automatically deployed to a repository available in the cloud. I use the TFS Preview service, and for the virtual machines I use Azure Virtual Machines. Everything is deployed using automation tools like Octopus Deploy, which lets me script all the steps required for my integration tests and my deployments.

The same tool is used by the Product Owner to automate the QA, Staging and Production deployment of the final product. The interface allows you to select the version and the environment that needs to be upgraded or downgraded:

I can choose to “go live”, “go live and test” and many more options, without needing to manually interact with the deployment/test process.

So, if you are still following me, the real concept of testing is expressed only when you can easily achieve all three phases of the test process: unit, integration and validation. The picture below represents the idea:

image

Hope it makes sense.