Get started with Polymer 1.0 and Sublime Text 3.0

In a previous post I explained how to get started with Polymer 1.0 and WebStorm 11. Unfortunately WebStorm is a paid product and not all of my followers can purchase a license (neither can I; I am trying to see if JetBrains will grant me a free teacher license).

So, I decided to create a new tutorial which explains how to get started with Polymer 1.0 using Sublime Text, a powerful and free text editor also used by Google's Polymer team. In this tutorial I will use more tools than in the previous one because Sublime Text is just a text editor (powerful, but still a text editor), so in order to run your app you also need additional tools such as a local web server and a task runner like Gulp.

Download the necessary tools

First of all, in order to set up the environment correctly, especially on Windows, you MUST install all of these tools before you install Sublime Text or Polymer; otherwise you will start the nightmare of “path not found” errors and the like.

  • GIT
    GIT is a well-known command-line tool for managing Git repositories. Even if you come from an SVN or TFS (Team Foundation Server) environment I would suggest you get started with Git, because even companies like Microsoft are moving their source code to GitHub or Bitbucket repositories, and Git is used by Bower to download and synchronize the Polymer components.
    https://git-scm.com/
  • NODE.js
    Node is a JavaScript runtime built on Chrome's V8 JavaScript engine. If you work with Polymer, your primary use of Node is through the npm command, which will be used alongside Bower.
    https://nodejs.org/en/
  • BOWER
    If you come from Java you may be aware of “Maven Central”, while if you come from .NET you may be aware of “NuGet”. Well, Bower is the same concept applied to web development. It allows you to download “packages” of CSS, JavaScript and HTML files packed as “components”. Without Node.js you can't use Bower, because it requires the npm (Node Package Manager) command-line tool.
    http://bower.io/

So at this point you have your core tools installed and working correctly. Now it's time to download Sublime Text 3 and configure it so that it is set up correctly. The download link is available here: https://www.sublimetext.com/3

Configure Sublime Text 3.0

After Sublime Text is installed you need to configure it so that it understands Polymer 1.0 and so that you are able to run your Polymer app from the editor.

Step #01 – Sublime Package Manager

Sublime provides an integrated console where you can run Sublime commands. You can open the console using two different techniques:

  • CTRL + `
  • Menu > View > Show Console

When the console is open, paste the script that enables Package Control, which is available here.

Step #02 – Install Sublime Plugins

Sublime comes with most of the basics out of the box, but in order to create a proper development environment for Polymer 1.0 we need some plugins:

Tip: CTRL + SHIFT + P opens the Command Palette, where you can run the Package Control “Install Package” command

image

Below is a list of plugins that I personally believe you should install in order to work with Polymer:

  • Install Package > Autoprefixer
    image

    If you want a quick way to add vendor prefixes to your CSS, you can do so with this handy plugin.
  • Install Package > Emmet
    image
    Add some useful keyboard shortcuts and snippets to your text editor.
  • Install Package > HTML-CSS-JS Prettify
    image
    This extension gives you a command to format your HTML, CSS and JS. You can even prettify your files whenever you save a file.
  • Install Package > Git Gutter
    image
    Add a marker in the gutter wherever there is a change made to a file.
  • Install Package > Gutter Color
    image
    Gutter Color shows you a small color sample next to your CSS.

Step #03 – Create a new Project

Finally, we need to create a Sublime Text project in order to keep all our files in a good structure. First of all you need a folder; in my case I work in “C:\DEV”, so I am going to have a project folder called “C:\DEV\Polymer_First” where I will save my project structure.

Open Sublime Text and point to the menu > Project > Save Project As:

image

This will create a new project file with the .sublime-project extension. Then go back into the View menu and choose Side Bar, or simply press CTRL + K, CTRL + B.

Initialize Polymer

Now we can finally initialize our Polymer project.

Click on Project > Add Folder to Project and choose your root folder, so that your workspace and project structure point to your root project folder.

Open your shell, Command Prompt or terminal, move to your project root path, which in my case is “C:\DEV\Polymer_First”, and type bower init:

image

Then download the basic Polymer setup using:

  • bower install --save Polymer/polymer#^1.2.0
  • bower install --save PolymerElements/iron-elements
  • bower install --save PolymerElements/paper-elements

At the end you should have this structure, which includes the first .html file (index.html):

image

The final step, which is the one I love most, is to install SublimeServer, which is nothing more than a very simple Python local web server.

CTRL + SHIFT + P > Install Package > SublimeServer

And voilà: now you can right-click an .html file inside the Sublime Text editor and choose “View in Browser”, which by default serves at http://localhost:8080.

Final Note
This is just an overview of how to set up Sublime Text, but if you come from a complex IDE like Visual Studio or IntelliJ I would kindly suggest you spend some time on Sublime and download the plugins that will make your life much easier. There are tons of useful plugins for web development and some specific to Polymer, like the following:

… and many more

Configure MTM 2013 to run automated tests

The Scenario

I have an MTM 2013 installation that is configured in the following way:

image

This is the workflow that is triggered when a developer checks in something:

  1. The code is built by TFS 2013, using a TFS Build Agent
  2. The agent updates a NuGet package containing the deployed application
  3. Octopus releases the package to our Staging environment
  4. MTM executes remote tests after the build is complete

Configuring MTM 2013

In order to have a successful and pleasant experience with MTM 2013 we need to pre-configure the test environment(s) properly. If you don't configure the test machines, the environments and/or the test cases correctly, you will end up with a lot of troubleshooting activities in your backlog … MTM is quite intricate.

I am writing this article in April 2014 and MTM came out a while ago, so after you install it you may find some values missing from the operating systems or browsers lists. So, first of all, let's update these value lists.

Open MTM and choose Testing Center > Test Configuration Manager > Manage configuration variables. In my case I extended the values in the following way:

image

You can also go directly to the source and change the XML entries. In order to change the correct file I would suggest you visit this useful MSDN page:
http://msdn.microsoft.com/en-us/library/ms243856.aspx

Now that I have my value lists updated I can start with the configuration process. I have highlighted below the steps you should follow in order to have a proper MTM configuration.

  1. Define the Environment
    http://msdn.microsoft.com/en-us/library/ee943321(v=vs.110).aspx
  2. Define the Test Configurations
    http://msdn.microsoft.com/en-us/library/dd286643.aspx
  3. Create or Import the Test Cases
    http://msdn.microsoft.com/en-us/library/dd380741.aspx
  4. Create a Test Plan for your backlog
    http://msdn.microsoft.com/en-us/library/dd380763.aspx
  5. Execute a Test Automation and Configure it
    http://msdn.microsoft.com/en-us/library/ee257067(v=vs.100).aspx
  6. Trigger automated tests after a build completes

Let’s have a look at each of these steps, or you can follow the MSDN link I have attached to each one of them.

#01 – Define your Environment

First of all you need to install an MTM Controller. Usually I install it in the same location as my main TFS 2013 instance (not on the build servers …). After I have installed the Controller I can start registering my agents.

For the controller and agent installations and configuration follow this link:
http://msdn.microsoft.com/en-us/library/hh546459.aspx

Note: if you don't have any agent registered in your Controller you will not be able to configure the environments. I try to keep the machines' classification identical across the build, deployment and test tools. So, in my case, I have the following structure:

Staging > Production > Cloud

And this is the expected result in my MTM configuration.

image

After you install a new Agent remember to refresh the dashboard. Also, if you are having trouble registering the Agent, try rebooting the Controller and the Agent machines; sometimes that helped me move forward with the registration.

And this is my environment overview dashboard:

image

One final note: if you choose an “external” virtualization mechanism and work without SCVMM (e.g. if you are using VMware), you will not have access to some functionality, such as rebooting, cloning and managing environments, because those operations rely on SCVMM.

#02 – Create some configurations

Configurations are used by MTM to define different test environment scenarios. Let's assume that MTM is testing a WPF client application; you probably want to know how it runs on multiple operating systems. For this and many other reasons, you can create multiple configurations inside MTM to test your application across environments, operating systems, browsers and/or SQL Server instances.

The picture below shows some of the configurations I use while testing a WPF client application. I use different operating systems, different languages and different browsers to download the ClickOnce application. It should work exactly the same across all these configurations.

image

When I am done with this part, before assigning test plans and machines to configurations, I need to complete the setup of my test harness.

#03 – Create or import the Test Cases

After you are done with the configuration of MTM it's time to prepare the backlog so that we can manage test execution. MTM requires that your tests are identified by a Test Case work item. In order to do that you have two options:

  • Manually create your test cases and associate them with an automation if you need to automate them, or create a manual test and register it in your backlog in TFS
  • Import your automation from an MsTest class library, using the tcm command: 
    tcm testcase
      /collection:CollectionUrl
      /teamproject:MyProject
      /import
      /storage:MyAssembly.dll
      /category:"MyIntegrationTestCategory"

and at the end your test cases will be created automatically for you, as the following screenshot shows:

image
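For reference, only test methods carrying the matching category are picked up by the /category filter. Below is a minimal MSTest sketch; the class and method names are illustrative, not taken from the original project.

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Illustrative fixture: tcm /import with /category:"MyIntegrationTestCategory"
// only imports test methods decorated with the matching TestCategory attribute.
[TestClass]
public class OrderServiceIntegrationTests
{
    [TestMethod]
    [TestCategory("MyIntegrationTestCategory")]
    public void OrderService_CanPersistAndReloadAnOrder()
    {
        // arrange / act against the deployed test environment ...
        Assert.IsTrue(true); // placeholder assertion
    }
}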

Now open MTM and go to Testing Center > Track > Queries and you can start searching for your test cases. In this phase you'll notice how important it is to keep a good and consistent naming convention for your tests and to work with categories:

image

Why? Because with a proper naming convention you can create a query and group your work items much more easily.

#04 – Create a Test Plan

There are multiple ways of creating a test plan. You can create a test plan manually and then add test cases one by one. This is quite useful if you are working on a new project and, sprint by sprint, you simply add the test cases as soon as you create them.

Another option, which I personally love, is to create a Test Suite composed of multiple Test Cases generated by a query. Why is this so useful? Well, first of all you don't have to touch the suite anymore, because every time you add a new test case that matches the query it will automatically be included in the Test Suite. Second, it forces you and your team to use a proper test naming convention.

In my case, I know the Area of my tests, but I want to test only the PostSharp aspects, nothing else, so I can write a query like the following:

image

and associate the generated suite query with a parent one, like I did in my projects. After a while you will end up with a series of test suites (a test harness) grouped by a certain logic. For example, you can have test suites generated by a DSL expression or by a test requirement created by a PO or a QA:

image

#05 – Run your Automation

Before running the automation you need to inform MTM about a few things. If you think about it for a moment, when you execute local tests you usually have a test settings file which is used to inform MsTest about the assemblies that need to be loaded, the plugins and other test requirements.

Inside MTM, you can inspect the settings by opening the test plan properties window.

Within this window you can choose settings for a local run but also for a remote run. In my case, when I run a remote test I need to be sure that a specific file is deployed, so this is what I have done in my configuration:

image

And when I manually trigger a Test I just ensure that the right configuration is picked up, like here:

image

And that's it. Now you know how to prepare MTM for automation, how to configure it and how to group and manage test suites. With this configuration in place you should be able to trigger automated tests after a build completes.

The last piece of the puzzle is “how do I trigger those tests after my build completes?”, which brings us to the last part of this tutorial.

#06 – Trigger automated tests after a build completes

With TFS 2013 we got a new workflow template called LabDefaultTemplate. In order to use it you have to create a new build definition and select this template.

After you have set up the new build you can go to the Process tab and specify how you want to execute your automated tests.

For example, you can choose which environment will be used for your test harness:

image

Which build output will be used for the tests: you can either trigger a new build, get the assemblies from the latest successful build, or even trigger a new customized workflow on the fly:

image

And what Test Plan you want to execute, where and how:

image

Conclusion

I hope you find this post useful, because the configuration of MTM took me a while and I truly struggled to find a decent but short post highlighting the steps needed to have MTM working properly.

TFS 2013: Create a local build

With TFS we can have two different types of build, local or remote. A remote build is triggered on a controller that doesn't reside on your local PC. A local build is triggered on your local dev agent, and it can also be “hidden” from the main build queue.

The scenario

My scenario is the following:

I have a code change and I want to test the CI build locally before checking in my changes and committing the code to the main repository. I don't want to work with shelvesets because I just don't want to keep the main build controller busy.

image

By default, for every build you queue (local or remote), the build agent will create a new workspace and download the required files that need to be built.

So in my local PC I will end up with the following situation:

image

This is really inconvenient because it replicates my workspace for each build agent I am running locally, and it won't include the changes I haven't committed to the repository.

So, first of all, we want to instruct TFS to use a different strategy when running a local build than when running a remote build.

Second, we want to instruct the build agent to execute the build within the existing workspace directory, without creating a new workspace and without downloading the latest files from source control, because our local workspace is the source.

How does TFS get the latest sources?

In order to understand my solution we need to have a look at how TFS builds the workspace and which activities in the workflow are in charge of that. If you open the default build workflow (please refer here if you don't know what I am talking about) you will find that it starts with the following activities:

image

Initialize environment

This activity sets up the initial values for the target folder, the bin folder and the test output folder. You want to get rid of this activity because it will override your workspace.

Get sources

This activity creates a new workspace locally and downloads the latest code. You can pass a name for the workspace, but unfortunately TFS will always drop the existing one and re-create it, so this activity should also be removed from your local build definition.

Convert the remote to local path

At this point we need to inform TFS about the project location. Because we didn't generate a workspace, when we ask TFS to build $/MyProject/MyFile.cs it will blow up, saying that it doesn't know how to translate a server path into a local path. Actually the real error is a bit misleading, because it just says “I can't find the file …”.

This error can easily be fixed by converting the projects to build into local paths using the following TFS activities:

image

First I ask TFS for an instance of my workspace, which is the same one I am using within Visual Studio. Then, for each project/solution configured in my build definition, I update the path. The workspace name is a build parameter in my workflow …
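To give an idea of what those activities do under the hood, here is a rough sketch written against the TFS client API; the collection URL and workspace name are placeholders, and in the real workflow the workspace name arrives as a build parameter.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class ServerToLocalPathConverter
{
    static void Main()
    {
        // Connect to the collection (URL is a placeholder).
        var collection = new TfsTeamProjectCollection(
            new Uri("http://tfs-server:8080/tfs/DefaultCollection"));
        var versionControl = collection.GetService<VersionControlServer>();

        // Reuse the developer's existing workspace instead of creating a new one.
        Workspace workspace = versionControl.GetWorkspace(
            "MyLocalWorkspace", versionControl.AuthorizedUser);

        // Translate the server path configured in the build definition into a local path.
        string localPath = workspace.GetLocalItemForServerItem("$/MyProject/MySolution.sln");
        Console.WriteLine(localPath); // e.g. C:\DEV\MyProject\MySolution.sln
    }
}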

Last piece: we still need to build against a workspace, but the existing one, so in order to accomplish this kind of build we need to change the build path of the local agent in the following way:

image

Now when you ask the workflow to convert server paths to local paths using your workspace name, it will return a path pointing to the local workspace, which is the same path configured in your build agents.

Note: multiple agents can run on the same workspace path in parallel, which means parallel builds 😉

Create new Octopus Release from TFS Build

In this article we will have a look at how we can automate the Octopus deployment using the TFS build server. Every time a member of the team performs a check-in, I want to execute a continuous integration build with the following workflow:

image

The first step is to change the default build workflow in TFS. Usually I clone the default build workflow and work with a new one, because if something goes wrong I can easily roll back to the default build workflow.

First of all we need to create a new version of our build workflow, so I clone my CI build and its own workflow:

#01 – Clone the CI build
image

#02 – Clone the Workflow

In order to clone the workflow you just have to press the NEW button and locate the original workflow, or DOWNLOAD an existing one into your workspace:
image

Now, you need to locate a specific section of the workflow. We want to create a new release of our app only if everything went fine in the build, but before the gated check-in is committed, because if we can't publish to Octopus the build still has to fail.

image

In my case I want to obtain the following output on my build in case of success or failure, plus I don’t want to publish a release if something went wrong in the build:

#01 – Build log
image

#02 – Build summary
image

I also want to output a basic log so that I can debug my build just by reading the log.

Now the fun part: I need to execute the Octo.exe command from TFS in order to publish my projects. There are a few pieces of information that I need to provide to my build workflow as parameters:

image

Finally, I have to create a new task in my workflow that will execute the command. How?

image

The trick is inside the InvokeProcess activity. In this activity I simply call Octo.exe and use the Octopus API to publish my project into the Staging environment. This is the environment where I will run my Automated Tests.

I configured the activity in the following way:

image
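To make the activity configuration more concrete, this is roughly what the InvokeProcess call boils down to, expressed as plain C#; the server URL, API key, project name and the exact Octo.exe switches are placeholders and depend on your OctopusTools version, so treat this as a sketch rather than the actual build code.

using System;
using System.Diagnostics;

class PublishToOctopus
{
    static void Main()
    {
        // Roughly what the InvokeProcess activity executes; all values are placeholders.
        var startInfo = new ProcessStartInfo
        {
            FileName = @"C:\Tools\Octo.exe",
            Arguments = "create-release --project \"MyProject\" " +
                        "--server http://octopus-server/ --apiKey API-XXXXXXXX " +
                        "--deployto Staging",
            UseShellExecute = false,
            RedirectStandardOutput = true
        };

        using (var octo = Process.Start(startInfo))
        {
            // Surface the Octo.exe output in the build log.
            Console.WriteLine(octo.StandardOutput.ReadToEnd());
            octo.WaitForExit();

            if (octo.ExitCode != 0)
                throw new InvalidOperationException("Octopus release creation failed.");
        }
    }
}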

You can find more information on how to call the Octopus API using Octo.exe here:
https://github.com/OctopusDeploy/Octopus-Tools/blob/master/readme.md

Hope this helps.

Deploy Database Project using Octopus

Octopus is a deployment tool that uses the NuGet packaging mechanism to pack your application and deploy it across multiple environments.

Unfortunately it does not (yet) have native support for Visual Studio database projects, so I had to create a sort of workaround in my project structure to allow Octopus to also deploy a “database project NuGet package”.

Visual Studio .dacpac

A Visual Studio database project is capable of generating diff scripts, a full schema deployment script and also a post-deployment script (in case you need to populate the database with some demo data, for example). When you compile a database project this is the outcome:

image

As you can see we have two different .dacpac files, one for the master database and one for my own database. A .dacpac file is what is called a “data-tier application”, and it's used within SQL Server to deploy a database schema.

Another interesting thing is the schema structure: every database project also produces a second output folder with the following structure:

image

And in the obj folder we have an additional output:

image

which contains a Model.xml file. This file can be used to integrate Entity Framework with our database schema. The postdeploy.sql file is a custom script that we generate and execute after the database deployment.

Package everything with Nuget and OctoPack

So, what do we need in order to have a proper NuGet package of our database schema? Well, first of all let's see what we should carry in our package. Usually I create a package with the following structure:

image

The steps to obtain this structure are the following:

1 – Modify the database project to run OctoPack

  <Import 
        Project="$(SolutionDir)\.nuget\NuGet.targets" 
        Condition="Exists('$(SolutionDir)\.nuget\NuGet.targets')" />
  <Import 
        Project="$(SolutionDir)\.octopack\OctoPack.targets" />
</Project>

2 – Provide a .nuspec file with the following structure:

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <metadata>
    <!-- Your file specifications -->
  </metadata>
  <files>
    <!-- The Database Schema -->
    <file src="\dbo\**\*.sql" 
            target="Content\Schema"/>
    <!-- The deployment script -->
    <file src="\obj\**\*.sql" 
            target="Content\Deploy" />
    <file src="\obj\**\*.xml" 
            target="Content\Deploy" />
    <!-- Your .dacpac location -->
    <file src="..\..\..\..\..\..\bin\**\*.dacpac" 
            target="Content\Deploy" />
  </files>
</package>

And of course your build server must have the RunOctoPack MSBuild property enabled (e.g. /p:RunOctoPack=true).

Install the package using PowerShell

The final step is to make the package “digestible” by Octopus using PowerShell. In our specific case we need a PowerShell script that can deploy the .dacpac package and execute the post-deployment script. That's quite easy.

In order to deploy a .dacpac with PowerShell we can use this script:

# Load the DAC framework assembly
add-type -path "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\Microsoft.SqlServer.Dac.dll"

# Create a DacServices object; it needs a connection string
$d = new-object Microsoft.SqlServer.Dac.DacServices "server=(local)"

# Load the dacpac from file & deploy it to the target database
$dp = [Microsoft.SqlServer.Dac.DacPackage]::Load($DacPacFile)
$d.Deploy($dp, $DatabaseName, $true)
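If you prefer to drive the same deployment from managed code instead of PowerShell (for example from a small console tool called by the deployment step), a minimal sketch using the same DAC API could look like this; the connection string, paths and names are placeholders:

using System;
using Microsoft.SqlServer.Dac; // same DAC framework assembly referenced above

class DacpacDeployer
{
    static void Main(string[] args)
    {
        string dacPacFile = args[0];   // e.g. MyDatabase.dacpac
        string databaseName = args[1]; // target database name

        var services = new DacServices("server=(local)");
        using (DacPackage package = DacPackage.Load(dacPacFile))
        {
            // upgradeExisting: true updates the schema if the database already exists
            services.Deploy(package, databaseName, upgradeExisting: true);
        }

        Console.WriteLine("Deployment of {0} completed.", databaseName);
    }
}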

In my case I set some variables in Octopus in order to be able to dynamically create the database and locate the .dacpac file.

image

The final result is visible in the Octopus deployment console, because I always pipe my PowerShell commands to | Write-Host at the end:

image

Final note: remember that the only way to stop a deployment step in Octopus from PowerShell is to return -1. In my case I wrap the code in a try/catch and return -1 when I want to stop the deployment, but you can find a better explanation here.

Do you Test your applications?

When I talk about testing with a colleague, a tech friend or anybody involved in software development, the conversation always ends up as a comparison between unit tests and integration tests. As far as I am aware, there are four types of developers: the one that practices unit testing, the one that practices integration testing, the one that practices both, and the one that doesn't practice any testing at all …

In my opinion, the mere fact that we compare the two techniques means that we don't really apply the concept of testing in our applications. Why? To make it clearer, let's have a look at what a unit test is, what an integration test is, and also what a validation test is.

Unit Test
From Wikipedia: a unit test is a test for a specific unit of source code; it is in charge of proving that the specific unit of source code is fit for use.

Integration Test
From Wikipedia: An integration test is a test that proves that multiple modules, covered by unit tests, can work together in integration.

Validation Test
From Wikipedia: A validation test is a test that proves that the entire application is able to satisfy the requirements and specifications provided by the entity who requested the application (customer).

We have three different types of tests for an application. They are quite different and independent from each other, and in my opinion you can't say an application “is working” only because you achieved 100% code coverage or because you have a nice set of behavioral tests. You can say that your application is tested if:

The modules are created using TDD, so that we know the code is 100% covered; then those modules are tested to prove they work together in a “real” environment; finally, we should also verify that the result is the one expected by the customer.

An architecture to be tested

As a Software Architect I need to be sure that the entire architecture I am in charge of is tested. This means that somehow, depending on the team/technology I am working with, I need to find a solution to test (unit, integration and validation) the final result and feel confident.

What do I mean by feeling confident? I mean that when the build is green I should be confident enough to go to my Product Owner or Product Manager and say “Hey, a new release is available!”.

So, as usual for my posts, we need a sample architecture to continue the conversation.

image 

Here we have a classic multi-layer architecture, where we have the data, the data access, the business logic and the presentation layer, logically separated. We can consider each layer a separate set of modules that need to speak to each other.

How do we test this?
In this case I use a specific pattern that I have developed over the years. First, each piece of code requires a unit test, so that I can achieve the first test requirement, the unit test requirement. Then I need to create a test environment where I can run multiple modules together, to test the integration between them. When I am satisfied with these two types of tests, I need a QA who will verify the final result and achieve the validation test as well.

Below I have created an example of the different types of tests that I may come up with in order to test the previously described architecture.

Unit Tests

First of all, the unit tests. Unit tests should be spread everywhere and should always achieve 100% code coverage. Why? Very simple: the concept behind TDD is to “write a test before writing or modifying a piece of code”; so if you code this way, you will always have 100% code coverage … Likewise, if you don't achieve 100% code coverage, you are probably doing something wrong.

Considering our data layer, for example, I should have the following structure in my application: a module per context with a corresponding test module:

image 

This is enough to achieve 100% code coverage and comply with the first test category: unit tests.

Integration Tests

Now that the functionalities are implemented and covered by a bunch of unit tests, I need to design and implement a test environment, an environment where I can deploy my code and test it in integration mode.

The schema below shows how I achieve this using Microsoft .NET and the Windows platform. I am quite sure you can achieve the same result on a Unix/Linux platform and on other systems too.

image

In my case this is the sequence of actions triggered by a check-in:

  • The code is sent to a repository (TFS, Git, …)
  • The code is built and:
    • Unit tests are executed
    • Code coverage is analyzed
    • Code quality and style are analyzed
  • Then the code is deployed to a test machine
    • The database is deployed for test
    • The web tier is deployed for test
    • The integration tests are executed and analyzed

This set of actions can make my build green, red or partially green. From here I can decide whether the code needs additional reviews or can be deployed to a staging environment. This type of approach proves that the integration between my modules works properly.

Finally, validate everything with someone else

Now, if I am lucky enough, I should have a green build, which in my specific case is automatically deployed to a repository available in the cloud. I use the TFS Preview service, and for the virtual machines I am using Azure Virtual Machines. Everything is deployed using automation tools like Octopus Deploy, which allows me to script all the steps required for my integration tests and my deployment.

The same tool is used by the Product Owner to automate the QA, Staging and Production deployment of the final product. The interface allows you to select the version and the environment that needs to be upgraded or downgraded:

I can choose to “go live”, “go live and test” and many more options, without the need to manually interact with the deployment/test process.

So, if you are still following me, the real concept of testing is expressed only when you can achieve all three phases of the test process: unit, integration and validation. The picture below represents the idea:

image

Hope it makes sense.

Domain Model in Brief

This series is getting quite interesting, so I decided to post something about Domain Model: just a little introduction to understand what Domain Model is and what it is not.

Honestly this pattern is often misunderstood. I see tons of examples of anemic models that reflect 1:1 the database structure, mirroring the classic Active Record pattern. But Domain Model is more than that …

Let's start as usual with a definition, and let's take this definition from one of the fathers of the Domain Model pattern, Martin Fowler:

An object model of the domain that incorporates both behavior and data.

But unfortunately I almost never see the first part (behaviors). It’s easy to create an object graph, completely anemic, and then hydrate the graph using an ORM that retrieves the data from the database; but if we skip the logical part of the object graph, we are still far from having a correct Domain Model.

This means that we can have anemic objects that represent our data structure, but we can’t say that we are using Domain Model.

Our Domain Model

In order to better understand the Domain Model pattern we need a sample Domain, something that aggregates together multiple objects with a common scope.

In my case, I have an Order Tracking System story that I want to implement using the Domain Model pattern. My story has a main requirement:

As a Customer I want to register my Order Number and e-mail, in order to receive updates about the Status of my Order

So, let's draft a few acceptance criteria for registering an Order:

  • An Order can be created only by a user of type Administrator and should have a valid Order id
  • An Order can be changed only by a user of type Administrator
  • Any user can query the status of an Order
  • Any user can subscribe and receive updates about Order Status changes using an e-mail address

If we want to represent this Epic using a Use Case diagram, we will probably end up with something like this:

image

Great, now that we have the specifications, we can open Visual Studio (well, I did it previously when I created the diagram …) and start to code. Or better, start investigating our Domain objects further.

It’s always better to have an iteration 0 in DDD when you start to discover the Domain and the requirements together with your team. I usually discover my Domain using mockups like the following one, where I share ideas and concepts in a fancy way.

image

Create a new Order

An Agent can create an Order and the Order should have a proper Order Id. The Order Id should reflect some business rules, so it's fine to have this piece of validation logic inside our domain object. We can also say that the Order Id is a requirement for the Order object, because we can't create an Order object without passing a valid Order Id. So, it makes absolute sense to encapsulate this concept into the Order entity.

A simple test to cover this scenario would be something like this:

[Fact]
public void OrderInitialStateIsCreated()
{
    this
        .Given(s => s.GivenThatAnOrderIdIsAvailable())
        .When(s => s.WhenCreateANewOrder())
        .Then(s => s.ThenTheOrderShouldNotBeNull())
            .And(s => s.TheOrderStateShouldBe(OrderState.Created))
        .BDDfy("Set Order initial Status");
}

image

If you don't know what [Fact] is, it is used by xUnit, an alternative test framework that can run within the Visual Studio 2013 test runner using the available NuGet test runner extension package.

From now on, we have our first Domain Entity that represents the concept of an Order. This entity will be my Aggregate Root, an entity that binds together multiple objects of my Domain and is in charge of guaranteeing the consistency of changes made to those objects.
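To keep the rest of the examples readable, this is the assumed skeleton of the Order aggregate at this point of the story; it is a sketch, not the original code: the constructor is extended later to require a User, and every OrderState value other than Created is a placeholder.

// Assumed skeleton of the aggregate root used by the examples in this post.
public enum OrderState
{
    Created,
    Processing, // placeholder value
    Shipped,    // placeholder value
    Delivered   // placeholder value
}

public class Order
{
    private readonly string orderId;
    private OrderState state;

    private Order(string orderId)
    {
        this.orderId = orderId;
        this.state = OrderState.Created;
    }

    // Factory method used by the tests, e.g. Order.Create("ABC-12345-ABC")
    public static Order Create(string orderId)
    {
        return new Order(orderId);
    }

    public OrderState State
    {
        get { return state; }
    }
}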

Domain Events

M. Fowler defines a Domain event in the following way:

A Domain Event is an event that captures something that changes the state of your application

Now, if we change the Order Status we want to be sure that an event is fired by the Order object which informs us about the Status change. Of course this event will not be triggered when we create a new Order object. The event should contain the Order Id and the new Status. In this way we have the key information about our domain object and we may not be required to repopulate the object from the database.

public void SetState(OrderState orderState)
{
    state = orderState;
    if(OrderStateChanged != null)
    {
        OrderStateChanged(new OrderStateChangedArgs(this.orderId, this.state));
    }
}

Using the Domain Event I can easily track the changes that affect my Order Status and rebuild the status at a specific point in time (I may be required to investigate an order). At the same time I can easily verify the current status of my Order by retrieving the latest event triggered by the Order object.
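For completeness, here is a minimal sketch of the event payload and declaration used above; this is an assumed shape that matches the SetState snippet, not necessarily the original code.

using System;

// Assumed shape of the event payload raised by SetState.
public class OrderStateChangedArgs
{
    public OrderStateChangedArgs(string orderId, OrderState state)
    {
        OrderId = orderId;
        State = state;
    }

    public string OrderId { get; private set; }
    public OrderState State { get; private set; }
}

// Declared inside the Order entity:
// public event Action<OrderStateChangedArgs> OrderStateChanged;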

The ubiquitous language

With my Order object created, I now need to verify that the behaviors assigned to it are correct, and the only way to verify that is to contact a Domain Expert, somebody who is an expert in the field of Order Tracking Systems. With this person I will probably have to speak a common language, the ubiquitous language mentioned by Eric Evans.

With this language in place, I can easily write some Domain specifications that guarantee the correctness of the Domain logic, and let the Domain Expert verify them.

image

This very verbose test is still a unit test and it runs in memory. So I can easily introduce BDD into my application without requiring a heavy test harness behind my code. BDDfy also allows me to produce nice documentation for the QA, in order to analyze the logical paths required by the acceptance criteria.

Plus, I am not working anymore with mechanical code but I am building my own language that I can share with my team and with the analysts.

Only an Administrator can Create and Change an Order

Second requirement: now that we know how to create an Order and what happens when somebody changes the Order Status, we can think of creating an object of type User and distinguishing between a normal user and an administrator. Again, we need to keep this concept inside the Domain graph. Why? Very simple: because we chose to work with the Domain Model pattern, so we need to keep logic, behaviors and data within the same object graph.

So, the requirements are the following:

  • When we create an Order we need to know who the user is
  • When we change the Order Status we need to know who the user is

In Domain-Driven Design we need to give the responsibility for this action to somebody; that is, overall, the logic you have to apply when designing a Domain Model: identify the responsibility and the object in charge of it.

In our case we can tell the Order object that, in order to be created, it requires an object of type User, and verify that the user passed in is of type Administrator. This is one possible option; another one could be to involve an external domain service, but in my opinion it is not a crime if the Order object is aware of the concept of being an administrator.

So below are my refactored Behavior tests:

image

The pre-conditions raised in the previous tests:

  • order id is valid
  • user not null
  • user is an administrator

are enforced inside my Order object constructor, because in DDD, and more precisely in my Use Case, it doesn't make any sense to have an invalid Order object. So the Order object is responsible for verifying that the data provided in the constructor is valid.

private Order(string orderId, User user)
{
    Condition
       .Requires(orderId)
       .Contains("-","The Order Id has invalid format");
    Condition
       .Requires(user)
       .IsNotNull("The User is null");
    Condition
       .Requires(user.IsAdmin)
       .IsTrue("The User is not Administrator");

    this.orderId = orderId;
}

Note: for my assertions I am using a nice project called Conditions that allows you to write this syntax.

Every time I have an Order object in my hands I already know that it is valid, because I can't create an invalid Order object.
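For reference, a minimal User type that would satisfy the constructor above could look like the following; it is an assumption for illustration, not the original implementation.

// Assumed minimal User type exposing the IsAdmin flag checked in the constructor.
public class User
{
    public User(string name, bool isAdmin)
    {
        Name = name;
        IsAdmin = isAdmin;
    }

    public string Name { get; private set; }
    public bool IsAdmin { get; private set; }
}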

Register for Updates

Now that we know how to create an Order, we need to wrap the event logic into a more sophisticated object. We should have a sort of event broker, also known as a message broker, able to monitor events globally.

Why? Because I can imagine that in my CQRS architecture I will have a process manager that will receive commands and execute them in sequence; while the commands execute, the process manager will also react to the events raised by the Domain Model objects involved in the process.

I followed Udi Dahan's article, available here, and found a nice and brilliant solution for creating objects that act as Domain Events.

The final result is something like this:

public void SetState(OrderState orderState)
{
    state = orderState;
    DomainEvents.Raise(
        new OrderStateChangedEvent(this.orderId, this.state));
}

The DomainEvents component is a global component that uses an IoC container in order to “resolve” the list of subscribers to a specific event.
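A minimal sketch of such a DomainEvents helper, in the spirit of Udi Dahan's article, could look like the following; the IContainer contract and the IHandles interface shown here are assumptions, not the exact code used in the post.

using System.Collections.Generic;

// Minimal sketch in the spirit of Udi Dahan's Domain Events article.
public interface IDomainEvent { }

public interface IHandles<T> where T : IDomainEvent
{
    void Handle(T domainEvent);
}

// Hypothetical abstraction over the real IoC container.
public interface IContainer
{
    IEnumerable<IHandles<T>> ResolveAll<T>() where T : IDomainEvent;
}

public static class DomainEvents
{
    // Exposed so that tests can plug in a fake container and register mocks.
    public static IContainer Container { get; set; }

    public static void Raise<T>(T domainEvent) where T : IDomainEvent
    {
        if (Container == null) return;

        foreach (var handler in Container.ResolveAll<IHandles<T>>())
        {
            handler.Handle(domainEvent);
        }
    }
}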

Next problem, notify by e-mail

When the Order Status changes, we should persist the change somewhere and we should also notify the environment, probably using a Message Broker.

We have multiple options here, I can just picture some of them, for example:

  • We can associate an Action<T> with our event, and raise the action that calls an e-mail service
  • We can create a command handler in a different layer that will send an E-mail
  • We can create a command handler in a different layer that will send a Message to a Queue, this Queue will redirect the Message to an E-mail service

The first two options are easier and synchronous, while the third one would be a more enterprise solution.

The point is that we should decide who is responsible for sending the e-mail and whether the Domain Model should be aware of this requirement.

In my opinion, in our specific case, we have an explicit requirement: whenever there is a subscribed user, we should notify them. Now, if we are smart, we can say that the Domain should notify a service and stay loosely coupled from the e-mail concept.

So we need to provide a mechanism to allow a User to register for an Update and verify that the User receives an E-mail.

image

I just need a way to provide an e-mail service, a temporary one, that I will implement later when I am done with my Domain. In this case mocking is probably the best option, and because my DomainEvents manager exposes methods using C# generics, I can easily fake any event handler I want:

var handlerMock = 
      DomainEvents.Container
         .Resolve<Mock<IHandles<OrderStateChangedEvent>>>();
handlerMock
      .Verify(x => x.Handle
                        (It.IsAny<OrderStateChangedEvent>()), 
                         Times.Once());

Now, if you think for a second, the IHandles interface could be any contract:

  • OrderStateChangedEmailHandler
  • OrderStateChangedPersistHandler
  • OrderStateChangedBusHandler

Probably you will have an infrastructure service that provides a specific handler, a data layer that provides another handler, and so on. The business logic stays in the Domain Model and the event implementation is completely loosely coupled from the Domain. Every time the Domain raises an event, the infrastructure will catch it and broadcast it to the appropriate handler(s).
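As an illustration, a hypothetical e-mail handler living in the infrastructure layer could look like this; the IEmailService contract and the event properties (OrderId, State) are assumptions made for the sketch.

// Hypothetical infrastructure handler, shown only as an illustration of one
// concrete IHandles<T> implementation.
public interface IEmailService
{
    void Send(string subject, string body);
}

public class OrderStateChangedEmailHandler : IHandles<OrderStateChangedEvent>
{
    private readonly IEmailService emailService;

    public OrderStateChangedEmailHandler(IEmailService emailService)
    {
        this.emailService = emailService;
    }

    public void Handle(OrderStateChangedEvent domainEvent)
    {
        // The subscribers' addresses would come from the subscription store.
        emailService.Send(
            "Order status changed",
            string.Format("Order {0} is now {1}", domainEvent.OrderId, domainEvent.State));
    }
}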

Conclusion

The sample shown previously is a very simple object graph (1 object) that provides behaviors and data. It fits perfectly into BDD because the business logic is behavior driven thanks to the ubiquitous language and it does not involve any external dependency (services or persistence).

We can test the logic of our Domain in complete isolation without having the disadvantage of an anemic Domain that does not carry any “real” business value. We built a language that can be shared between the team and the analysts and we have tests that speak.

A possible additional step could be to design the Commands in charge of capturing the intent of the user and start testing the complete behavior, so that we have an infrastructure where the system intercepts the commands generated by the User and uses these commands to interact with the Domain Model.

Again, even for Domain Model the rule is the same as for any other pattern (architectural or not). It does not require a data layer, it does not require a service bus and so on. The Domain Model should contain behaviors and data; the external dependencies are just that, external dependencies …

Now, if you do your homework properly, you should end up with some good test fitness around your Domain Model, like the following:

image

BDD frameworks for .NET

When I work on a project that involves business logic and behaviors, I usually prefer to write my unit tests using a behavioral approach, because in this way I can easily project my acceptance criteria into readable pieces of code.

Right now I am already on Visual Studio 2013 and I am struggling a bit with the internal test runner UX, because it doesn't really fit the behavioral frameworks I am used to working with.

So I decided to try out some behavioral frameworks with Visual Studio 2013. I have a simple piece of functionality that I want to test. When you create a new Order object, if the Order Id is not provided with the right format, the class Order will throw an ArgumentException.

If I want to translate this into a more readable BDD sentence, I would say:

Given that an Order should be created
When I create a new Order
Then the Order should be created only if the Order Id has a valid format
Otherwise I should get an error

So let’s see now the different ways of writing this test …

XBehave

XBehave is probably the first framework I tried for behavior-driven tests. The implementation is easy, but in order to get it working properly you have to write your tests on top of the xUnit test framework, and of course you also have to install the xUnit test runner plugin for Visual Studio.

In order to write a test, you need to provide the Given, When and Then implementations. Each step has a description and a delegate that contains the code to be tested. Finally, each method must be decorated with a [Scenario] attribute.

[Scenario]
[Example("ABC-12345-ABC")]
public void CreatingAnOrderWithValidData(string orderId, Order order)
{
    _
        .When("creating a new Order",
                   () => order = Order.Create(orderId))            
        .Then("the Order should be created",
                   () => order.Should().NotBeNull());
}

In this specific case I am using the [Example] attribute to provide dynamic data to my test, so that I can test multiple scenarios in one shot.

image

The problem? For each step of my test, the Visual Studio test runner shows a separate test, which is not really accurate and is quite unreadable: this is one method and one test, but in Visual Studio I see a test row for each delegate passed to XBehave.

NSpec

NSpec is another behavioral framework that allows you to write human-readable tests with C#. The installation is easy, you just need to download the NuGet package; the only problem is the interaction with Visual Studio. In order to run the tests you need to call NSpec's own test runner, which is not integrated into the Visual Studio test runner, so you have to read the test results in a console window, without the ability to interact with the failed tests.

In order to test your code you need to provide the Given, When and Then like in any other behavioral framework. The difference is that this framework makes extensive use of magic strings and it assumes the developer will do the same.

void given_an_order_is_created_with_valid_data()
{
    before = () => order = Order.Create("xxx-11111-xxx");
    it["the order should be created"] = 
         () => order.should_not_be_null();
}

And this is the result you will get in the Visual Studio Package Manager Console (as you can see, not really intuitive or easy to manage):

image 

Honestly the test syntax is quite nice and readable, but I find it annoying and silly that I have to run my tests on a console screen; come on, the Visual Studio UI integration is quite well structured and this framework should at least print the output of the tests into the test runner window.

StoryQ

I came across StoryQ just a few days ago while googling for a “fluent behavioral framework”. The project is still on www.codeplex.com but it looks pretty inactive, so take it as is. There are a few bugs and code constraints that force you into a very tight code syntax, tied to the StoryQ conventions.

The syntax is absolutely fluent and very readable; compared to other behavioral frameworks, I guess this is the closest to the developer's style.

new StoryQ.Story("Create Order")
.InOrderTo("Create a valid Order")
.AsA("User")
.IWant("A valid Order object")

.WithScenario("Valid order id")
    .Given(IHaveAValidOrderId)
        .And(OrderIsInitialized)
    .When(ANewOrderIsCreated)
    .Then(TheOrderShouldNotBeNull)

.ExecuteWithReport(MethodBase.GetCurrentMethod());
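The step methods referenced in the scenario are plain methods on the test class; below is a hypothetical sketch of what they could look like, reusing the Order.Create factory and FluentAssertions from the earlier examples (not the author's actual implementation).

// Hypothetical step implementations backing the StoryQ scenario above.
private string orderId;
private Order order;

private void IHaveAValidOrderId()
{
    orderId = "ABC-12345-ABC";
}

private void OrderIsInitialized()
{
    order = null; // nothing has been created yet
}

private void ANewOrderIsCreated()
{
    order = Order.Create(orderId);
}

private void TheOrderShouldNotBeNull()
{
    order.Should().NotBeNull(); // FluentAssertions, as in the XBehave example
}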

And the output is fully integrated into the Visual Studio test runner; just include the StoryQ story above in a test method of your preferred unit test framework.

image

The output is exactly what we need: a clear BDD output style with a reference to the status of each step (succeeded, failed, exception …).

BDDfy

BDDfy is the lightest framework of the ones I tried out, and honestly it is also the most readable and the easiest to implement. You can decide to use the fluent API or the standard conventions. With both approaches you can easily build a dictionary of steps that can be recycled over and over.

In my case I have created a simple Story in BDDfy and represented the story using C# and no magic strings:

[Fact]
public void OrderIsCreatedWithValidId()
{
    this.
        Given(s => s.OrderIdIsAvailable())
        .When(s => s.CreateANewOrder())
        .Then(s => s.OrderShouldNotBeNull())
        .BDDfy("Create a valid order");
}

And the test output fits really well into the Visual Studio test runner; plus, I can also see my User Story description in the test output, so that my QAs can easily interact with the test results:

image

I really like this layout style because I can already picture this layout into my TFS reports.

I hope you enjoyed!

Where is the Magic wand?


I always wonder where I was the day they handed out the Magic Wand. I mean, every time I join a new team/project I always have to carry with me a set of tools that make my job easier. Some of these tools are within Visual Studio, some are pre-defined architecture diagrams that I created with Visio or Balsamiq, some are links to articles and books that I share with the teams.

Unfortunately, very often I see demand for a Magic Wand, a tool that I don't carry with me simply because I don't have it and probably never will!

Now let's make the concept clearer: there are usually three different situations where you will be asked to use a Magic Wand:

  1. First, the most common, the impossible timeline. You get a request from a Stakeholder or even a Product Owner to accomplish a task in a timeframe that is humanly impossible
  2. Second, the Ferrari buyer: you are required to design a functionality or a set of functionalities with a budget that is waaaaay lower than the minimum required
  3. Third, the Tetris puzzle: you are required to add a functionality to an existing structure, but the existing structure does not allow you to implement the functionality and you don't have the time/resources/space to refactor the existing code

Of course there are many more situations where you are asked to provide a Magic Wand; the three mentioned above are common in my job, and this is how I usually try to tackle them, even if that doesn't mean my solution is always the right one …

The impossible timeline

image

You have a meeting with your Product Owner and you discover right away that he wants you to implement a very nice piece of functionality. It requires some refactoring of the current code, a bit of investigation on your side, and probably a couple of weeks between coding, testing and updating the documentation. Wow, great: you know that you'll be busy in the next months with something very cool, so you are all thrilled and start to discuss with the PO a draft of a backlog that you created previously.

Right away you discover that your backlog is simply not achievable. You estimated three sprints for a total of almost two months of work, while your PO has already told the Stakeholders that it won't take more than a couple of weeks!

In this case, the only thing we can do is draw the backlog, probably using a Story Map approach, and share it with Stakeholders and POs together, in order to show the real amount of work required. I usually work with Balsamiq and create story maps that look like the one on the left.

Using this approach you can clearly show the Stakeholders that in order to make an Order, for example, you need to create a few things: infrastructure, HTML views, REST methods and so on. For each task you can clearly identify how long it will take, and that will probably give them a better picture of what needs to be done.

The Ferrari buyer

The second situation where I am usually forced to use a Magic Wand is when I encounter the Ferrari buyers. Why do I call them Ferrari buyers? Because those types of customers/stakeholders, or whatever you want to call them, are usually looking for a Ferrari masterpiece but with the budget of an old Fiat. That's why they struggle to get a “YES” answer when they propose a project or request a new functionality.

Has it ever happened to you? You propose a project for a budget, let's say 30K a month; the Stakeholders are thrilled and excited, everybody approves your project, but usually for a third of the budget … Wow, how can we fix this now? They want the functionality, they want us to implement it, but with half of the planned resources …

In this case it is not enough to show your customers the steps required for a task; you also need to start talking about resources and time. If you can prove how long a piece of functionality takes and how much the people involved in the project cost, you can easily come up with a formula like this one:

Cost = Time * DeveloperCostPerHour

And I usually apply this concept on top of my backlog, like in the following picture:

image

OK, this is not a Magic Wand, but at least you can show the Ferrari buyer that the optional extras don't come for free!

The Tetris Puzzle

OK, this one has happened to all of us at least once; I can't believe you've never had to code a new functionality into an existing mess (oops), I mean an existing application, with some crazy acceptance criteria.

Let's take an example. A while ago I had to work with an existing Windows Forms + C# platform, used to generate mathematical results after the execution of a long-running task. The long-running task was executed outside the context of the app, on a different tier (the database tier).
The application had a major issue: the entire user interface was synchronous, so every time a request was made (data, commands), the user had to wait until the user interface was free.
You would say, what a pain …
Anyway, the request from the Product Owner was to make the user interface asynchronous, using as little code as possible, in the shortest achievable time and at the highest possible quality level.

We analysed the application and discovered that there were a lot of views, user controls and customisations from third-party libraries (like Telerik, DevExpress …) that required a complete refactoring, because they were not able to receive asynchronous data properly without raising invalid cross-thread exceptions here and there.

Well, in the end it was hard to convince the PO about a massive refactoring, but we didn't really give him a second choice, and this is really the point. If you give them a second, cheaper choice, they will always choose that one, and you will be stuck in the middle, unable to say NO and unable to finish in time.

Well, I hope you enjoyed my rant.

Raffaeu

Software Architecture and Agile

An answer for people who “make mountains out of molehills“.

First of all, what is Software Architecture and what is Agile? I need to provide a short summary definition of both terms in order to avoid “barking up the wrong tree”.

The term software architecture intuitively denotes the high level structures of a software system. It can be defined as the set of structures needed to reason about the software system, which comprise the software elements, the relations between them, and the properties of both elements and relations. The term software architecture also denotes the set of practices used to select, define or design a software architecture. Finally, the term often denotes the documentation of a system’s “software architecture”. Documenting software architecture facilitates communication between stakeholders, captures early decisions about the high-level design, and allows reuse of design components between projects.

Agile software development is a group of software development methods based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. It promotes adaptive planning, evolutionary development and delivery, a time-boxed iterative approach, and encourages rapid and flexible response to change. It is a conceptual framework that promotes foreseen interactions throughout the development cycle. The Agile Manifesto introduced the term in 2001.

The first definition refers to Software Architecture and the second one to Agile development. I guess that before even starting to bark at each other, we should have a look at the key terms that represent each topic and see whether any of them clash. So, where do Software Architecture and Agile clash?

Design
  • Software Architecture: the set of practices used to select, define or design a software architecture
  • Agile Development: requirements and solutions evolve through collaboration between self-organizing, cross-functional teams

Here I don't see any problem with these terms. Even if your architecture is dynamic, which is quite common, you still need to decide and define the architectures and patterns that you are planning to use. For example, you should specify the SOLID principles, set up the continuous integration tool, define the development process (for example Scrum), and so on …

In Software Architecture, what has been mentioned in the previous paragraph is called planning and there is nothing wrong with that, even in an Agile environment. In fact, right now I work in a successful Agile company and we have a quite heavy planning system, which works pretty well.

If you are working in a real Agile team, you will probably have stand-up meetings, grooming, a planning board, you name it … Those are all processes that require at least a piece of paper in your hands, and usually Software Architecture blends quite well into this mechanism. It brings tools like viewpoints, diagrams and process descriptions. They are visual tools, very useful for keeping the communication between Stakeholders, Product Owners and Developers clear and constant.

Decision and management
  • Software Architecture: the term software architecture also denotes the set of practices used to select, define or design a software architecture
  • Agile Development: evolutionary development and delivery, a time-boxed iterative approach, and rapid and flexible response to change

Agile development requires an adaptive and flexible attitude: you need to be ready for change and be responsive. How can you be ready for change and responsive? Well, probably by learning new techniques, experimenting with new patterns and new frameworks, reading books and working on different projects and teams.

That's probably the easiest way to be adaptive and ready for change. What's the problem with Software Architecture? None. An Agile Architect, because that's what we are talking about, will simply document and mentor the teams on new technologies and frameworks and keep their technical motivation high and productive. The urban legend that a Software Architect takes all the decisions upfront relates only to some architects who work on waterfall projects; of course it does not and cannot apply to an Agile Architect.

The little difference between a geeky developer and an architect lies in experience. Usually an architect will suggest the cheapest, fastest and most maintainable solution for the company (customer), while a geeky developer will always try to experiment with new techniques and frameworks that may just be the latest trend or fashion. You don't believe so? Look at the current development platforms. We have moved back to the architectures of 10 years ago, where we have simple HTML clients and rich business models behind a server-side technology. So, that's it? What happened to those architects that didn't move to the fancy RIA platforms? They were right, that's it. 😐

Anyway, if you want to read more about how software architecture blends into Agile development, these are some useful links: