Category: Design Pattern

Do you Test your applications?

When I talk about testing with a colleague, a tech friend or anybody involved in software development, the conversation always ends up with a comparison between Unit Tests and Integration Tests. As far as I am aware, there are four types of developers: the one that practices unit tests, the one that practices integration tests, the one that practices both, and the one that doesn’t practice any test at all …

In my opinion, the mere fact that we compare the two techniques means that we don’t really apply the concept of testing in our applications. Why? To make it clearer, let’s look at what a Unit Test is, what an Integration Test is, but also what a Validation Test is.

Unit Test
From Wikipedia: A unit test is a test for a specific unit of source code; it is in charge of proving that the specific unit of source code is fit for use.

Integration Test
From Wikipedia: An integration test is a test that proves that multiple modules, covered by unit tests, can work together in integration.

Validation Test
From Wikipedia: A validation test is a test that proves that the entire application is able to satisfy the requirements and specifications provided by the entity who requested the application (customer).

We have three different types of tests for an application. They are quite different and independent from each other, and in my opinion you can’t say an application “is working” only because you achieved 100% code coverage or because you have a nice set of behavioral tests. You can say that your application is tested if:

The modules are created using TDD so that we know the code is 100% covered, those modules are then tested to work together in a “real” environment, and finally we verify that the result is the one expected by the customer.

An architecture to be tested

As a Software Architect I need to be sure that the entire architecture I am in charge of is tested. This means that somehow, depending on the team and technology I am working with, I need to find a solution to test (unit, integration and validation) the final result and feel confident.

What do I mean by feel confident? I mean that when the build is green I should be confident enough to go to my Product Owner or Product Manager and say, “Hey, a new release is available!”.

So, as usual for my posts, we need a sample architecture to continue the conversation.


Here we have a classic multi-layer architecture, where the data, the data access, the business logic and the presentation layer are logically separated. We can consider each layer a separate set of modules that need to speak to each other.

How do we test this?
In this case I use a specific pattern that I developed over the years. First, each piece of code requires a unit test, so that I can achieve the first test requirement, the unit test requirement. Then I need to create a test environment where I can run multiple modules together, to test the integration between them. When I am satisfied with these two types of tests, I need a QA that will verify the final result and achieve the validation test as well.

Below I have created an example of the different types of tests that I may come up with in order to test the previously described architecture.

Unit Tests

First of all, the unit tests. Unit tests should be spread everywhere and always achieve 100% code coverage. Why? Very simple: the concept behind TDD is to “write a test before writing or modifying a piece of code”; if you code this way, you will always have 100% code coverage … Conversely, if you don’t achieve 100% code coverage, you are probably doing something wrong.

Considering our data layer, for example, I should have the following structure in my application: a module per context with a corresponding test module:


This is enough to achieve 100% code coverage and comply with the first test category: unit tests.
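To make the idea concrete, here is a minimal sketch of what a data-layer module and its companion test module could look like; all names here are hypothetical and not taken from the real project:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical data-layer module: one context, one class.
public class CustomerRepository
{
    private readonly Dictionary<int, string> customers = new Dictionary<int, string>();

    public void Add(int id, string name)
    {
        if (string.IsNullOrEmpty(name))
            throw new ArgumentException("A customer name is required");
        customers[id] = name;
    }

    public string FindById(int id)
    {
        string name;
        return customers.TryGetValue(id, out name) ? name : null;
    }
}

// Companion test module: written first (TDD), one test per behavior.
public static class CustomerRepositoryTests
{
    public static void AddedCustomerCanBeFound()
    {
        var repository = new CustomerRepository();
        repository.Add(1, "Alice");
        if (repository.FindById(1) != "Alice")
            throw new Exception("AddedCustomerCanBeFound failed");
    }
}
```

Driving every branch of the module from a test like this is what makes the 100% coverage goal realistic.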

Integration Tests

Now that the functionalities are implemented and covered by a bunch of unit tests, I need to design and implement a test environment, an environment where I can deploy my code and test it in integration mode.

The schema below shows how I achieve this using Microsoft .NET and the Windows platform. I am quite sure you can achieve the same result on a Unix/Linux platform and on other systems too.


In my case this is the sequence of actions triggered by a check-in:

  • The code is sent to a repository (TFS, Git, …)
  • The code is built and:
    • Unit tested
    • Analyzed for code coverage
    • Analyzed for code quality and style
  • Then the code is deployed to a test machine
    • The database is deployed for test
    • The web tier is deployed for test
    • The integration tests are executed and analyzed

This set of actions can make my build green, red or partially green. From here I can decide if the code needs additional reviews or if it can be deployed to a staging environment. This type of approach is able to prove that the integration between my modules is working properly.
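As a sketch of what such an integration test could look like (the types are invented for illustration): instead of mocking, it wires a real business module to a real data module and checks that they work together:

```csharp
using System;
using System.Collections.Generic;

// Data-layer module (a real implementation, not a mock).
public class OrderStore
{
    private readonly List<string> orders = new List<string>();
    public void Save(string orderId) { orders.Add(orderId); }
    public int Count { get { return orders.Count; } }
}

// Business-layer module that depends on the data layer.
public class OrderService
{
    private readonly OrderStore store;
    public OrderService(OrderStore store) { this.store = store; }

    public void PlaceOrder(string orderId)
    {
        if (!orderId.Contains("-"))
            throw new ArgumentException("Invalid order id format");
        store.Save(orderId);
    }
}

// Integration test: both modules wired together, no fakes.
public static class OrderIntegrationTest
{
    public static void PlacingAnOrderPersistsIt()
    {
        var store = new OrderStore();
        var service = new OrderService(store);
        service.PlaceOrder("ORD-001");
        if (store.Count != 1)
            throw new Exception("Order was not persisted");
    }
}
```

In the real environment the store would be the deployed test database, which is exactly what the deployment steps above prepare.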

Finally, validate everything with someone else

Now, if I am lucky enough I should have a green build, which in my specific case is automatically deployed to a repository available in the cloud. I use the TFS Preview service, and for the virtual machines I am using Azure Virtual Machines. Everything is deployed using automation tools like Octopus Deploy, which allows me to script all the steps required for my integration tests and for my deployment steps.

The same tool is used by the Product Owner to automate the QA, Staging and Production deployment of the final product. The interface allows you to select the version and the environment that needs to be upgraded or downgraded:

I can choose to “go live”, “go live and test” and many more options, without the need to manually interact with the deployment/test process.

So, if you are still following me, the real concept of testing is expressed only when you can easily achieve all three phases of the test process: unit, integration and validation. The picture below represents the idea:


Hope it makes sense.

Domain Model in Brief

This series is getting quite interesting, so I decided to post something about Domain Model: just a little introduction to understand what Domain Model is and what it is not.

Honestly, this pattern is often misunderstood. I see tons of examples of anemic models that reflect the database structure 1:1, mirroring the classic Active Record pattern. But Domain Model is more than that …

Let’s start as usual with a definition, and we take this definition from Martin Fowler, who documented the Domain Model pattern in Patterns of Enterprise Application Architecture:

An object model of the domain that incorporates both behavior and data.

Unfortunately, I almost never see the first part (behavior). It’s easy to create a completely anemic object graph and then hydrate it using an ORM that retrieves the data from the database; but if we skip the logical part of the object graph, we are still far from having a correct Domain Model.

This means that we can have anemic objects that represent our data structure, but we can’t say that we are using Domain Model.

Our Domain Model

In order to better understand the Domain Model pattern we need a sample Domain, something that aggregates together multiple objects with a common scope.

In my case, I have an Order Tracking System story that I want to implement using the Domain Model pattern. My story has a main requirement:

As a Customer I want to register my Order Number and e-mail, in order to receive updates about the Status of my Order

So, let’s draw a few acceptance criteria for registering an Order:

  • An Order can be created only by a user of type Administrator and should have a valid Order id
  • An Order can be changed only by a user of type Administrator
  • Any user can query the status of an Order
  • Any user can subscribe and receive updates about Order Status changes using an e-mail address

If we want to represent this Epic using a Use Case diagram, we will probably end up with something like this:


Great, now that we have the specifications, we can open Visual Studio (well, I did that earlier when I created the diagram …) and start to code. Or better, start to investigate our Domain objects further.

It’s always better to have an iteration 0 in DDD when you start to discover the Domain and the requirements together with your team. I usually discover my Domain using mockups like the following one, where I share ideas and concepts in a fancy way.


Create a new Order

An Agent can create an Order, and the Order should have a proper Order Id. The Order Id should reflect some business rules, so it’s ok to have this piece of validation logic inside our domain object. We can also say that the Order Id is a requirement for the Order object, because we can’t create an Order object without passing a valid Order Id. So, it makes absolute sense to encapsulate this concept into the Order entity.

A simple test to cover this scenario would be something like this:

[Fact]
public void OrderInitialStateIsCreated()
{
    this.Given(s => s.GivenThatAnOrderIdIsAvailable())
        .When(s => s.WhenCreateANewOrder())
        .Then(s => s.ThenTheOrderShouldNotBeNull())
            .And(s => s.TheOrderStateShouldBe(OrderState.Created))
        .BDDfy("Set Order initial Status");
}


If you don’t know what [Fact] is: it is used by xUnit, an alternative test framework that can run within the Visual Studio 2013 test runner using the available NuGet test runner extension package.

From now on we have our first Domain Entity, which represents the concept of an Order. This entity will be my Aggregate Root: an entity that binds together multiple objects of my Domain and is in charge of guaranteeing the consistency of changes made to those objects.
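A minimal sketch of such an aggregate root, assuming an OrderState enum and an Order entity whose initial status is Created (the member names are my guesses, not the post’s actual code):

```csharp
using System;

public enum OrderState { Created, Processing, Shipped, Delivered }

// Aggregate root: the single entry point for changes to the Order graph.
public class Order
{
    private readonly string orderId;
    private OrderState state;

    public Order(string orderId)
    {
        // An Order cannot exist without a valid Order Id.
        if (string.IsNullOrEmpty(orderId))
            throw new ArgumentException("A valid Order Id is required");
        this.orderId = orderId;
        this.state = OrderState.Created;  // initial status
    }

    public string OrderId { get { return orderId; } }
    public OrderState State { get { return state; } }
}
```

All other objects of the Order graph would be reached and modified only through this entity, which is what keeps the changes consistent.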

Domain Events

Martin Fowler defines a Domain Event in the following way:

A Domain Event captures the memory of something interesting which affects the domain

Now, if we change the Order Status, we want to be sure that an event is fired by the Order object informing us about the Status change. Of course this event will not be triggered when we create a new Order object. The event should contain the Order Id and the new Status; in this way we have the key information of our domain object and we may not be required to repopulate the object from the database.

public void SetState(OrderState orderState)
{
    state = orderState;
    if (OrderStateChanged != null)
        OrderStateChanged(new OrderStateChangedArgs(this.orderId, this.state));
}

Using the Domain Event I can easily track the changes that affect my Order Status and rebuild the status at a specific point in time (I may be required to investigate the order). At the same time, I can easily verify the current status of my Order by retrieving the latest event triggered by the Order object.
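A sketch of that idea: record each OrderStateChangedArgs as it is raised, and read the current status back from the latest event. The history class and the timestamp are my additions for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Event payload carrying the key information: Order Id and new Status.
public class OrderStateChangedArgs
{
    public string OrderId { get; private set; }
    public string State { get; private set; }
    public DateTime OccurredAt { get; private set; }

    public OrderStateChangedArgs(string orderId, string state)
    {
        OrderId = orderId;
        State = state;
        OccurredAt = DateTime.UtcNow;
    }
}

// Hypothetical history: rebuilds the Order status from its events.
public class OrderHistory
{
    private readonly List<OrderStateChangedArgs> events = new List<OrderStateChangedArgs>();

    public void Record(OrderStateChangedArgs e) { events.Add(e); }

    // Current status = the state carried by the most recent event.
    public string CurrentState() { return events.Last().State; }

    // Status at a point in time, useful to investigate the order's past.
    public string StateAt(DateTime instant)
    {
        return events.Where(e => e.OccurredAt <= instant).Last().State;
    }
}
```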

The ubiquitous language

With my Order object created, I now need to verify that the behaviors assigned to it are correct, and the only way to verify that is to contact a Domain Expert, somebody expert in the field of Order Tracking Systems. With this person I will probably have to speak a common language, the ubiquitous language mentioned by Eric Evans.

With this language in place, I can easily write some Domain specifications that guarantee the correctness of the Domain logic, and let the Domain Expert verify them.


This very verbose test is still a unit test and it runs in memory. So I can easily introduce BDD into my application without requiring a heavy test framework behind my code. BDDfy also allows me to produce nice documentation for the QA, in order to analyze the logical paths required by the acceptance criteria.

Plus, I am no longer working with mechanical code; I am building my own language that I can share with my team and with the analysts.

Only an Administrator can Create and Change an Order

Second requirement: now that we know how to create an Order and what happens when somebody changes the Order Status, we can think of creating an object of type User and distinguish between a normal user and an administrator. We need to keep this concept inside the Domain graph as well. Why? Very simple: because we chose to work with the Domain Model pattern, so we need to keep logic, behaviors and data within the same object graph.

So, the requirements are the following:

  • When we create an Order, we need to know who the user is
  • When we change the Order Status, we need to know who the user is

In Domain Driven Design we need to give the responsibility for this action to somebody; that is, overall, the logic you have to apply when designing a Domain Model: identify the responsibility and the object in charge of it.

In our case we can tell the Order object that, in order to be created, it requires an object of type User, and verify that the user passed is of type Administrator. This is one possible option; another one could be to involve an external domain service, but in my opinion it is not a crime if the Order object is aware of the concept of being an administrator.

So below are my refactored Behavior tests:


The pre-conditions raised in the previous tests:

  • order id is valid
  • user not null
  • user is an administrator

are checked inside my Order object constructor, because in DDD, and more precisely in my Use Case, it doesn’t make any sense to have an invalid Order object. So the Order object is responsible for verifying that the data provided in the constructor is valid.

private Order(string orderId, User user)
{
    Condition.Requires(orderId, "orderId")
        .Contains("-", "The Order Id has invalid format");
    Condition.Requires(user, "user")
        .IsNotNull("The User is null");
    Condition.Requires(user.IsAdministrator, "isAdministrator")
        .IsTrue("The User is not Administrator");

    this.orderId = orderId;
}

Note: for my assertions I am using a nice project called Conditions that allows you to write this syntax.

Every time I have an Order object in my hands, I already know that it is valid, because I can’t create an invalid Order object.

Register for Updates

Now that we know how to create an Order, we need to wrap the event logic into a more sophisticated object. We should have a sort of event broker, also known as a message broker, able to monitor events globally.

Why? Because I can imagine that in my CQRS architecture I will have a process manager that receives commands and executes them in sequence; while the commands execute, the process manager will also interact with the events raised by the Domain Models involved in the process.

I followed Udi Dahan’s article available here, and I found a nice and brilliant solution for creating objects that act like Domain Events.

The final result is something like this:

public void SetState(OrderState orderState)
{
    state = orderState;
    DomainEvents.Raise(
        new OrderStateChangedEvent(this.orderId, this.state));
}

The DomainEvents component is a global component that uses an IoC container in order to “resolve” the list of subscribers to a specific event.
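Conceptually, the component behaves like the following simplified sketch; here the subscribers are kept in an in-memory list, whereas the real component resolves them through the IoC container:

```csharp
using System;
using System.Collections.Generic;

// Simplified DomainEvents broker. In the real implementation the
// handlers are resolved from the IoC container instead of a list.
public static class DomainEvents
{
    private static readonly List<Delegate> handlers = new List<Delegate>();

    public static void Register<T>(Action<T> handler)
    {
        handlers.Add(handler);
    }

    public static void Raise<T>(T domainEvent)
    {
        foreach (var handler in handlers)
        {
            // Only invoke the handlers registered for this event type.
            var typed = handler as Action<T>;
            if (typed != null)
                typed(domainEvent);
        }
    }
}
```

Because the broker is resolved globally, the Order entity can raise events without knowing who is listening.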

Next problem, notify by e-mail

When the Order Status changes, we should persist the change somewhere and we should also notify the environment, probably using a message broker.

We have multiple options here; I will just picture some of them, for example:

  • We can associate an Action&lt;T&gt; with our event and raise the action, which calls an e-mail service
  • We can create a command handler in a different layer that sends an e-mail
  • We can create a command handler in a different layer that sends a Message to a Queue; the Queue will redirect the Message to an e-mail service

The first two options are easier and synchronous, while the third one is a more enterprise-grade solution.
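For example, a handler for the second option could look like this sketch; IEmailService, the handler body and the recording implementation are hypothetical, only the IHandles&lt;T&gt; idea comes from the post:

```csharp
using System;
using System.Collections.Generic;

public interface IHandles<T>
{
    void Handle(T domainEvent);
}

public class OrderStateChangedEvent
{
    public string OrderId { get; private set; }
    public string State { get; private set; }
    public OrderStateChangedEvent(string orderId, string state)
    {
        OrderId = orderId;
        State = state;
    }
}

// Hypothetical infrastructure service: the Domain never sees e-mail details.
public interface IEmailService
{
    void Send(string to, string subject, string body);
}

// Infrastructure-layer handler: subscribed users are notified by e-mail.
public class OrderStateChangedEmailHandler : IHandles<OrderStateChangedEvent>
{
    private readonly IEmailService emailService;
    private readonly string subscriberEmail;

    public OrderStateChangedEmailHandler(IEmailService emailService, string subscriberEmail)
    {
        this.emailService = emailService;
        this.subscriberEmail = subscriberEmail;
    }

    public void Handle(OrderStateChangedEvent domainEvent)
    {
        emailService.Send(
            subscriberEmail,
            "Order " + domainEvent.OrderId + " update",
            "New status: " + domainEvent.State);
    }
}

// Simple recording implementation, useful in tests.
public class RecordingEmailService : IEmailService
{
    public readonly List<string> Sent = new List<string>();
    public void Send(string to, string subject, string body)
    {
        Sent.Add(to + ": " + subject);
    }
}
```

The Domain raises the event and stays loosely coupled; only this handler, living in the infrastructure layer, knows that an e-mail is involved.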

The point is that we should decide who is responsible for sending the e-mail, and whether the Domain Model should be aware of this requirement.

In my opinion, in our specific case, we have an explicit requirement: whenever there is a subscribed user, we should notify them. Now, if we are smart, we can notify a service and keep the Domain Model loosely coupled from the e-mail concept.

So we need to provide a mechanism that allows a User to register for updates, and verify that the User receives an e-mail.


I just need a way to provide an e-mail service, a temporary one, that I will implement later when I am done with my domain. Mocking is probably the best option here, and because my DomainEvents manager provides methods using C# generics, I can easily fake any event handler that I want:

// using Moq to fake the handler
var handlerMock = new Mock<IHandles<OrderStateChangedEvent>>();
// ... raise the event through the Order object, then:
handlerMock.Verify(x => x.Handle(It.IsAny<OrderStateChangedEvent>()), Times.Once());

Now, if you think about it for a second, the IHandles interface could be implemented by any handler:

  • OrderStateChangedEmailHandler
  • OrderStateChangedPersistHandler
  • OrderStateChangedBusHandler

Probably you will have an infrastructure service that provides a specific handler, a data layer that provides another handler, and so on. The business logic stays in the Domain Model and the event implementation is completely loosely coupled from the Domain. Every time the Domain raises an event, the infrastructure will catch it and broadcast it to the appropriate handler(s).


The sample shown previously is a very simple object graph (one object) that provides behaviors and data. It fits perfectly into BDD because the business logic is behavior driven, thanks to the ubiquitous language, and it does not involve any external dependency (services or persistence).

We can test the logic of our Domain in complete isolation, without the disadvantage of an anemic Domain that does not carry any “real” business value. We have built a language that can be shared between the team and the analysts, and we have tests that speak.

An additional step could be to design the Commands in charge of capturing the intent of the user and start to test the complete behavior, so that we have an infrastructure where the system intercepts commands generated by the User and uses these commands to interact with the Domain Model.

Again, for Domain Model the rule is the same as for any other pattern (architectural or not). It does not require a Data Layer, it does not require a Service Bus, and so on. The Domain Model should contain behaviors and data; the external dependencies remain external dependencies …

Now, if you do your homework properly, you should end up with a good test suite around your Domain Model like the following one:


CQRS in brief

Around the web there is a lot of noise about “CQRS” and “Event Sourcing”. Almost everybody involved in a layered application is now trying to see if CQRS can fit into their current platform.

I also found some nice tutorial series about CQRS and some good explanations of Event Sourcing, but what I haven’t seen yet is a nice architectural overview of these two techniques: when they are needed, and what the pros and cons are.

So let’s start, as usual on my blog, with a brief introduction to CQRS. In the next post I will show Event Sourcing.

CQRS, when and who?

Around the end of 2010, Greg Young wrote a document about Command Query Responsibility Segregation, available at this link. The document highlights the advantages of this pattern, the different concepts of command UX (task-based UX), the CQRS pattern and the event sourcing mechanism. So, if you want to read the original version of this architectural pattern, you need to refer to the previously mentioned document.

In the meantime Martin Fowler also started to talk about it, with the post “CQRS”. Not to forget the series of posts published by Udi Dahan (the creator of NServiceBus).

Last but not least, Microsoft patterns &amp; practices created a nice series related to CQRS called CQRS Journey. It’s a full application with a companion book that can also be downloaded for free from MSDN. It shows what really happens within your team when you start to apply concepts like bounded contexts, event sourcing, domain driven design and so on.

CQRS in brief

In brief, CQRS says that we should have a layer in charge of handling data requests (queries) and a layer in charge of handling command requests (modifications). The two layers should not be aware of each other and should/may return different objects; or better, the query layer should return serializable objects (DTOs) containing information that needs to be read, while the command layer should receive serializable objects (commands) that contain the intention of the user.

Why? Because during the lifecycle of an application it is quite common that the logical model becomes more sophisticated and structured, and this change may also impact the UX, which should be independent from the core system.
In order to have a scalable and easy-to-maintain application, we need to reduce the constraints between the read model and the write model.
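In code, the separation can be as small as two unrelated serializable types, one per side (the names are purely illustrative):

```csharp
// Read side: a flat, serializable projection of the data to display.
public class PersonDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Write side: a serializable message that captures the user's intention,
// not a CRUD update of the whole record.
public class ChangePersonNameCommand
{
    public int PersonId { get; set; }
    public string NewName { get; set; }
}
```

Because the two types share nothing, the read model can evolve (new projections, denormalized views) without touching the write model, and vice versa.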

The picture below shows a brief explanation of how a CQRS application should work (this is just one possible architecture, not the silver-bullet solution for CQRS):

Read Mechanism


We will use a REST API to query the data because it provides a standard way of communication between different platforms. In our case the REST API will host something like the following API URLs:

  • Get Person: api/Person/{id} (e.g. api/Person/10)
  • Get Persons: api/Person (e.g. api/Person?Name eq ’Raf’)

These APIs don’t offer write or delete support, because that would break the CQRS principles. They return a single serializable object (DTO) or a collection of them, the result of a query sent previously by the user.

Write Mechanism


For the write model we can use WCF, send a command message (see the pattern here) and handle the command on the server side. The command will change the domain model and trigger a new message on the bus, a message that notifies everybody about the status change. It will also update the write datastore with the new information.

What’s important is that the command is no longer a CRUD operation but the intent of the user to act against the model.
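A sketch of the difference: the write side receives a named intent and delegates to a domain behavior, rather than performing a generic record update (all types here are hypothetical):

```csharp
using System;
using System.Collections.Generic;

// The command names the intent: "ship this order", not "update row".
public class ShipOrderCommand
{
    public string OrderId { get; set; }
}

public class OrderAggregate
{
    public string OrderId { get; private set; }
    public string Status { get; private set; }
    public OrderAggregate(string orderId) { OrderId = orderId; Status = "Created"; }
    public void Ship() { Status = "Shipped"; }
}

// Command handler: translates the user's intent into a domain behavior.
public class ShipOrderCommandHandler
{
    private readonly Dictionary<string, OrderAggregate> writeStore;

    public ShipOrderCommandHandler(Dictionary<string, OrderAggregate> writeStore)
    {
        this.writeStore = writeStore;
    }

    public void Handle(ShipOrderCommand command)
    {
        var order = writeStore[command.OrderId];
        order.Ship();  // a behavior, not a column update
    }
}
```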

How does it work?

The first question I try to answer as soon as I start to read about a new architectural pattern like CQRS is: “Yes, nice, but how is it supposed to work?”. Usually I don’t want to read code to understand that; I would like to see a nice diagram (UML or similar) that shows me the flow of the application through the layers, so that I can easily understand how it is supposed to be done.

I tried to apply this concept to CQRS and again, I split the diagram into read and write.

The read part

  • A User asks for some data from the view
  • The view sends an async request to the REST API
    • The REST API queries the datastore and returns a result
    • The result is returned by the callback in the view
  • The User sees the requested data when the view refreshes


The write part

  • A User sends a command from the UI (save, change, order …) containing the intention of the user
    Usually asynchronous
  • The command is sent as JSON to a WCF endpoint (one of the 1,000 possible ways to send a serializable command), which returns an acknowledgment
    • The WCF endpoint has a command handler that executes the command
    • The command changes the domain, which
      • Raises an event
      • Saves its state to the database
  • The service bus receives the events and notifies the environment
    (for instance, here you can have event sourcing listening to the events)
  • The user interface is updated by the events and the acknowledgments


As you can see, Domain Model, Event Sourcing, Data Mapping and Service Bus are optional, additional architectural patterns that can be merged with CQRS, but they are not part of it.

With CQRS your primary concern is to loosely couple the reads from the writes.

CQRS Pros and Cons

Ok, this architectural pattern (there are different opinions here …) is quite helpful, but there are some cons that we should keep in consideration when we want to adopt CQRS.

When not to use it?

  • You have a simple application (no complex logic) that executes CRUD; in this case you can easily map the domain with an ORM and provide DTOs back and forward to your client tier. Even the UX can reflect the CRUD style.
  • Your concern is not about performance and scalability. If your application will not be on a SaaS platform, but only internal to a company, CQRS will probably over-engineer the whole architecture, especially if the user interface is complex and data-entry based.
  • You can’t create a task-based UI and your customer doesn’t yet have the culture for a custom task-based application.

Also, remember that CQRS is just a portion of your architecture, the one that takes care of exposing data and/or receiving commands. You still have to deal with the data layer, your domain or your workflows, your client logic and validation, and more.

If you have already analyzed those concepts and you feel confident about your architecture, then you may consider how CQRS fits inside it, or you can simply start by splitting the read endpoints of your API from the write endpoints. Especially because right now the primary requirement for a modern application is to expose data through a simple and standardized channel like REST, so that integration with external platforms is easier and more standardized.


There are a couple of alternatives that we can keep in consideration if we don’t want to face the complexity of a CQRS solution. It really depends on what type of User Experience we need to provide.

Data entry application

In this case it’s quite easy to keep going with a solution similar to classic CRUD applications, maybe a sort of master/detail approach where a View is mapped to a Domain Entity which is bound to the database using an ORM.

This is a classic, very efficient and productive way of creating a CRUD application, without unnecessary SOA over-engineering on top of it. We don’t need DTOs, and probably we will just have a data layer that takes care of everything.


SOA platform or public API

Sometimes we may need to provide access to the system through a public API; at the moment REST seems the most feasible and globalized way. In this case we may still have a Domain and an ORM under the hood, but we can’t directly expose those to the web. So? So we need DTOs and a communication channel where we can expose those simple objects in order to send and receive data in a standardized way.

In this case we start to face more complexity due to the SOA constraints:

  • Reuse, interoperability and modularity
  • Standard compliance
  • Service identification, classification and categorization
  • more …


In this case we can have a client app that works using ViewModels bound to the DTOs exposed by our Web API, but we are still not using CQRS at all, because we don’t distinguish between the data and the intention of the user.

I hope that my notes give you a clearer picture of what CQRS is and what it is supposed to be, from my point of view, of course!

Castle WCF Facility

In the previous post we saw how to provide Inversion of Control capabilities to a WCF service. That’s pretty cool, but unfortunately it requires a big refactoring if you plan to apply the Dependency Inversion principle to an existing WCF project.

Today we will see an interesting alternative provided by the Castle project. If you are not aware of Castle, you can have a look at the community web site here. Castle provides a set of tools for .NET: Active Record, Dynamic Proxy, Windsor and a set of facilities that you can easily plug into your code.

The demo project

Before starting to look at the Castle WCF Facility, we need a new WCF project and a WCF web site to host our service. The idea is to create an empty service and host it; then we will refactor the project to include Castle Windsor and the WCF Facility.

Let’s start by creating an IIS web site, a DNS redirect and a folder for our IIS web site. The final result will be like the following one:


Now we need to create the new project in Visual Studio. I have created a solution that contains 2 projects:

  • An empty WCF Service Library named “Service.Contracts”
  • An empty WCF Service Library named “Service.Implementations”
    • which references the project “Service.Contracts”

And this would be the final result of this step:


We have just implemented the correct way of separating a WCF service implementation from its WCF service contract. The client will reference the contract assembly in order to work with the service implementation, without the need for a proxy. This concept is independent of the fact that I will use an IoC container; it’s just a lot easier than adding a “web service reference” in Visual Studio and carrying tons of useless proxy classes in your project.

Hosting WCF service

The next step is to host this WCF service inside an IIS ASP.NET or ASP.NET MVC web site. I have created a new “web site application”, and from the options I chose “WCF service application”. Then I selected my local IIS for the development path, but this is up to you.


Now Visual Studio will ask you if you want to override the current web site or use the existing content. If you choose the second option you will also have to provide the web.config configuration and initial setup, so I kindly suggest the first option.

If everything went ok you should be able to visit the default service created by Visual Studio at this url: http://staging.service.local/Service.svc.

Now we need to change the content and the name of the .svc file, but first we need to add references to Contracts and Implementations from the solution explorer!

[sourcecode language='xml' ]
<%@ ServiceHost 
    Language="C#" Debug="true" 
    Service="Service.Implementations.ServiceImplementation" %>

We need to reference the implementation because we don’t yet have a factory able to translate the IServiceContract into a ServiceImplementation instance. So the next step is to install the Castle WCF Facility NuGet package in our solution.


Add WCF facility

So far we didn’t need any external tool, because we have an empty service created by default by Visual Studio with dummy code; or probably during the tutorial you have already changed your service to do something more interesting. The next step is to add a facility in order to be able to change the default factory used by IIS to create an instance of our service.

The first step is to add a new global.asax file, if you don’t have one already, and plug the Windsor container into the Application_Start event.
This is my global.asax file:

[sourcecode language='xml' ]
<%@ Application Language="C#" Inherits="ServiceApplication" %>

Now, in the code behind file we want to create an instance of a new Windsor Container during the Application_Start event:

[sourcecode language='csharp' ]
public class ServiceApplication : HttpApplication
{
    public static WindsorContainer Container { get; set; }

    protected void Application_Start()
    {
        BuildContainer();
    }
}
And this is the code to initialize a new container. In this code we need to pay attention to the way we specify the service name, because the WCF Facility uses the Service attribute value to search inside the Windsor container. So, this is the code to register the service:

[sourcecode language='csharp' ]
private void BuildContainer()
{
    Container = new WindsorContainer();
    Container.AddFacility<WcfFacility>();
    Container.Register(
        Component.For<IServiceContract>()
            .ImplementedBy<ServiceImplementation>()
            .Named("Service.Implementations.ServiceImplementation"));
}

And this is the change we need to apply to the Service.svc file:

[sourcecode language='xml' ]
<%@ ServiceHost 
    Language="C#" Debug="true" 
    Service="Service.Implementations.ServiceImplementation"
    Factory="Castle.Facilities.WcfIntegration.DefaultServiceHostFactory, Castle.Facilities.WcfIntegration" %>

Now your service is ready to use dependency injection under the power of Castle Windsor.

Add a dependency to a WCF service using Windsor

A simple example: I have a service that requires an ILogger injected into it. I need to change the code of my service to allow injection, and then I need to register the logger in the global.asax file during the Windsor container activation process.

[sourcecode language='csharp' ]
public class ServiceImplementation : IServiceContract
{
    private readonly ILogger logger;

    public ServiceImplementation(ILogger logger)
    {
        this.logger = logger;
    }

    public string GetData(int value)
    {
        this.logger.Trace("Method started!");
        return string.Format("You entered: {0}", value);
    }
}
And this is how we change the registration in the container:

[sourcecode language='csharp' ]
// register the logger (ConsoleLogger is an illustrative implementation name)
Container.Register(Component.For<ILogger>().ImplementedBy<ConsoleLogger>());
// register the service
Container.Register(Component.For<IServiceContract>().ImplementedBy<ServiceImplementation>());

The source code is available here:

WCF and dependency inversion using Castle Windsor

When you start to work with TDD and other Agile practices, the first thing you try to apply almost everywhere in your code is the Dependency Inversion principle, because you want to invert the logic of your dependencies to make your code more testable and maintainable.

To accomplish this I am using Castle Windsor, while before 2013 I was working with Microsoft Unity. I am not going to jump into the “Which Inversion of Control container is better?” conversation, because it really depends on what you need to do. For years I was more than satisfied with Unity, but recently I started to create complex bootstrappers for my applications and I found Windsor more flexible. That’s it!

How does WCF create a service instance?

First of all, in order to apply dependency inversion inside WCF, we need to understand how WCF works and how it creates new service instances. I found this MSDN article very helpful during my research: Extending WCF.

So, this is the final result I want to obtain:


When you create a new service in WCF, you can specify the factory that will be in charge of creating the service instance.

This is the code you should use to specify the factory:

    <%@ ServiceHost Language="C#"
        Factory="Raf.DependencyInjectionHostFactory"
        Service="Raf.MyService" %>

Now, the factory should create a new instance of our service or, better, should use the IoC container to resolve an instance of that service.

The second step is to create a custom host that will add a new behavior to our service; the behavior will call the service locator (I know it’s an anti-pattern …) to inject the dependencies. In my case, Castle Windsor.


    public class DependencyInjectionHostFactory : ServiceHostFactory
    {
        protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            // method called when we hit a request to http:// ... yourservice.svc
            return new DependencyInjectionHost(serviceType, baseAddresses);
        }
    }


    public class DependencyInjectionHost : ServiceHost
    {
        // omit
        // ...
        protected override void OnOpening()
        {
            // attach the behavior
            Description.Behaviors.Add(new DependencyInjectionServiceBehavior());
            base.OnOpening();
        }
    }

Service behavior

    public class DependencyInjectionServiceBehavior : IServiceBehavior
    {
        #region IServiceBehavior Members

        public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
        {
        }

        public void AddBindingParameters(
            ServiceDescription serviceDescription,
            ServiceHostBase serviceHostBase,
            Collection<ServiceEndpoint> endpoints,
            BindingParameterCollection bindingParameters)
        {
        }

        public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
        {
            foreach (ChannelDispatcherBase cdb in serviceHostBase.ChannelDispatchers)
            {
                var cd = cdb as ChannelDispatcher;
                if (cd != null)
                {
                    foreach (EndpointDispatcher ed in cd.Endpoints)
                    {
                        ed.DispatchRuntime.InstanceProvider =
                            new DependencyInjectionInstanceProvider(serviceDescription.ServiceType);
                    }
                }
            }
        }

        #endregion
    }

So, the instance provider is able to retrieve a registration for our service using the service locator:

    public class DependencyInjectionInstanceProvider : IInstanceProvider
    {
        private readonly Type serviceType;

        public DependencyInjectionInstanceProvider(Type serviceType)
        {
            this.serviceType = serviceType;
        }

        public object GetInstance(InstanceContext instanceContext)
        {
            return this.GetInstance(instanceContext, null);
        }

        public object GetInstance(InstanceContext instanceContext, Message message)
        {
            IServiceLocator serviceLocator = ServiceLocator.Current;
            return serviceLocator.GetInstance(serviceType);
        }

        public void ReleaseInstance(InstanceContext instanceContext, object instance)
        {
            // if your container allows you to destroy instances
            // you can release your service instance here
        }
    }

That’s it. Of course my container needs to know about my service registration, and with Windsor I have two options:

    // option one, register the interface
    Component
        .For<IWriteService>()
        .ImplementedBy<WriteService>()

    // option two, register the class
    Component
        .For<WriteService>()
        .ImplementedBy<WriteService>()

And of course you need to change the markup of your .svc file depending on how you want to resolve the service: with the interface or with the class type.
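As a sketch of that point (reusing the illustrative Raf.* names from the factory snippet above, not code from the original post), the Service attribute would name either the interface or the class, matching the registration you chose:

```
<%-- resolve by interface (matches option one) --%>
<%@ ServiceHost Language="C#"
    Factory="Raf.DependencyInjectionHostFactory"
    Service="Raf.IWriteService" %>

<%-- resolve by class type (matches option two) --%>
<%@ ServiceHost Language="C#"
    Factory="Raf.DependencyInjectionHostFactory"
    Service="Raf.WriteService" %>
```

The exact Service value depends on which Windsor registration you picked, since the instance provider hands that type to the service locator.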

Dependency Injection

Now, let’s assume that my service needs an IUnitOfWork contract. How should we do that?

First of all we need to register both types in our container, and with Windsor one of the possible options is this one:

    Container.Register(
        Component
            .For<IUnitOfWork>()
            .ImplementedBy<UnitOfWork>(),
        Component
            .For<ICommandService>()
            .ImplementedBy<CommandService>());

And my command service is constructed in this way:

    public class CommandService : ICommandService
    {
        private readonly IUnitOfWork unitOfWork;

        public CommandService(IUnitOfWork unitOfWork)
        {
            this.unitOfWork = unitOfWork;
        }
    }

A note for CQRS purists: I know that a command service should get a command dispatcher injected, but for this blog post that would not make things clear enough, so a unit of work or an IRepository makes more sense.
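To close the loop, here is a hedged sketch (using the types from the snippets above) of what happens at resolution time: asking Windsor for ICommandService makes it inspect the single public constructor of CommandService and inject the registered IUnitOfWork.

```csharp
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public static class Bootstrap
{
    public static ICommandService BuildCommandService()
    {
        var container = new WindsorContainer();
        container.Register(
            Component.For<IUnitOfWork>().ImplementedBy<UnitOfWork>(),
            Component.For<ICommandService>().ImplementedBy<CommandService>());

        // Windsor sees CommandService's constructor and injects IUnitOfWork
        return container.Resolve<ICommandService>();
    }
}
```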

Agile Architecture.

Last week I presented a webinar about Agile Architecture and I got unexpected positive feedback from it. It was unexpected, first, because I didn’t expect a lot of people to be interested in this particular topic; second, because I am still at the beginning of my teaching and webinar career, so I am not yet good enough at keeping an audience interested for 60 minutes.

Anyway, considering the result of the webinar, I decided to post the video (thanks to the guys at Typemock!) together with a short introduction to Agile Architecture.

Agile Architecture webinar:

Update: slides available on SlideShare.

Agile architecture

What is Agile Architecture? It is a methodology that follows the Agile guidelines but differs in some aspects of the process.

At this web address: you can find what is probably the official website of this Agile methodology, even if you can also find a lot of interesting discussions and articles here and there on the web; just google “Agile Architecture”.

Another interesting web site is where you can find tons of information about Agile Modeling and Agile Architecture (in the webinar I spoke about both of them).

Finally, if you just want to read a quick article and understand the overall process of Agile Architecture, I found this article by J.D. Meier very helpful: “How to design with Agile Architecture”.

TFS Template

I got various requests about the TFS and dashboard templates I used during the webinar. I took them from the beta of Team Foundation Server 2011, available here. You need to customize the process to better fit the Agile Architecture methodology. I am planning to prepare a custom template for TFS 2011 as soon as the SDK is finalized.

Q/A from the webinar

Below is a short list of some of the most interesting questions and answers from the webinar. I am posting them here because during the webinar I wasn’t able to read them, as I was using the Mac version of GoToWebinar.

Q: Acceptance Tests should have been written before the model.

Not really. If you adopt strict TDD, all the acceptance tests should be written first, but when you envision the architecture it is too early to write acceptance tests. When you start the iteration and you have your requirements defined, you may start to model against the acceptance criteria for that iteration. But remember that the modeling part should be light enough to provide value for the next step.

Q: The functions of the architect are clear, but it is not clear how these functions are intended to fit into the sprints.

The sprint is composed of three parts: modeling, brainstorming and coding. The architect fits perfectly into each of these phases. I think I explained this concept during the webinar, but just to be sure: during modeling and brainstorming the architect should contribute his knowledge and collaborate in the modeling; during coding he should be an active coder too and contribute to the technical decisions, providing knowledge and expertise.

Q: Can you elaborate on what you mean with the “design phase”?

Ok, during the first iteration, the envision, there is a light design phase. You should sketch out your envision, representing the architecture and the UI, but you should not invest too much time in it.
The design phase during modeling means adopting UML to design your models.

Q: How does your “Iter: 0” relate to the PO, backlog and sprint planning?

Ah, interesting. The envision, or iteration 0, is tricky to fit into Scrum. Take the classic measure of an iteration, like 1 or 2 weeks: inside this timeframe you should have enough space and time to prepare the analysis and mocks required by the next steps. So you may consider iteration 0 as a beginning sprint.

Q: Couldn’t the use-case diagram be enough for the modelling phase? I’m interested in hearing your reasoning because I’m currently in this situation in my current project. What level of detail is enough when it comes to architecture that lets developers use their brain and problem solving power?

No, the use case is not enough because it is open to personal interpretation. With the use case and the acceptance criteria, you and your team are ready for the modeling phase, where you create the architectural style for that feature. The model should be enough to represent the current feature, and it should not be implemented; the implementation occurs in the next phase, where you apply TDD against the model discussed in the previous part. It is hard to keep the model a blueprint of the code, and often that doesn’t provide value. What is important is that the model represents what you are delivering.

Q: Does the Process workflow you describe impact the philosophy of embracing change? How can we perform radical change when we get a bigger up-front model?

No, it doesn’t, because Agile Architecture demands flexibility in your model. Model just enough to represent the feature, and remember that you can always come back, re-model and re-factor. Embracing Agile Architecture will force you to have a more dynamic, agnostic and flexible model than a classic architecture approach.
Why would you get a radical change request with a radical up-front model? If you adopt Agile you should structure these changes into small iterations.

The envision phase is the most critical: you, or the architect, need to understand what the stakeholder wants from your team. If you get that, you are in a good starting position.
The big mistake is to spend too much time on the envision, which is not the modeling phase. Mocking out your application doesn’t mean modeling everything, so the envision should take a few days, at most one week. If it takes more, you may have some smells in your team: lack of requirements, lack of business knowledge, lack of technical knowledge.
Remember that every iteration has a modeling phase and a brainstorming phase, so you can move back to the envision and adapt it to new requirements.

Speaking about Agile Architecture

On the 9th of May 2012 at 2 p.m. (GMT+1) I will speak about Agile Architecture. The webinar will be recorded and hosted by Typemock.

This is the address to register:

Below is the agenda; you are still in time to suggest some small changes:

What is Software Architecture
• What is Agile development
• How can they live together?

Discover the principles of Agile Architecture
• Lifecycle and process
• Modeling and development
• The right solution for the right problem
• How to deliver quality with TDD

If you want to hear about this topic or others, feel free to drop me an e-mail (using my blog contact form) and I will be more than glad to expand on it during the webinar. Of course it has to be something related to Agile Architecture …

If you want to read more about it, I would suggest the following articles:

Agile Architect principles

J.D. Meier “Agile Architecture”

PS: if we are numerous enough, I may get a free Typemock license to give away to one of the attendees.

TypeMock tutorial #03. Control behaviors.

A note for purists: in this tutorial I am showing you a simplified example of a Unit of Work and a Repository. Please do not worry about the complexity or simplicity of these objects, but look at the TypeMock implementations.

We are now at the third post of this series, and I found out that there are a lot of readers interested in learning TypeMock, which means the series will have to continue!

Last time we created some object mocks using TypeMock, but they were simple value objects with little or no business logic in them. This time I want to show you how to control the behaviors of a mock, so that you do not have to control or fake the entire object when you are testing a single method.

Testing an IUnitOfWork

I do not know if you have already created a data layer in your career as a software developer; if you have not, you can have a look at one of my tutorials or books about layering an application.

First of all we have a Unit of Work, which allows us to “Save”, “Update” or “Delete” the object we pass to it, using a generic signature. The contract for a Unit of Work is represented by the following image:


As you can see, we have a simple interface with three methods. We still do not have an implementation for it, but we have some expectations that we would like to pre-test using a mock, in order to be sure that the next step will be properly handled by TypeMock.
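The contract can be sketched like this; only MarkNew<T> appears in the tests below, so the other two method names are assumptions based on the “Save/Update/Delete” description above:

```csharp
// Hedged reconstruction of the IUnitOfWork contract: MarkNew<T> matches
// the tests in this post, MarkUpdated<T> and MarkDeleted<T> are assumed names.
public interface IUnitOfWork
{
    T MarkNew<T>(T entity) where T : DomainObject;
    T MarkUpdated<T>(T entity) where T : DomainObject;
    T MarkDeleted<T>(T entity) where T : DomainObject;
}
```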

The pre-requisite is that every entity in our domain has some properties inherited from a base class, DomainObject; these properties tell us the ID of the entity and whether the entity is new, modified or deleted.

The following object represents the base class for a domain entity.


The three flags are of type bool, while UniqueId is of type Guid, so by default it holds a Guid.Empty value; after we mark the object new or updated, these properties should be populated.
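The base class described above can be sketched as follows; UniqueId and IsNew are the members exercised by the tests, while the other two flag names are assumptions:

```csharp
using System;

// Hedged sketch of the DomainObject base class: UniqueId and IsNew are
// used by the tests below, IsModified and IsDeleted are assumed names.
public abstract class DomainObject
{
    public Guid UniqueId { get; set; }
    public bool IsNew { get; set; }
    public bool IsModified { get; set; }
    public bool IsDeleted { get; set; }
}
```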






Test the interface

If we test the interface we can start by writing three different expectations like the three following snippets:

Mark a new entity
    [Test]
    [Category("DataLayer")]
    public void CanMarkANewEntityToNewAndChangeItsId()
    {
        Person person = Isolate.Fake.Instance<Person>();
        IUnitOfWork uow = Isolate.Fake.Instance<IUnitOfWork>();
        uow.MarkNew(person);
        Assert.That(person, Has.Property("UniqueId").Not.EqualTo(Guid.Empty));
        Assert.That(person, Has.Property("IsNew").True);
    }

And as soon as we run this test it simply fails, because TypeMock is not able to properly mock the MarkNew method: we did not instruct it on how to do so …

The solution in this case is pretty straightforward: before invoking the MarkNew<T> method we need to teach TypeMock what our expectation for this method is when we pass a Person object to it.

    Isolate.WhenCalled(() => uow.MarkNew(person))
           .DoInstead(callContext =>
           {
               var p = callContext.Parameters[0] as Person;
               p.UniqueId = Guid.NewGuid();
               p.IsNew = true;
               return p;
           });
    var expectedPerson = uow.MarkNew(person);

In this case we have informed TypeMock that when we call the MarkNew<T> method passing the Person object as a generic parameter, it has to modify the person object and return it with a new ID and the IsNew property populated.

Another way to do that is to use TypeMock's WillReturn method, which can be used, like in this case, when we have functions rather than void methods.

    person.UniqueId = Guid.NewGuid();
    person.IsNew = true;
    Isolate.WhenCalled(() => uow.MarkNew<Person>(person)).WillReturn(person);
    var expectedPerson = uow.MarkNew(person);

In the same way we can test that the method may throw an unexpected exception, by informing TypeMock to force the mocked interface to throw.
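For example, a sketch of how the exception case could be arranged with Isolate.WhenCalled(...).WillThrow (the exception type and message here are illustrative, not from the original post):

```csharp
// Force the faked IUnitOfWork to throw when MarkNew is called,
// so the caller's error handling can be exercised.
Isolate.WhenCalled(() => uow.MarkNew(person))
       .WillThrow(new InvalidOperationException("Storage is not available"));

Assert.Throws<InvalidOperationException>(() => uow.MarkNew(person));
```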

This section of TypeMock is called “Controlling method behaviors” and you can find detailed documentation about it at this address:

controlling methods

In the next tutorial we will see how to customize a chain of mock objects and fake their methods, so that the IUnitOfWork can be used as a dependency for a Repository class.

If you want you can also download the code of every tutorial at this address on Codeplex:


Encrypted string of the week: IHcKGzESRVs=
using Blowfish CBC 64bit

TypeMock tutorial #02. Object creation.

In this new part of the TypeMock series I am going to show you how to deal with objects and classes in general, how you can create them, and what the limits of TypeMock are (honestly, there aren’t many) in dealing with objects.

First of all, I have sketched out a little domain that I have added to the demo application. I am planning to upload this demo next week so that every geek reading this blog can download the source code.

The Demo domain

The domain is a very simple one: we have an abstract base class called Person; then we have two concrete classes, Employee and Customer, that right now do not have any differences (we will see in the next tutorials why we have two different concrete classes); and then we have a value object, Address, that can be composed only if we provide a parent Person object in its constructor. The Person entity exposes a read-only IEnumerable collection of Addresses, so in order to add or remove an address we must use the provided methods AddAddress and RemoveAddress.

The following picture shows the corresponding class diagram of this small domain.


These are the most important pieces of code that you may be interested in:

Read-only collection Addresses
    private IList<Address> addresses = new List<Address>();

    public IEnumerable<Address> Addresses
    {
        get { return this.addresses; }
    }

    public void AddAddress(Address address)
    {
        if (this.addresses.Contains(address))
        {
            throw new InvalidOperationException("The address is already in the collection.");
        }
        this.addresses.Add(address);
    }

    public void RemoveAddress(Address address)
    {
        if (!this.addresses.Contains(address))
        {
            throw new InvalidOperationException("The address is not in the collection.");
        }
        this.addresses.Remove(address);
    }


Constructor of an Address obj
    public Address(Person person)
    {
        Person = person;
    }

    public Person Person { get; private set; }

As you can see, we have a few things that need to be tested, but in order to do that we have to create new instances of these objects to run our tests against, which is pretty verbose and boring.

Create the test project

The first step is to create a new Visual Studio 2010 C# class library project, call it TypeMockDemo.Fixture, and add the following references to it:


The references are pointing to:

  • my TDD framework of choice, NUnit (you can work with any TDD framework, but I personally find NUnit to be the best out there …)
  • the TypeMock assemblies, installed in the GAC of your machine
  • the TypeMockDemo project (the one containing the domain entities)

Now we can start to create the first test fixture and verify that we can create a new Person, a new Employee and a new Customer. But hold on a second: how can we mock one abstract and two sealed classes with a mocking framework? We simply can’t, unless we are using TypeMock …

Create new abstract and Sealed
    [Test]
    [Category("Domain.Isolated")]
    public void AssertThatCanCreateANewPerson()
    {
        Person person = Isolate.Fake.Instance<Person>();
        Assert.That(person, Is.Not.Null);
    }

    [Test]
    [Category("Domain.Isolated")]
    public void AssertThatCanCreateANewEmployee()
    {
        Person person = Isolate.Fake.Instance<Employee>();
        Assert.That(person, Is.Not.Null);
    }

    [Test]
    [Category("Domain.Isolated")]
    public void AssertThatCanCreateANewCustomer()
    {
        Person person = Isolate.Fake.Instance<Customer>();
        Assert.That(person, Is.Not.Null);
    }

Looking at the code, we have introduced the TypeMock method Isolate.Fake.Instance<T>. With this method we simply inform TypeMock that we want it to create a proxy of the object we want to mock; it will return a derived class of the tested one, even if we are mocking a sealed class.

If the class is sealed, TypeMock will create a new instance of the original object, while if the object is abstract, TypeMock will create a proxy version of it. The same is done for all the child properties, complex or not …


That’s simply wow: we just used two lines of code to create a mock and test it.

Now let’s move forward and verify that we are not able to add the same address twice to, or remove the same address twice from, a Person object.

Working with instances or proxies?

First of all we write this simple test, but the result is not the one we expect …

Test a collection
    [Test]
    [Category("Domain.Isolated")]
    public void AssertThatCanAddTheSameAddressTwice()
    {
        Person person = Isolate.Fake.Instance<Person>();
        Address address = Isolate.Fake.Instance<Address>();
        person.AddAddress(address);
        Assert.That(person.Addresses.Count(), Is.EqualTo(1));
        Assert.Throws<InvalidOperationException>(() => person.AddAddress(address));
    }

NUnit bombs, saying that at the first assertion the expected result is supposed to be 1 but in reality it is 0. Why? This happens because TypeMock has created a full mock proxy of the person object, so the AddAddress and RemoveAddress methods are mocks too and they do not point to the real code we have implemented …

Control the creation
    [Test]
    [Category("Domain.Isolated")]
    public void AssertThatCanAddTheSameAddressTwice()
    {
        Person person = Isolate.Fake.Instance<Person>(Members.CallOriginal, ConstructorWillBe.Called);
        Address address = Isolate.Fake.Instance<Address>();
        person.AddAddress(address);
        Assert.That(person.Addresses.Count(), Is.EqualTo(1));
        Assert.Throws<InvalidOperationException>(() => person.AddAddress(address));
    }

If we change the way TypeMock creates the Person object, we can now say to it:

Dear TypeMock, I want you to create an instance of my object and call its constructor, so that I can test the code I have implemented in this abstract class …

Et voilà, the test passes! The same goes for the remove-address test and so on …

Now, the last test we need is to be sure that when we create a new address, the constructor properly injects the parent Person object, so that we keep a reference back to the parent object from the collection of children.

Test injection in constructor
    [Test]
    [Category("Domain.Isolated")]
    public void AssertThatWhenCreateAnAddressTheParentPersonIsInjected()
    {
        Person person = Isolate.Fake.Instance<Person>(Members.CallOriginal, ConstructorWillBe.Called);
        Address address = Isolate.Fake.Instance<Address>();
        Assert.That(person, Is.Not.Null);
        Assert.That(address, Is.Not.Null);
        Assert.That(address, Has.Property("Person").EqualTo(person));
    }

We run the test and kaboom! It fails again. This time it fails on the address side, because the Person instance TypeMock is injecting is not the same one it returned to us in the previous line of code. So what can we do now?

We can manually create an Address and inject the parent Person, but that sucks … or we can do this:

Customize constructor
    [Test]
    [Category("Domain.Isolated")]
    public void AssertThatWhenCreateAnAddressTheParentPersonIsInjected()
    {
        Person person = Isolate.Fake.Instance<Person>(Members.CallOriginal, ConstructorWillBe.Called);
        Address address = Isolate.Fake.Instance<Address>(Members.CallOriginal, ConstructorWillBe.Called, person);
        Assert.That(person, Is.Not.Null);
        Assert.That(address, Is.Not.Null);
        Assert.That(address, Has.Property("Person").EqualTo(person));
    }


We simply inject the person value we want to use (the one created by TypeMock) because this is what is going to happen with live code. What we care about here is being sure that inside the Address constructor the Person parameter is assigned to the read-only Person property of the Address class, nothing more, nothing less!


As you can see, TypeMock is pretty cool and it allows you to control the way we create and fake objects. Even if we use proxies, we can still ask TypeMock to create a real mock that reflects our code, so that we can test the business logic included in our objects without the need to create complex objects manually.

If you want to read more about this topic I kindly suggest you this:

In the next tutorial we will see how to customize the methods and other behaviors of an object, and I will also publish the first part of the quiz that will allow you to win a TypeMock license worth almost 1,000 USD!

Stay tuned

New content for my Microsoft book

The Microsoft book I published a few months ago, “Building Enterprise Applications with WPF and MVVM”, has been a success, but I still got some negative feedback that my editor and I want to address.

This book is my first real book, and of course it was my first experience writing one (for this reason we kept the price of the book very low). Anyway, I got some bad feedback about missing parts, parts not explained as expected, and a misunderstanding of the book's audience and target.

I personally believe that the biggest problem is the title of the book: it drives you a little bit away from the book's actual topic. If you buy this book you will expect to get the “bible of LOB applications with MVVM”, which it is not.

For these reasons, and also to keep the book's audience engaged, we have decided to deliver new additional content for the book, for free!

The list of new content is still under discussion with my editor, but this is a rough list of the topics I will touch on or expand in this new content, which should be composed of 3 additional chapters!

  • Design patterns
    I am planning to add 20/30 additional pages to the second chapter in order to exhaustively cover the best-known design patterns
  • Advanced MVVM
    I will add a new chapter where I explain some “well known” problems you may encounter when the adoption of the MVVM pattern starts to get tricky!
  • Composite frameworks for MVVM in practice
    In this chapter we will build the same master-detail UI logic using the three most famous frameworks for WPF/Silverlight: PRISM, Caliburn and MVVM Light Toolkit

If you believe that the book is still missing other information, feel free to send me an e-mail and I will be glad to discuss it with my editor.

Note: remember also that we are planning to distribute the source code of the book as an open source project before the end of the year.

Hope this will help!