Month: July 2013

BDD frameworks for .NET

When I work on a project that involves Business Logic and Behaviors, I usually prefer to write my unit tests using a behavioral approach, because in this way I can easily project my acceptance criteria into readable pieces of code.

Right now I am already on Visual Studio 2013 and I am struggling a bit with the internal test runner UX, because it doesn’t really fit the behavioral frameworks I am used to working with.

So I decided to try out some behavioral frameworks with Visual Studio 2013. I have a simple piece of functionality that I want to test: when you create a new Order object, if the Order Id is not provided in the right format, the Order class will throw an ArgumentException.

If I want to translate this into a more readable BDD sentence, I would say:

Given that an Order should be created
When I create a new Order
Then the Order should be created only if the Order Id has a valid format
Otherwise I should get an error
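For reference, the Order class under test could look something like this. This is just a minimal sketch: the exact validation rule is my own assumption, inferred from the "ABC-12345-ABC" style of Order Id used in the examples below.

```csharp
using System;
using System.Text.RegularExpressions;

public class Order
{
    public string Id { get; private set; }

    private Order(string id) { Id = id; }

    // Factory method: reject any Order Id that does not match the
    // <letters>-<digits>-<letters> format (format assumed from the examples).
    public static Order Create(string orderId)
    {
        if (orderId == null ||
            !Regex.IsMatch(orderId, @"^[A-Za-z]{3}-\d{5}-[A-Za-z]{3}$"))
        {
            throw new ArgumentException("Invalid Order Id format", "orderId");
        }
        return new Order(orderId);
    }
}
```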

So let’s see now the different ways of writing this test …

XBehave

XBehave is probably the first framework I tried for behavior driven tests. The implementation is easy, but in order to get it working properly you have to write your tests on top of the xUnit test framework, and of course you also have to install the xUnit test runner plugin for Visual Studio.

In order to write a test, you need to provide the Given, When and Then implementations. Each step has a description and a delegate that contains the code to be tested. Finally, each method must be decorated with a [Scenario] attribute.

[Scenario]
[Example("ABC-12345-ABC")]
public void CreatingAnOrderWithValidData(string orderId, Order order)
{
    _
        .When("creating a new Order",
                   () => order = Order.Create(orderId))            
        .Then("the Order should be created",
                   () => order.Should().NotBeNull());
}

In this specific case I am using the [Example] attribute to provide dynamic data to my test, so that I can test multiple scenarios in one shot.

[Image: Visual Studio test runner listing one entry per XBehave step]

The problem? For each step of my test, the Visual Studio test runner shows a separate test entry, which is not really true and quite unreadable: this is one method and one test, but in Visual Studio I see a row for each delegate passed to XBehave.

NSpec

NSpec is another behavioral framework that allows you to write human readable tests with C#. The installation is easy, you just need to download the NuGet package. The only problem is the interaction with Visual Studio: in order to run the tests you need to call the NSpec test runner, which is not integrated into the Visual Studio test runner, so you end up reading the test results in a console window, with no way to interact with the failed tests.

In order to test your code you need to provide the Given, When and Then, like with any other behavioral framework. The difference is that this framework makes extensive use of magic strings and assumes the developer will do the same.

void given_an_order_is_created_with_valid_data()
{
    before = () => order = Order.Create("xxx-11111-xxx");
    it["the order should be created"] = 
         () => order.should_not_be_null();
}

And this is the result you will get in the Visual Studio Package Manager Console (as you can see, not really intuitive or easy to manage):

[Image: NSpec test results printed in the console window]

Honestly the test syntax is quite nice and readable, but I find it annoying and silly that I have to run my tests in a console window. Come on, the Visual Studio UI is quite well structured, and this framework should at least print the output of the tests in the test runner window.

StoryQ

I came across StoryQ just a few days ago while googling for a “fluent behavioral framework”. The project is still on www.codeplex.com but it looks pretty inactive, so take it as is. There are a few bugs and code constraints that force you into a very tight code syntax, tied to the StoryQ conventions.

The syntax is absolutely fluent and very readable; compared to the other behavioral frameworks, I guess this is the closest to the developer’s style.

new StoryQ.Story("Create Order")
.InOrderTo("Create a valid Order")
.AsA("User")
.IWant("A valid Order object")

.WithScenario("Valid order id")
    .Given(IHaveAValidOrderId)
        .And(OrderIsInitialized)
    .When(ANewOrderIsCreated)
    .Then(TheOrderShouldNotBeNull)

.ExecuteWithReport(MethodBase.GetCurrentMethod());

And the output is fully integrated into the Visual Studio test runner: just include the previous code in a test method of your preferred unit test framework.

[Image: StoryQ report inside the Visual Studio test runner]

The output is exactly what we need: a clear BDD style output with a reference to the status of each step (succeeded, failed, exception …)

BDDfy

BDDfy is the lightest framework of the ones I tried out, and honestly it is also the most readable and the easiest to implement. You can decide to use the fluent API or the standard conventions. With both implementations you can easily build a dictionary of steps that can be recycled over and over.

In my case I have created a simple Story in BDDfy and represented the story using C# and no magic strings:

[Fact]
public void OrderIsCreatedWithValidId()
{
    this.
        Given(s => s.OrderIdIsAvailable())
        .When(s => s.CreateANewOrder())
        .Then(s => s.OrderShouldNotBeNull())
        .BDDfy("Create a valid order");
}
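For completeness, the step methods referenced by the lambdas above could look like the sketch below. All of this is illustrative: the tiny Order stub only stands in for the real Order class so that the steps compile on their own.

```csharp
using System;

// Minimal stand-in for the real Order class, just to make the steps compile.
public class Order
{
    public static Order Create(string orderId) { return new Order(); }
}

public class OrderStory
{
    private string _orderId;
    private Order _order;

    // Given: a valid Order Id is available.
    public void OrderIdIsAvailable()  { _orderId = "ABC-12345-ABC"; }

    // When: a new Order is created from that id.
    public void CreateANewOrder()     { _order = Order.Create(_orderId); }

    // Then: the Order must exist.
    public void OrderShouldNotBeNull()
    {
        if (_order == null) throw new Exception("Order was not created");
    }
}
```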

And the test output fits really well into the Visual Studio test runner; plus I can also see my User Story description in the test output, so my QAs can easily interact with the test results:

[Image: BDDfy output in the Visual Studio test runner, including the User Story description]

I really like this layout style because I can already picture it in my TFS reports.

I hope you enjoyed!

Where is the Magic wand?

[Image: smiling magic wand]

I am always wondering where I was the day they handed out the Magic Wand. I mean, every time I join a new Team/Project I always have to carry with me a set of tools that make my job easier. Some of these tools are within Visual Studio, some are pre-defined architecture diagrams that I created with Visio or Balsamiq, some are links to articles and books that I share with the teams.

Unfortunately, very often I see demand for a Magic Wand, a tool that I don’t carry with me simply because I don’t have it and probably never will!

Now let’s make the concept clearer. There are usually three different situations where you will be asked to use a Magic Wand:

  1. First, and most common, the impossible timeline: you get a request from a Stakeholder or even a Product Owner to accomplish a task in a time frame that is humanly impossible
  2. Second, the Ferrari buyer: you are asked to design a functionality or a set of functionalities with a budget that is waaaaay lower than the minimum required
  3. Third, the Tetris puzzle: you are asked to add a functionality to an existing structure, but the existing structure does not allow you to implement it and you don’t have the time/resources/space to refactor the existing code

Of course there are a lot more situations where you are asked to provide a Magic Wand; the three mentioned above are the most common in my job, and this is how I usually try to tackle them, even if this doesn’t mean that my solution is always the right one …

The impossible timeline

[Image: story map created with Balsamiq]

You have a meeting with your Product Owner and you discover right away that he wants you to implement a very nice piece of functionality. It requires some refactoring of the current code, a bit of investigation on your side, and probably a couple of weeks between coding, testing and updating the documentation. Wow, great, you know that you’ll be busy in the next months with something very cool, so you are all thrilled and start to discuss with the PO a draft of a backlog that you created previously.

Right away you discover that your backlog is simply not achievable. You estimated three sprints, for a total of almost two months of work, while your PO has already told the Stakeholders that it won’t take more than a couple of weeks!

In this case, the only thing we can do is draw the backlog, probably using a Story Map approach, and share it with Stakeholders and POs together, in order to show the real amount of work required. I usually work with Balsamiq and I create story maps that look like the one on the left.

Using this approach you can clearly show the Stakeholders that in order to make an Order, for example, you need to create a few things: infrastructure, HTML views, REST methods and so on. For each task you can clearly identify how long it will take, and that will probably give them a better picture of what needs to be done.

The Ferrari buyer

The second situation where I am usually forced to use a Magic Wand is when I encounter the Ferrari buyers. Why do I call them Ferrari buyers? Well, because those types of Customers/Stakeholders, or whatever you want to call them, are usually looking for a Ferrari masterpiece but with the budget of an old Fiat. That’s why they struggle to get a “YES” answer when they propose the project or request a new functionality.

Has it ever happened to you? You propose a project for a budget, let’s say of 30K a month; the Stakeholders are thrilled and excited, everybody approves your project, but usually for a third of the budget … Wow, how can we fix this now? They want the functionality, they want us to implement it, but with half of the planned resources …

In this case it is not enough to show your customers the steps required for a task; you also need to start talking about resources and time. If you can prove how long a piece of functionality takes and how much each person involved in the project costs, you can then easily come up with a formula like this one:

Cost = Time * DeveloperCostPerHour
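Applied to a backlog, the formula is nothing more than this (all the numbers below are made up, just to show the idea):

```csharp
using System;

// Hypothetical estimate: three 2-week sprints, two developers.
double hoursPerDeveloper = 3 /* sprints */ * 10 /* working days */ * 8 /* hours */;
double developerCostPerHour = 60.0; // assumed hourly rate

// Cost = Time * DeveloperCostPerHour, summed over the two developers.
double cost = 2 /* developers */ * hoursPerDeveloper * developerCostPerHour;
Console.WriteLine(cost); // 28800
```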

And I usually implement this concept on top of my backlog, like the following picture:

[Image: backlog annotated with time and cost per task]

Ok, this is not a Magic Wand, but at least you can show the Ferrari Buyer that the optionals are not coming for free!

The Tetris Puzzle

Ok, this one has happened to all of us at least once. I can’t believe you never had to code a new functionality into an existing mess (oops), I mean into an existing application, with some crazy acceptance criteria.

Let’s make an example. A while ago I had to work on an existing Windows Forms + C# platform, used to generate mathematical results after the execution of a long running task. The long running task was executed outside the context of the app, on a different tier (the database tier).
The application had a major issue: the entire user interface was synchronous, so every time a request was made (data, commands), the user had to wait until the user interface was free.
You would say: what a pain …
Anyway, the request from the Product Owner was to make the user interface asynchronous, using as little code as possible, in the minimum achievable amount of time and at the highest possible quality level.

We analysed the application and we discovered that there were a lot of Views, User Controls and customisations of third party libraries (like Telerik, DevExpress …) that required a complete refactoring, because they were not able to receive asynchronous data properly without raising invalid thread exceptions here and there.

Well, in the end it was hard to convince the PO about a massive refactoring, but we didn’t really give him a second choice, and this is really the point: if you give them a second, cheaper choice, they will always choose that one, and you will be stuck in the middle, unable to say NO and unable to finish in time.

Well I hope you enjoyed my rant

Raffaeu

CQRS in brief

Around the web there is a lot of noise about “CQRS” and “Event Sourcing”. Almost everybody involved in a layered application is now trying to see if CQRS can fit into their current platform.

I have also found some nice tutorial series about CQRS and some good explanations of event sourcing, but what I haven’t seen yet is a nice architectural overview of these two techniques: when they are needed, and what the pros and cons are.

So let’s start, as usual on my blog, with a brief introduction to CQRS. In the next post I will cover Event Sourcing.

CQRS, when and who?

Around the end of 2010, Greg Young wrote a document about Command Query Responsibility Segregation, available at this link. The document highlights the advantages of this pattern, the different concepts of command UX (task based UX), the CQRS pattern itself and the event sourcing mechanism. So, if you want to read the original version of this architectural pattern, you need to refer to the previously mentioned document.

In the meantime M. Fowler also started to talk about it, with the post “CQRS”. Without forgetting the series of posts published by Udi Dahan (the creator of NServiceBus).

Last but not least, Microsoft patterns & practices created a nice series related to CQRS called CQRS Journey. It’s a full application with a companion book that can also be downloaded for free from MSDN. It shows what would really happen within your team when you start to apply concepts like bounded contexts, event sourcing, domain driven design and so on.

CQRS in brief

In brief, CQRS says that we should have one layer in charge of handling data requests (queries) and one layer in charge of handling modification requests (commands). The two layers should not be aware of each other and should/may return different objects; or better, the query layer should return serializable objects containing the information that needs to be read, while the command layer should receive serializable objects (commands) that contain the intention of the user.

Why? Because during the lifecycle of an application it is quite common that the logical model becomes more sophisticated and structured, and this change may also impact the UX, which should instead be independent from the core system.
In order to have a scalable and easy to maintain application, we need to reduce the constraints between the read model and the write model.
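A minimal sketch of this separation could look like the following (all the names here are made up for illustration): the query side only returns serializable read objects, while the command side only receives the intention of the user and returns no data.

```csharp
using System.Collections.Generic;

// Query side: returns serializable objects to be read, never mutates anything.
public class OrderDto
{
    public string Id { get; set; }
    public string Status { get; set; }
}

public interface IOrderQueries
{
    OrderDto GetOrder(string id);
    IList<OrderDto> FindOrders(string customerName);
}

// Command side: receives the intention of the user, returns no data.
public class CancelOrderCommand
{
    public string OrderId { get; set; }
    public string Reason { get; set; }
}

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}
```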

The picture below shows briefly how a CQRS application should work (this is just one of the possible architectures and it’s not the silver bullet solution for CQRS):

Read Mechanism

[Image: diagram of the read mechanism]

We will use a REST API to query the data because it provides a standard way of communication between different platforms. In our case the REST API will expose something like the following URLs:

Get Person: api/Person/{id} (e.g. api/Person/10)
Get Persons: api/Person (e.g. api/Person?Name eq ’Raf’)

These APIs don’t offer write or delete support, because that would break the CQRS principles. They return a single serializable object or a collection of them (DTOs), the result of a query previously sent by the user.

Write Mechanism

[Image: diagram of the write mechanism]

For the write model we can use WCF: send a command message (see the pattern here) and handle the command on the server side. The command will change the domain model and trigger a new message on the bus, a message that acknowledges everybody about the status change. It will also update the write datastore with the new information.

What’s important is that the command is no longer a CRUD operation, but the intent of the user to act against the model.
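To make the difference concrete (a hypothetical example): a CRUD style update sends the whole modified record, while a command captures only what the user actually meant to do.

```csharp
// CRUD style: the whole updated record travels to the server,
// which cannot tell WHY any of these fields changed.
public class UpdateOrderRequest
{
    public string OrderId { get; set; }
    public string Status { get; set; }
    public string ShippingAddress { get; set; }
}

// CQRS style: the command expresses the intention of the user.
public class ShipOrderCommand
{
    public string OrderId { get; set; }
    public string Courier { get; set; }
}
```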

How does it work?

The first question I try to answer as soon as I start to read about a new architectural pattern like CQRS is: “yes, nice, but how is it supposed to work?”. Usually I don’t want to read code to understand that; I would rather see a nice diagram (UML or similar) that shows me the flow of the application through the layers, so that I can easily understand how it is supposed to be done.

I tried to apply this concept to CQRS and again, I split the diagram into read and write.

The read part

  • A User asks the view for some data
  • The view sends an async request to the REST API
    • The REST API queries the datastore and returns a result
    • The result is returned to the view by the callback
  • The User sees the requested data when the view refreshes

[Image: sequence diagram of the read part]

The write part

  • A User sends a command from the UI (save, change, order …) containing the intention of the user
    Usually asynchronous
  • The command is sent as JSON to a WCF endpoint (one of the 1,000 possible ways to send a serializable command), which returns an acknowledgment
    • The WCF endpoint has a command handler that executes the command
    • The command changes the domain, which
      • Raises an event
      • Saves its state to the database
  • The service bus receives the events and acknowledges the environment
    (for instance, here you can have event sourcing listening to the events)
  • The user interface is updated by the events and the acknowledgments
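The write steps above could be sketched like this (all the names are made up; the bus is reduced to an in-memory stand-in, and the domain change and persistence are only hinted at in comments):

```csharp
using System;
using System.Collections.Generic;

// Event raised by the domain when an order is cancelled.
public class OrderCancelled
{
    public string OrderId { get; set; }
}

// Trivial in-memory stand-in for the real service bus.
public class Bus
{
    public List<object> Published = new List<object>();
    public void Publish(object evt) { Published.Add(evt); }
}

// Command handler sitting behind the WCF endpoint.
public class CancelOrderHandler
{
    private readonly Bus _bus;
    public CancelOrderHandler(Bus bus) { _bus = bus; }

    public void Handle(string orderId)
    {
        // 1. change the domain model (omitted)
        // 2. save its state to the write datastore (omitted)
        // 3. raise the event so the rest of the environment is acknowledged
        _bus.Publish(new OrderCancelled { OrderId = orderId });
    }
}
```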

[Image: sequence diagram of the write part]

As you can see, Domain Model, Event Sourcing, Data Mapping and Service Bus are optional, additional architectural patterns that can be merged with CQRS; they are not part of it.

With CQRS your primary concern is to loosely couple the reads from the writes.

CQRS Pros and Cons

Ok, this architectural pattern (there are different opinions here …) is quite helpful, but there are some cons that we should keep in consideration when we want to adopt CQRS.

When not to use it?

  • You have a simple application (no complex logic) that executes CRUD; in this case you can easily map the domain with an ORM and provide DTOs back and forth to your client tier. Even the UX can reflect the CRUD style.
  • Your concern is not performance and scalability. If your application will not run on a SaaS platform, but only internally to a company, CQRS will probably over engineer the whole architecture, especially if the user interface is complex and data-entry based.
  • You can’t create a task based UI and your customer doesn’t yet have the culture for a custom task based application.

Also, remember that CQRS is just a portion of your architecture, the one that takes care of exposing data and/or receiving commands. You still have to deal with the data layer, your domain or your workflows, your client logic and validation, and more.

If you have already analysed those concepts and you feel confident about your architecture, then you may consider how CQRS would fit inside it, or you can simply start by splitting the read endpoints of your API from the write endpoints. Especially because right now the primary requirement for a modern application is to expose data through a simple and standardized channel like REST, so that the integration with external platforms is easier and more standardized.

Alternative?

There are a couple of alternatives that we can keep in consideration if we don’t want to face the complexity of a CQRS solution. It really depends on what type of User Experience we need to provide.

Data entry application

In this case it’s quite easy to keep going with a solution similar to the classic CRUD applications, maybe a sort of master/detail approach where a View is mapped to a Domain Entity which is bound to the database using an ORM.

This is a classic, very efficient and productive way of creating a CRUD application without the unnecessary over engineering of SOA on top of it. We don’t need DTOs and probably we will just have a data layer that takes care of everything.

[Image: diagram of a classic CRUD data entry architecture]

SOA platform or API public

Sometimes we may need to provide access to the system through a public API; at the moment REST seems the most feasible and widespread way. In this case we may still have a Domain and an ORM under the hood, but we can’t directly expose those to the web. So? We need DTOs and a communication channel where we can expose those simple objects, in order to send and receive data in a standardized way.

In this case we are starting to face more complexity due to the SOA constraints:

  • Reuse, interoperability and modularity
  • Standard compliance
  • Service identification, classification and categorization
  • more …

[Image: diagram of a SOA/public API architecture exposing DTOs]

In this case we can have a client app that works using ViewModels bound to the DTOs exposed by our Web API, but we are still not using CQRS at all, because we don’t distinguish between the data and the intention of the user.

I hope that my two notes give you a slightly clearer picture of what CQRS is and what it is supposed to be, from my point of view of course! :)