Category: C#

Entity Framework 6 and Collections With DDD

If you start working with Entity Framework 6 and a real Domain modeled following the SOLID principles and the most commonly known rules of DDD (Domain Driven Design), you will quickly clash with some limits imposed by this ORM.

Let’s start with a classic example of a normal Entity that we define as UserRootAggregate. For this root aggregate we have defined the following business rules:

  1. A User Entity is a root aggregate
  2. A User Entity can hold zero or more UserSetting objects
  3. A UserSetting can be created only within the context of a User root aggregate
  4. A UserSetting can be modified or deleted only within the context of a User root aggregate
  5. A UserSetting holds a reference to its parent User

Based on these standard DDD principles I will create the two following objects:

A User Entity is a root aggregate

/// My Root Aggregate
public class User : IRootAggregate
{
   public Guid Id { get; set; }

   /// A root aggregate can be created
   public User() {  }
}

A User Entity can hold 0 or infinite amount of UserSettings

public class User : IRootAggregate
{
   public Guid Id { get; set; }
   
   public virtual ICollection<UserSetting> Settings { get; set; }

   public User()
   {
      this.Settings = new HashSet<UserSetting>();
   }
}

A UserSetting can be created or modified or deleted only within the context of a User root aggregate

    public class UserSetting
    {
       public Guid Id { get; set; }
       public string Value { get; set; }
       public User User { get; set; }
    
       internal UserSetting(User user, string value)
       {
          this.Value = value;
          this.User = user;
       }
    }
    
    /// inside the User class
    public void CreateSetting(string value)
    {
       var setting = new UserSetting (this, value);
       this.Settings.Add(setting);
    }
    
    public void ModifySetting(Guid id, string value)
    {
       var setting = this.Settings.First(x => x.Id == id);
       setting.Value = value;
    }
    
    public void DeleteSetting(Guid id)
    {
       var setting = this.Settings.First(x => x.Id == id);
       this.Settings.Remove(setting);
    }

So far so good. Now, considering that we have a Foreign Key between the UserSetting table and the User table, we can easily map the relationship with this class:

public class UserSettingMap : EntityTypeConfiguration<UserSetting>
{
   public UserSettingMap()
   {
       HasRequired(x => x.User)
           .WithMany(x => x.Settings)
           .Map(cfg => cfg.MapKey("UserID"))
           .WillCascadeOnDelete(true);
   }
}
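
For completeness, this configuration class is typically registered in the DbContext. A minimal sketch, assuming a context class (here called TrackingContext, a hypothetical name) that exposes the root aggregate:

public class TrackingContext : DbContext
{
   // only the root aggregate is exposed; settings are reached through the User graph
   public DbSet<User> Users { get; set; }

   protected override void OnModelCreating(DbModelBuilder modelBuilder)
   {
       // register the fluent mapping defined above
       modelBuilder.Configurations.Add(new UserSettingMap());
       base.OnModelCreating(modelBuilder);
   }
}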

Now below I want to show you the strange behavior of Entity Framework 6.

If you add a child object and save the context, Entity Framework properly generates the INSERT statement:

using (DbContext context = new DbContext())
{
   var user = context.Set<User>().First();
   user.CreateSetting("my value");

   context.SaveChanges();
}

If you try to UPDATE a child object, again EF is smart enough and will issue the same UPDATE statement you would expect:

using (DbContext context = new DbContext())
{
   var user = context.Set<User>()
                     .Include(x => x.Settings).First();
   var setting = user.Settings.First();
   setting.Value = "new value";

   context.SaveChanges();
}

The problem occurs with the DELETE. You would issue the following C# statement and expect that Entity Framework, like other ORMs already do, would be smart enough to issue the DELETE statement …

using (DbContext context = new DbContext())
{
   var user = context.Set<User>()
                     .Include(x => x.Settings).First();
   var setting = user.Settings.First();
   user.DeleteSetting(setting.Id);

   context.SaveChanges();
}

But you will get a nice Exception like the one below:

System.Data.Entity.Infrastructure.DbUpdateException:

An error occurred while saving entities that do not expose foreign key properties for their relationships.

The EntityEntries property will return null because a single entity cannot be identified as the source of the exception.

Handling of exceptions while saving can be made easier by exposing foreign key properties in your entity types.

See the InnerException for details. —>

System.Data.Entity.Core.UpdateException: A relationship from the ‘UserSetting_User’ AssociationSet is in the ‘Deleted’ state.

Given multiplicity constraints, a corresponding ‘UserSetting_User_Source’ must also in the ‘Deleted’ state.

This means that EF does not understand that we want to delete the child object, so inside the scope of our Database Context we have to do this:

using (DbContext context = new DbContext())
{
   var user = context.Set<User>()
                     .Include(x => x.Settings).First();
   var setting = user.Settings.First();
   user.DeleteSetting(setting.Id);

   // inform EF
   context.Entry(setting).State = EntityState.Deleted;

   context.SaveChanges();
}

I have searched a lot about this problem, and you can read from the Entity Framework team that this feature is still not available in the product:

http://data.uservoice.com/forums/72025-entity-framework-feature-suggestions/suggestions/1263145-delete-orphans-support
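
A workaround that is often suggested, while waiting for that feature, is to turn the association into an identifying relationship, so that removing a UserSetting from the collection is enough for EF to issue the DELETE. This is only a sketch: it assumes you are willing to expose a UserId foreign key property on UserSetting and to make it part of the primary key.

public class UserSettingMap : EntityTypeConfiguration<UserSetting>
{
   public UserSettingMap()
   {
       // identifying relationship: the child key contains the parent key,
       // so an orphaned UserSetting is deleted instead of being left dangling
       HasKey(x => new { x.Id, x.UserId });

       HasRequired(x => x.User)
           .WithMany(x => x.Settings)
           .HasForeignKey(x => x.UserId)
           .WillCascadeOnDelete(true);
   }
}

With this mapping the DeleteSetting method shown earlier works without informing the context manually, at the price of a less pure Domain Model (the foreign key leaks into the entity).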

NHibernate Fetch strategies

In this blog post I want to illustrate how we can eager load child and parent objects into memory using NHibernate, and how to avoid the nasty problem of creating Cartesian products. I will show you how this can be achieved using the three different query APIs implemented inside NHibernate.

For this example I am using version 3.3 of NHibernate against a SQLite database to have some quick “in memory” tests.

The Domain Model

My model is quite straightforward: it is composed of a Person entity and two child collections, Address and Phone, as illustrated in the following picture:

[Image: Person domain model with the Address and Phone child collections]

For the Id I am using a System.Guid data type, for the collections I am using an IList<T>, and the mapping is achieved using a <bag> with the inverse="true" attribute. I omit the remaining mappings for simplicity.

<class name="Person" abstract="false" table="Person">
  <id name="Id">
    <generator class="guid.comb" />
  </id>

  <property name="FirstName" />
  <property name="LastName" />

  <bag name="Addresses" inverse="true" table="Address" cascade="all">
    <key column="PersonId" />
    <one-to-many class="Address"/>
  </bag>

  <bag name="Phones" inverse="true" table="Phone" cascade="all">
    <key column="PersonId" />
    <one-to-many class="Phone"/>
  </bag>
</class>

NHibernate Linq

With the Linq extension for NHibernate, I can easily eager load the two child collections using the following syntax:

Person expectedPerson = session.Query<Person>()
    .FetchMany(p => p.Phones)
        .ThenFetch(p => p.PhoneType)
    .FetchMany(p => p.Addresses)
    .Where(x => x.Id == person.Id)
    .ToList().First();

The problem with this query is that I will receive a nasty Cartesian product. Why? Well, let’s have a look at the SQL generated by this Linq query using NHibernate Profiler:

[Image: SQL generated by the Linq query, captured with NHibernate Profiler]

In my case I have 2 Phone records and 1 Address record that belong to the parent Person. If I have a look at the statistics I can see that the total number of rows is wrong:

[Image: NHibernate Profiler statistics showing the inflated row count]

Unfortunately, the following test passes, which means that my Root Aggregate entity is loaded incorrectly (the single Address row is duplicated by the join):

// wrong because address is only 1
expectedPerson.Addresses
   .Count.Should().Be(2, "There is only one address");
expectedPerson.Phones
   .Count.Should().Be(2, "There are two phones");

The solution is to split the collections into different queries, without affecting the Database performance too much. In order to achieve this goal I have to use the Future syntax and tell NHibernate to build the Root Aggregate with three batched database calls:

// create the first query
var query = session.Query<Person>()
      .Where(x => x.Id == person.Id);
// batch the collections
query
   .FetchMany(x => x.Addresses)
   .ToFuture();
query
   .FetchMany(x => x.Phones)
   .ThenFetch(p => p.PhoneType)
   .ToFuture();
// execute the queries in one roundtrip
Person expectedPerson = query.ToFuture().ToList().First();

Now, if I profile my query, I can see that the entities are loaded using 3 SQL queries batched together into one single database call:

[Image: NHibernate Profiler showing the three queries batched into a single roundtrip]

Regarding performance, this is the difference between the Cartesian product and the batched call:

[Image: performance comparison between the Cartesian product and the batched queries]

NHibernate QueryOver

The same mechanism is also available for the QueryOver<T> component. We can instruct NHibernate to create a left outer join, and get a Cartesian product, with the following statement:

Phone phone = null;
PhoneType phoneType = null;
// One query
Person expectedPerson = session.QueryOver<Person>()
    // Inner Join
    .Fetch(p => p.Addresses).Eager
    // left outer join
    .Left.JoinAlias(p => p.Phones, () => phone)
    .Left.JoinAlias(() => phone.PhoneType, () => phoneType)
    .Where(x => x.Id == person.Id)
    .TransformUsing(Transformers.DistinctRootEntity)
    .List().First();

As you can see here I am trying to apply the DistinctRootEntity transformer, but unfortunately the transformer does not work if you eager load more than one child collection, because the Database returns more than one row for the same Root Aggregate.

Also in this case, the alternative is to Batch the collections and send 3 queries to the Database in one round trip:

Phone phone = null;
PhoneType phoneType = null;
// prepare the query
var query = session.QueryOver<Person>()
    .Where(x => x.Id == person.Id)
    .Future();
// eager load in one batch the first collection
session.QueryOver<Person>()
    .Fetch(x => x.Addresses).Eager
    .Future();
// second collection with grandchildren
session.QueryOver<Person>()
    .Left.JoinAlias(p => p.Phones, () => phone)
    .Left.JoinAlias(() => phone.PhoneType, () => phoneType)
    .Future();
// execute the query
Person expectedPerson = query.ToList().First();

Personally, the only thing that I don’t like about QueryOver<T> is the syntax: as you can see from my complex query, I need to create null alias variables for the Phone and PhoneType objects. I don’t like it because when I batch 3 or 4 collections I always end up with 3 or 4 variables that are quite ugly and useless.

NHibernate HQL

HQL is a great query language: it allows you to write almost any kind of complex query, and its biggest advantage, compared to Linq or QueryOver<T>, is that it is fully supported by the framework.

The only downside is that it requires “magic strings”, so you must be very careful about the queries you write, because it is very easy to write a wrong query and get a nice runtime exception.

So, also in this case, I can eager load everything in one shot, and get again a Cartesian product:

Person expectedPerson =
    session.CreateQuery(@"
    from Person p 
    left join fetch p.Addresses a 
    left join fetch p.Phones ph 
    left join fetch ph.PhoneType pt
    where p.Id = :id")
        .SetParameter("id", person.Id)
        .List<Person>().First();

Or batch 3 different HQL queries in one Database call:

// prepare the query
var query = session.CreateQuery("from Person p where p.Id = :id")
        .SetParameter("id", person.Id)
        .Future<Person>();
// eager load first collection
session.CreateQuery("from Person p 
                     left join fetch p.Addresses a where p.Id = :id")
        .SetParameter("id", person.Id)
        .Future<Person>();
// eager load second collection
session.CreateQuery("from Person p
                     left join fetch p.Phones ph 
                     left join fetch ph.PhoneType pt where p.Id = :id")
        .SetParameter("id", person.Id)
        .Future<Person>();

Eager Load vs Batch

I ran some tests in order to understand whether performance is better when:

  • Running an eager query and cleaning the duplicated records manually
  • Running a batched set of queries and getting a clean Root Aggregate

These are my results:

[Image: benchmark results comparing the eager load plus cleanup with the batched queries]

Surprisingly, the eager load plus the C# cleanup is slower than the batched call.

Domain Model in Brief

This series is getting quite interesting, so I decided to post something about the Domain Model pattern: just a little introduction to understand what a Domain Model is and what it is not.

Honestly, this pattern is often misunderstood. I see tons of examples of anemic models that reflect the database structure 1:1, mirroring the classic Active Record pattern. But a Domain Model is more than that …

Let’s start as usual with a definition, and we take this definition from one of the founders of the Domain Model pattern, Martin Fowler:

An object model of the domain that incorporates both behavior and data.

But unfortunately I almost never see the first part (the behaviors). It’s easy to create a completely anemic object graph and then hydrate it using an ORM that retrieves the data from the database; but if we skip the logical part of the object graph, we are still far from having a correct Domain Model.

This means that we can have anemic objects that represent our data structure, but we can’t say that we are using Domain Model.

Our Domain Model

In order to better understand the Domain Model pattern we need a sample Domain, something that aggregates together multiple objects with a common scope.

In my case, I have an Order Tracking System story that I want to implement using the Domain Model pattern. My story has a main requirement:

As a Customer I want to register my Order Number and e-mail, in order to receive updates about the Status of my Order

So, let’s draw a few acceptance criteria to register an Order:

  • An Order can be created only by a user of type Administrator and should have a valid Order id
  • An Order can be changed only by a user of type Administrator
  • Any user can query the status of an Order
  • Any user can subscribe and receive updates about Order Status changes using an e-mail address

If we want to represent this Epic using a Use Case diagram, we will probably end up with something like this:

[Image: Use Case diagram for the Order Tracking System]

Great, now that we have the specifications, we can open Visual Studio (well, I did it previously when I created the diagram …) and start to code. Or better, start to investigate our Domain objects further.

It’s always better to have an iteration 0 in DDD when you start to discover the Domain and the requirements together with your team. I usually discover my Domain using mockups like the following one, where I share ideas and concepts in a fancy way.

[Image: mockup of the order registration screen]

Create a new Order

An Agent can create an Order, and the Order should have a proper Order Id. The Order Id should reflect some business rules, so it’s ok to have this piece of validation logic inside our domain object. We can also say that the Order Id is a requirement for the Order object, because we can’t create an Order object without passing a valid Order Id. So, it makes perfect sense to encapsulate this concept into the Order entity.

A simple test to cover this scenario would be something like this:

[Fact]
public void OrderInitialStateIsCreated()
{
    this
        .Given(s => s.GivenThatAnOrderIdIsAvailable())
        .When(s => s.WhenCreateANewOrder())
        .Then(s => s.ThenTheOrderShouldNotBeNull())
            .And(s => s.TheOrderStateShouldBe(OrderState.Created))
        .BDDfy("Set Order initial Status");
}

[Image: BDDfy output for the “Set Order initial Status” scenario]

If you don’t know what [Fact] is: it is used by xUnit, an alternative test framework that can run within the Visual Studio 2013 test runner using the available NuGet test runner extension package.

From now on, we have our first Domain Entity that represents the concept of an Order. This entity will be my Aggregate Root, an entity that binds together multiple objects of my Domain and is in charge of guaranteeing the consistency of changes made to those objects.

Domain Events

M. Fowler defines a Domain event in the following way:

A Domain event is an event that captures things in charge of changing the state of your application

Now, if we change the Order Status we want to be sure that an event is fired by the Order object to inform us about the Status change. Of course this event will not be triggered when we create a new Order object. The event should contain the Order Id and the new Status; in this way we will have the key information about our domain object and we may not be required to repopulate the object from the database.

public void SetState(OrderState orderState)
{
    state = orderState;
    if(OrderStateChanged != null)
    {
        OrderStateChanged(new OrderStateChangedArgs(this.orderId, this.state));
    }
}
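
The snippet above assumes an event and an args class declared on the Order entity; they are not shown in the post, so the following is only a guess at their shape, based on how they are used:

/// hypothetical event published by the Order entity
public event Action<OrderStateChangedArgs> OrderStateChanged;

public class OrderStateChangedArgs
{
    public OrderStateChangedArgs(string orderId, OrderState state)
    {
        this.OrderId = orderId;
        this.State = state;
    }

    public string OrderId { get; private set; }
    public OrderState State { get; private set; }
}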

Using the Domain Event I can easily track the changes that affect my Order Status and rebuild the status at a specific point in time (I may be required to investigate the order). At the same time I can easily verify the current status of my Order by retrieving the latest event triggered by the Order object.

The ubiquitous language

With my Order object created, I now need to verify that the behaviors assigned to it are correct, and the only way to verify that is to contact a Domain Expert, somebody who is an expert in the field of Order Tracking Systems. With this person I will have to speak a common language, the ubiquitous language mentioned by Eric Evans.

With this language in place, I can easily write some Domain specifications that guarantee the correctness of the Domain logic, and let the Domain Expert verify them.

[Image: BDDfy specification report for the Order Status behaviors]

This very verbose test is still a unit test and it runs in memory. So I can easily introduce BDD into my application without needing a heavy test harness behind my code. BDDfy also allows me to produce nice documentation for the QA team, in order to analyze the logical paths required by the acceptance criteria.

Plus, I am not working with mechanical code anymore: I am building my own language that I can share with my team and with the analysts.

Only an Administrator can Create and Change an Order

Second requirement: now that we know how to create an Order and what happens when somebody changes the Order Status, we can think of creating an object of type User and distinguishing between a normal user and an administrator. We need to keep this concept, again, inside the Domain graph. Why? Very simple: because we chose to work with the Domain Model pattern, so we need to keep logic, behaviors and data within the same object graph.

So, the requirements are the following:

  • When we create an Order we need to know who you are
  • When we change the Order status, we need to know who you are

In Domain Driven Design we need to give the responsibility for this action to somebody; that is, overall, the logic you have to apply when designing a Domain Model: identify the responsibility and the object in charge of it.

In our case we can tell the Order object that, in order to be created, it requires an object of type User, and verify that the user passed is of type Administrator. This is one possible option; another one could be to involve an external domain service, but in my opinion it is not a crime if the Order object is aware of the concept of being an administrator.

So below are my refactored Behavior tests:

[Image: refactored behavior tests covering the Administrator requirement]

The pre-conditions verified in the previous tests:

  • order id is valid
  • user not null
  • user is an administrator

are checked inside my Order object constructor, because in DDD, and more precisely in my Use Case, it doesn’t make any sense to have an invalid Order object. So the Order object is responsible for verifying that the data provided to the constructor is valid.

private Order(string orderId, User user)
{
    Condition
       .Requires(orderId)
       .Contains("-","The Order Id has invalid format");
    Condition
       .Requires(user)
       .IsNotNull("The User is null");
    Condition
       .Requires(user.IsAdmin)
       .IsTrue("The User is not Administrator");

    this.orderId = orderId;
}

Note: for my assertions I am using a nice project called Conditions that allows you to write this syntax.

Every time I have an Order object in my hands I already know that it is valid, because I can’t create an invalid Order object.
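
Since the constructor is private, creation has to go through some kind of factory method; the BDD samples later in this post call Order.Create. A minimal sketch of how it could look (the exact signature is an assumption):

/// inside the Order class
public static Order Create(string orderId, User user)
{
    // the private constructor enforces every invariant,
    // so only valid Order instances can leave this method
    var order = new Order(orderId, user);

    // the initial state is set directly, so no Domain Event is raised on creation
    order.state = OrderState.Created;
    return order;
}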

Register for Updates

Now that we know how to create an Order, we need to wrap the event logic into a more sophisticated object. We should have a sort of Event broker, also known as Message broker, able to monitor events globally.

Why? Because I can imagine that in my CQRS architecture I will have a process manager that will receive commands and execute them in sequence; while the commands execute, the process manager will also interact with the events raised by the Domain Models involved in the process.

I followed the article by Udi Dahan available here and I found a nice and brilliant solution: creating objects that act like Domain Events.

The final result is something like this:

public void SetState(OrderState orderState)
{
    state = orderState;
    DomainEvents.Raise(
        new OrderStateChangedEvent(this.orderId, this.state));
}

The DomainEvents component is a global component that uses an IoC container in order to “resolve” the list of subscribers to a specific event.
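
A minimal sketch of such a component, loosely based on Udi Dahan’s approach; the Container property and the IHandles<T> contract match what is used later in this post, while the IDomainEvent marker interface and the use of Castle Windsor are assumptions:

public interface IDomainEvent { }

public interface IHandles<T> where T : IDomainEvent
{
    void Handle(T domainEvent);
}

public static class DomainEvents
{
    // exposed so that tests can register fakes or mocks
    public static IWindsorContainer Container { get; set; }

    public static void Raise<T>(T domainEvent) where T : IDomainEvent
    {
        if (Container == null) return;

        // "resolve" every subscriber registered for this event type
        foreach (var handler in Container.ResolveAll<IHandles<T>>())
        {
            handler.Handle(domainEvent);
        }
    }
}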

Next problem, notify by e-mail

When the Order Status changes, we should persist the change somewhere and we should also notify the environment, probably using a Message Broker.

We have multiple options here; I can picture some of them, for example:

  • We can associate an Action<T> to our event, and raise the action that call an E-mail service
  • We can create a command handler in a different layer that will send an E-mail
  • We can create a command handler in a different layer that will send a Message to a Queue, this Queue will redirect the Message to an E-mail service

The first two options are easier and synchronous, while the third one would be a more enterprise solution.

The point is that we should decide who is responsible for sending the e-mail, and whether the Domain Model should be aware of this requirement.

In my opinion, in our specific case, we have an explicit requirement: whenever there is a subscribed user, we should notify him. Now, if we are smart, we can notify a service and keep the Domain Model loosely coupled from the E-mail concept.

So we need to provide a mechanism that allows a User to register for updates, and verify that the User receives an E-mail.

[Image: behavior test verifying that a registered user receives an e-mail notification]

I just need a way to provide an E-mail service, a temporary one, that I will implement later when I am done with my domain. In this case Mocking is probably the best option, and because my DomainEvents manager exposes methods using C# generics, I can easily fake any event handler that I want:

var handlerMock = 
      DomainEvents.Container
         .Resolve<Mock<IHandles<OrderStateChangedEvent>>>();
handlerMock
      .Verify(x => x.Handle
                        (It.IsAny<OrderStateChangedEvent>()), 
                         Times.Once());

Now, if you think about it for a second, the IHandles interface could be implemented by any handler:

  • OrderStateChangedEmailHandler
  • OrderStateChangedPersistHandler
  • OrderStateChangedBusHandler

Probably you will have an Infrastructure service that provides a specific handler, a data layer that provides another handler, and so on. The business logic stays in the Domain Model and the event implementation is completely loosely coupled from the Domain. Every time the Domain raises an event, the infrastructure catches it and broadcasts it to the appropriate handler(s).
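
As an example, the e-mail handler could live in the infrastructure layer and look roughly like this; IEmailService and its Send method are hypothetical placeholders, and I assume the event exposes the OrderId and State it was built with:

public class OrderStateChangedEmailHandler : IHandles<OrderStateChangedEvent>
{
    private readonly IEmailService emailService; // hypothetical e-mail gateway

    public OrderStateChangedEmailHandler(IEmailService emailService)
    {
        this.emailService = emailService;
    }

    public void Handle(OrderStateChangedEvent domainEvent)
    {
        // notify the subscribed users about the new status
        this.emailService.Send(
            string.Format("Order {0} is now {1}",
                domainEvent.OrderId, domainEvent.State));
    }
}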

Conclusion

The sample shown previously is a very simple object graph (1 object) that provides behaviors and data. It fits perfectly into BDD because the business logic is behavior driven thanks to the ubiquitous language and it does not involve any external dependency (services or persistence).

We can test the logic of our Domain in complete isolation without having the disadvantage of an anemic Domain that does not carry any “real” business value. We built a language that can be shared between the team and the analysts and we have tests that speak.

Probably an additional step could be to design the Commands in charge of capturing the intent of the user and start to test the complete behavior, so that we will have an infrastructure where the system intercepts the commands generated by the User and uses these commands to interact with the Domain Model.

Again, even for the Domain Model the rule is the same as for any other pattern (architectural or not). It does not require a Data Layer, it does not require a Service Bus, and so on. The Domain Model should contain behaviors and data; the external dependencies remain external dependencies …

Now, if you do your homework properly, you should be able to get some good test coverage around your Domain Model, like the following:

[Image: test results covering the Domain Model]

BDD frameworks for .NET

When I work on a project that involves Business Logic and Behaviors, I usually prefer to write my unit tests using a behavioral approach, because in this way I can easily project my acceptance criteria into readable pieces of code.

Right now I am already on Visual Studio 2013 and I am struggling a bit with the built-in test runner UX, because it doesn’t really fit the behavioral frameworks I am used to working with.

So I decided to try out some behavioral frameworks with Visual Studio 2013. I have a simple piece of functionality that I want to test. When you create a new Order object, if the Order Id is not provided with the right format, the class Order will throw an ArgumentException.

If I want to translate this into a more readable BDD sentence, I would say:

Given that an Order should be created
When I create a new Order
Then the Order should be created only if the Order Id has a valid format
Otherwise I should get an error

So let’s see now the different ways of writing this test …

XBehave

XBehave is probably the first framework I tried for behavior-driven tests. The implementation is easy, but in order to get it working properly you have to write your tests on top of the xUnit test framework, and of course you also have to install the xUnit test runner plugin for Visual Studio.

In order to write a test, you need to provide the Given, When and Then implementations. Each step has a description and a delegate that contains the code to be tested. Finally, each method must be decorated with a [Scenario] attribute.

[Scenario]
[Example("ABC-12345-ABC")]
public void CreatingAnOrderWithValidData(string orderId, Order order)
{
    _
        .When("creating a new Order",
                   () => order = Order.Create(orderId))            
        .Then("the Order should be created",
                   () => order.Should().NotBeNull());
}

In this specific case I am using the [Example] attribute to provide dynamic data to my test, so that I can test multiple scenarios in one shot.

[Image: XBehave output in the Visual Studio test runner, one row per step]

The problem? For each step of my test, the Visual Studio test runner shows a separate test, which is misleading and quite unreadable: this is one method and one test, but in Visual Studio I see a test row for each delegate passed to XBehave.

NSpec

NSpec is another behavioral framework that allows you to write human readable tests with C#. The installation is easy, you just need to download the NuGet package; the only problem is the interaction with Visual Studio. In order to run the tests you need to call the NSpec test runner, which is not integrated into the Visual Studio test runner, so you have to read the test results in a console window, without the ability to interact with the failed tests.

In order to test your code you need to provide the Given, When and Then steps, like with any other behavioral framework. The difference is that this framework makes extensive use of magic strings and it assumes the developer will do the same.

void given_an_order_is_created_with_valid_data()
{
    before = () => order = Order.Create("xxx-11111-xxx");
    it["the order should be created"] = 
         () => order.should_not_be_null();
}

And this is the result you will get in the Visual Studio Package Manager Console (as you can see, not really intuitive or easy to manage):

[Image: NSpec output in the Package Manager Console]

Honestly, the test syntax is quite nice and readable, but I find it annoying and silly that I have to run my tests on a console screen; come on, the Visual Studio UI interaction is quite well structured and this framework should at least print the output of the tests into the test runner window.

StoryQ

I came across StoryQ just a few days ago, while I was googling for a “fluent behavioral framework”. The project is still on www.codeplex.com but it looks pretty inactive, so take it as is. There are a few bugs and code constraints that force you to implement a very tight code syntax, tied to the StoryQ conventions.

The syntax is absolutely fluent and very readable; compared to other behavioral frameworks, I guess this is the nearest to the developer’s style.

new StoryQ.Story("Create Order")
.InOrderTo("Create a valid Order")
.AsA("User")
.IWant("A valid Order object")

.WithScenario("Valid order id")
    .Given(IHaveAValidOrderId)
        .And(OrderIsInitialized)
    .When(ANewOrderIsCreated)
    .Then(TheOrderShouldNotBeNull)

.ExecuteWithReport(MethodBase.GetCurrentMethod());

And the output is fully integrated into the Visual Studio test runner; just include the previous code in a test method of your preferred unit test framework.

[Image: StoryQ output in the Visual Studio test runner]

The output is exactly what we need: a clear BDD output style with a reference to each step status (succeeded, failed, exception …).

BDDfy

BDDfy is the lightest framework of the ones I tried out and honestly it is also the most readable and easy to implement. You can decide to use the Fluent API or the standard conventions. With both implementations you can easily build a dictionary of steps that can be recycled over and over.

In my case I have created a simple Story in BDDfy and represented the story using C# and no magic strings:

[Fact]
public void OrderIsCreatedWithValidId()
{
    this.
        Given(s => s.OrderIdIsAvailable())
        .When(s => s.CreateANewOrder())
        .Then(s => s.OrderShouldNotBeNull())
        .BDDfy("Create a valid order");
}

And the test output really fits into the Visual Studio test runner; plus I can also retrieve my User Story description in the test output, so that my QAs can easily work with the test results:

[Image: BDDfy output in the Visual Studio test runner including the User Story description]

I really like this layout style because I can already picture it in my TFS reports.

I hope you enjoyed!

Castle WCF Facility

In the previous post we saw how to provide Inversion of Control capabilities to a WCF service. That’s pretty cool, but unfortunately it requires a big refactoring if you plan to apply the Dependency Inversion principle to an existing WCF project.

Today we will see an interesting alternative provided by the Castle project. If you are not aware of Castle, you can have a look at the community web site here. Castle provides a set of tools for .NET: Active Record, Dynamic Proxy, Windsor and a set of facilities that you can easily plug into your code.

The demo project

Before starting to have a look at the Castle WCF Facility we need a new WCF project and a WCF web site to host our service. The idea is to create an empty service and host it; then we will refactor the project to include Castle Windsor and the WCF facility.

Let’s start by creating an IIS web site, a DNS redirect and a folder for our IIS web site. The final result will be like the following one:

[Image: IIS web site, DNS entry and folder for the service host]

Now we need to create the new project in Visual Studio. I have created a solution that contains 2 projects:

  • Empty WCF Service Library named “Service.Contracts”
  • Empty WCF Service Library named “Service.Implementations”
    • references the project “Service.Contracts”

And this would be the final result of this step:

[Image: solution with the Service.Contracts and Service.Implementations projects]

We have just implemented the correct way of creating a WCF service implementation and a WCF service contract. The Client will reference the contract assembly in order to be able to work with the service implementation, without the need for a generated Proxy. This concept is independent of the fact that I will use an IoC container; it’s just a lot easier than adding a “web service reference” in Visual Studio and carrying tons of useless proxy classes in your project.
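
To make that concrete, a client that references only Service.Contracts can call the service through a ChannelFactory; a minimal sketch, where the binding and the address are assumptions based on the hosting configured later in this post:

[sourcecode language='csharp' ]
// the client knows only the contract assembly, no generated proxy classes
var factory = new ChannelFactory<IServiceContract>(
    new BasicHttpBinding(),
    new EndpointAddress("http://staging.service.local/Service.svc"));

IServiceContract client = factory.CreateChannel();
string result = client.GetData(42);

((IClientChannel)client).Close();
factory.Close();
[/sourcecode]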

Hosting WCF service

The final step is to host this WCF Service inside an IIS ASP.NET or ASP.NET MVC web site. I have created a new “web site application” and from the options I chose “WCF service application”. Then I selected my local IIS for the development path, but this is up to you.

[Image: new WCF service application options in Visual Studio]

Now Visual Studio will ask you if you want to override the current web site or use the existing content. If you choose the second option you will also have to provide the web.config configuration and the initial setup, so I kindly suggest the first option.

If everything went ok you should be able to visit the default service created by Visual Studio at this url: http://staging.service.local/Service.svc.

Now we need to change the content and the name of the .svc file, but before that we need to add a reference to Contracts and Implementations from the solution explorer!

[sourcecode language='php' ]
<%@ ServiceHost 
    Language="C#" Debug="true" 
    Service="Service.Implementations.ServiceImplementation" %>
[/sourcecode]

We need to reference the Implementation because we don’t have a factory yet that is able to translate the IServiceContract into a ServiceImplementation class. So the next step is to install the WCF Castle facility NuGet package in our solution.

[Image: Castle WCF Integration Facility NuGet package]

Add WCF facility

So far we didn’t need any external tool because we have an empty service created by default by Visual Studio with dummy code, or probably during the tutorial you have already changed your service to do something more interesting. The next step is to add a facility in order to be able to change the default factory used by IIS to create an instance of our service.

The first step is to add a new global.asax file, if you don’t have one already, and plug the Windsor container into the Application_Start event.
This is my global.asax file:

[sourcecode language='php' ]
<%@ Application Language="C#" Inherits="ServiceApplication" %>
[/sourcecode]

Now, in the code behind file we want to create an instance of a new Windsor Container during the Application_Start event:

[sourcecode language='csharp' ]
public class ServiceApplication : HttpApplication
{
    public static WindsorContainer Container { get; set; }

    protected void Application_Start()
    {
        BuildContainer();
    }
}
[/sourcecode]

And this is the code to initialize a new container. In this code we need to pay attention to the way we specify the service name, because the WCF facility uses the Service attribute value to search inside the Windsor container. So, this is the code to register the service:

[sourcecode language='csharp' ]
private void BuildContainer()
{
    Container = new WindsorContainer();
    Container.Kernel.AddFacility<WcfFacility>();
    Container.Kernel.Register(Component.For<IServiceContract>()
                                        .ImplementedBy<ServiceImplementation>()
                                        .Named("ServiceContract"));
}
[/sourcecode]

And this is the change we need to apply to the Service.svc file:

[sourcecode language='php' ]
<%@ ServiceHost 
    Language="C#" Debug="true" 
    Service="ServiceContract"
    Factory="Castle.Facilities.WcfIntegration.DefaultServiceHostFactory, Castle.Facilities.WcfIntegration" %>
[/sourcecode]

Now your service is ready to use dependency injection under the power of Castle Windsor.

Add a dependency to a WCF service using Windsor

Simple example: I have a service that requires an ILogger injected into it. I need to change the code of my service to allow injection, and then I need to register the logger in the global.asax file during the Windsor container activation process.

[sourcecode language='csharp' ]
public class ServiceImplementation : IServiceContract
{
    private readonly ILogger logger;

    public ServiceImplementation(ILogger logger)
    {
        this.logger = logger;
    }

    public string GetData(int value)
    {
        this.logger.Trace("Method started!");
        return string.Format("You entered: {0}", value);
    }
}
[/sourcecode]

And this is how we change the registration in the container:

[sourcecode language='csharp' ]
Container.Kernel.Register(
    // register the logger
    Component.For<ILogger>()
        .ImplementedBy<TraceLogger>(),
    // register the service
    Component.For<IServiceContract>()
        .ImplementedBy<ServiceImplementation>()
        .Named("ServiceContract"));
[/sourcecode]

The source code is available here:

http://sdrv.ms/16BLYoI

NHibernate cache system. Part 3

In this series of articles we saw how the cache system is implemented in NHibernate and what we can do in order to use it. We also saw that we can choose a cache provider, but we haven’t looked yet at which providers we can use.

I personally have my own opinion about the 2nd level cache providers available for NHibernate 3.2, and I am more than happy if you would like to share your experience with me.

Below is a list of the major and the most famous 2nd level cache providers I know:

[Image: list of second level cache providers]

I would personally suggest you answer the following questions in order to understand which cache provider you need for your project:

  • Size of your project (small, standard, enterprise)
  • Cache technology you have already in place
  • Quality attributes required by your solution (scalability, security, …)

SysCache

SysCache, and the more recent SysCache2, is a cache provider built on top of the old ASP.NET cache system. It is available as an ASP.NET cache provider (NHibernate.Caches.SysCache.dll).

It is an abstraction over the ASP.NET cache, so it can’t really be used in a non-web application: it works, but Microsoft suggests not using the ASP.NET cache outside of web applications.

The cache space is not configurable, so it is shared between different session factories … really dangerous in a web application that requires isolation between the various users (http://ayende.com/blog/1708/nhibernate-caching-the-secong-level-cache-space-is-shared)

Useful for: small in house projects, better for web projects hosted in IIS
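
As a reference, switching from the default hashtable provider to SysCache is just a matter of pointing the provider_class property to it; a sketch, so double-check the type and assembly names against the NHibernate.Caches package you install:

<property name="cache.provider_class">
   NHibernate.Caches.SysCache.SysCacheProvider, NHibernate.Caches.SysCache
</property>
<property name="cache.use_second_level_cache">true</property>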

NCache

NCache is a distributed in-memory Object cache and a distributed ASP.NET Session State manager product.

It is able to synchronize the cache across multiple servers, so it is also designed for the enterprise. Its main features are:

  • Dynamic clustering & cache configuration for 100% uptime, for a really scalable architecture
  • Cache reliability through data replication across servers
  • InProc/OutProc cache for multiple processes on the same machine
  • An API identical to the ASP.NET Cache

It is available for download and trial here http://www.alachisoft.com/ncache/edition-comparison.html; it is a third party provider and it is not free.

Useful for: medium to big applications that are designed to be scalable

MemCache

MemCache is a famous Linux cache system designed for the enterprise. It is a complex enterprise cache system based on the Linux platform that also provides a cache mechanism for NHibernate.
It is pretty easy to scale on a big server farm because it is designed to do so.
It does not require licensing costs because it is OSS, and it is a well known system with a big community.

The following picture represents the core logic of MemCache:

[Image: MemCache architecture overview]

The only downside is that it requires a medium knowledge of the Linux OS in order to be able to install and configure it.

Useful for: enterprise applications that are designed to be scalable

Velocity, a.k.a. AppFabric

Velocity has now been integrated into AppFabric and it is the cache system implemented by Microsoft for the enterprise. It requires AppFabric and IIS and it can be used locally or with Azure (does it make sense to cache in the cloud??).

  • AppFabric Caching provides local caching, bulk updates, callbacks for updates, etc., which is why it’s exciting compared to something like MemCache, which doesn’t provide these features out of the box
  • For enterprise architectures, really scalable, Microsoft product (there may be a license requirement)

Useful for: enterprise applications that are designed to be scalable

NHibernate cache system. Part 2

In the previous post we saw how the cache system is structured in NHibernate and how it works. We saw that we have different methods to play with the cache (Evict, Clear, Flush …) and that they are all associated with the ISession object, because the level 1 cache is bound to the lifecycle of an ISession object.

In this second article we will see how the second level cache works and how it is associated with the ISessionFactory object, which is in charge of controlling this cache mechanism.

Second Level cache architecture

How does the second level cache work?

[Image: second level cache architecture]

First of all, when an entity is cached in the second level cache, the entity is disassembled into a collection of key/value pairs, like a dictionary, and persisted in the cache repository. This is possible because most of the second level cache providers are able to persist serialized dictionary collections, and at the same time NHibernate does not force you to make your entities serializable (something that, IMHO, should never be done!!).

A second mechanism comes into play when we cache the result of a query (Linq, HQL, ICriteria), because these results can’t be cached using the first level cache (see the previous blog post). When we cache a query result, NHibernate caches only the unique identifiers of the entities involved in the result of the query.

Third, NHibernate has an internal mechanism that allows it to keep track of a timestamp value used when writing to tables or working with sessions. How does it work? The mechanism is pretty simple: it keeps track of when each table was last written to. A series of mechanisms update this timestamp information, and you can find a better explanation on Ayende’s blog: http://ayende.com/blog/3112/nhibernate-and-the-second-level-cache-tips.

Configuration of the second level cache

By default the second level cache is disabled. If you need to use the second level cache you have to let NHibernate know about that. The hibernate.cfg file has a dedicated section of parameters that should be used to enable the second level cache:

<property name="cache.provider_class">
   NHibernate.Cache.HashtableCacheProvider
</property>
<!-- You have to explicitly enable the second level cache -->
<property name="cache.use_second_level_cache">
   true
</property> 

First of all we specify the cache provider we are using; in this case I am using the standard hashtable provider, but I will show you in the next article the real providers you should use. Second, we say that the cache should be enabled; this part is really important because if you do not specify that the cache is enabled, it simply won’t work …

Then you can give the cache a default expiration in seconds:

<!-- cache will expire in 2 minutes -->
<property name="cache.default_expiration">120</property>

If you want to add additional configuration properties, they will be cache provider specific!

Cache by mapping

One of the possible configurations is to enable the cache at the entity level. This means that we are marking our entity as “cachable”.

<class
   name="CachableProduct"
   table="[CachableProduct]"
   dynamic-insert="true"
   dynamic-update="true">
   <cache usage="read-write"/>

In order to do that we have to introduce a new tag, the <cache> tag. In this tag we can specify different types of “usage”:

  • Read-write

    It should be used if you also plan to update the data (not with serializable transactions)

  • Read-only

    Simplest and best performing, for read-only access

  • Nonstrict-read-write

    If you only need to occasionally update the data. You must commit the transaction

  • Transactional

    Not documented/implemented yet, because no .NET cache provider allows a transactional cache. It is implemented in the Java version because J2EE allows a transactional second level cache

Now, if we write a simple test that creates some entities and tries to retrieve them using two different ISessions generated by the same ISessionFactory, we will get the following behavior:

using (var session = factory.OpenSession())
{
   // create the products
    using (var tx = session.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        Console.WriteLine("*** FIRST SESSION ***");
        var expectedProduct1 = session.Get<CachableProduct>(productId);
        Assert.That(expectedProduct1, Is.Not.Null);
        tx.Commit();
        // retrieve the number of hits we did to the 2nd level cache
        Console.WriteLine("Second level hits {0}", 
           factory.Statistics.SecondLevelCacheHitCount);
    }
}
using (var session = factory.OpenSession())
{
    using (var tx = session.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        Console.WriteLine("*** SECOND SESSION ***");
        var expectedProduct1 = session.Get<CachableProduct>(productId);
        Assert.That(expectedProduct1, Is.Not.Null);
        tx.Commit();
        // retrieve the number of hits we did to the 2nd level cache
        Console.WriteLine("Second level hits {0}", 
           factory.Statistics.SecondLevelCacheHitCount);
    }
}

The result will be the following:

[Image: console output showing the second session hitting the second level cache]

As you can see, the second session will access the 2nd level cache using a transaction and will not use the database at all. This has been accomplished just by mapping the entity with the <cache> tag and by using the Get<T> method.

Let’s make everything a little bit more complex. Let’s assume for a second that our object is an aggregate root and it is more complex than the previous one. If we want to also cache a collection of children or a parent reference, we will need to change our mapping in the following way:

<!-- inside the product mapping file -->
<bag name="Attributes" cascade="all" inverse ="true">
  <cache usage="read-write"/>
  <key column="ProductId" />
  <one-to-many class="CacheAttribute"/>
</bag> 
<!-- inside the CacheAttribute mapping file -->
<class
    name="CacheAttribute"
    table="[CacheAttribute]"
    dynamic-insert="true"
    dynamic-update="true">
  <cache usage="read-write"/>
  <!-- omit -->
  <many-to-one 
     class="CachableProduct" 
     name="Product" cascade="all">   
    <column name="ProductId" />
  </many-to-one>
</class>

Now we can execute the following test (I am omitting some parts to save space, I hope you don’t mind …):

using (var session = factory.OpenSession())
{
    using (var tx = session.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        Console.WriteLine("*** FIRST SESSION ***");
        var expectedProduct1 = session.Get<CachableProduct>(productId);
        Assert.That(expectedProduct1, Is.Not.Null);
        Assert.That(expectedProduct1.Attributes, Has.Count.GreaterThan(0));
        tx.Commit();
        Console.WriteLine("Second level hits {0}", 
           factory.Statistics.SecondLevelCacheHitCount);
    }
 
}
using (var session = factory.OpenSession())
{
    using (var tx = session.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        Console.WriteLine("*** SECOND SESSION ***");
        var expectedProduct1 = session.Get<CachableProduct>(productId);
        Assert.That(expectedProduct1, Is.Not.Null);
        Assert.That(expectedProduct1.Attributes, Has.Count.GreaterThan(0));
        tx.Commit();
        Console.WriteLine("Second level hits {0}", 
           factory.Statistics.SecondLevelCacheHitCount);
    }
}

And this is the result from the profiled SQL:

[Image: profiled SQL showing the second session resolving the objects from the cache]

In this case the second ISession is calling the cache 4 times in order to resolve all the objects (2 products x 2 categories).

Cache a query result

Another way to cache our results is by creating a cachable query, which is slightly different from creating a cachable object.

Important note:

In order to cache a query we need to set the query as “cachable” and then set the corresponding entity as “cachable” too. Otherwise NHibernate will cache only the IDs returned by the query and will then always fetch the entities from the database, caching only the query result.
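
Note that, besides marking the query and the entity as cachable, the query cache itself normally has to be switched on in the configuration, next to the properties shown in the previous article (a sketch):

<!-- enable the dedicated query cache region -->
<property name="cache.use_query_cache">true</property>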

To write a cachable query we need to implement an IQuery object in the following way:

using (var session = factory.OpenSession()) {
     using (var tx = session.BeginTransaction(IsolationLevel.ReadCommitted))
     {
        var result = session
             .CreateQuery("from CachableProduct p where p.Name = :name")
             .SetString("name", "PC")
             .SetCacheable(true)
             .List<CachableProduct>();
        tx.Commit();
     }
 }

Now, let’s try to write a unit test for this:

using (var session = factory.OpenSession())
{
    using (var tx = session.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        Console.WriteLine("*** FIRST SESSION ***");
        var result = session
            .CreateQuery("from CachableProduct p where p.Name = :name")
            .SetString("name", "PC")
            .SetCacheable(true)
            .List<CachableProduct>();
        Console.WriteLine("Cached queries {0}", 
             factory.Statistics.QueryCacheHitCount);
        Console.WriteLine("Second level hits {0}", 
             factory.Statistics.SecondLevelCacheHitCount);
        tx.Commit();
    }
}
using (var session = factory.OpenSession())
{
    using (var tx = session.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        Console.WriteLine("*** SECOND SESSION ***");
        var result = session
            .CreateQuery("from CachableProduct p where p.Name = :name")
            .SetString("name", "PC")
            .SetCacheable(true)
            .List<CachableProduct>();
        Console.WriteLine("Cached queries {0}", 
             factory.Statistics.QueryCacheHitCount);
        Console.WriteLine("Second level hits {0}", 
             factory.Statistics.SecondLevelCacheHitCount);
        tx.Commit();
    }
}

And this is the expected result:

[Image: console output showing one cached query hit in the second session]

In this case the statistics tell us that the second session had 1 query result cached and that we hit it once.

Final advice

As you saw, using the 1st and 2nd level cache is a pretty straightforward process, but it requires time and an understanding of the NHibernate cache mechanism. Below is some final advice that you should keep in mind when working with the 2nd level cache:

  • 2nd Level Cache is never aware of external database changes!
  • The default cache implementation is hashtable; you must use a different one in production
  • A wrong implementation of the 2nd level cache may result in unexpected performance degradation (see the hashtable docs)
  • First level cache is shared across same ISession, second level is shared across same ISessionFactory

In the next article we will see what are the available cache providers.

NHibernate cache system. Part 1

In this series of articles I will try to explain how the NHibernate cache system works and how it should be used in order to get the best performance/configuration out of this product.

NHibernate Cache architecture

NHibernate has an internal cache architecture that I would define as absolutely well done. From an architectural point of view it is designed for the enterprise and it is 100% configurable. Consider that it even allows you to create your own custom cache provider!

The following picture shows the cache architecture overview of NHibernate (the version I am talking about is 3.2 GA).

[Image: NHibernate cache architecture overview]

The cache system is composed of two levels: the level 1 cache, which is configured by default when you work with the ISession object, and the level 2 cache, which is disabled by default.

The level 1 cache is provided by the ISession data context and it is tied to the lifecycle of the ISession object; this means that as soon as you destroy (dispose) an ISession object, the level 1 cache is also destroyed and all the corresponding objects are detached from the ISession. This cache system works on a per transaction basis and it is designed to reduce the number of database calls during the lifecycle of an ISession. As an example, you should use this cache if you need to access and modify an object multiple times within a transaction.

The level 2 cache is provided by the ISessionFactory component and it is shared across all the sessions created by the same factory. Its lifecycle corresponds to the lifecycle of the session factory and it provides a more powerful, but also more dangerous, set of features. It allows you to keep objects in cache across multiple transactions and sessions; the objects are available everywhere and not only on the client that is using a specific ISession.

Cache Level 1 mechanism

As soon as you create a new ISession (not an IStatelessSession!!) NHibernate starts to hold in memory, using a specific mechanism, all the objects that are involved with the current session. The methods used by NHibernate to load data into the cache are two: Get<T>(id) and Load<T>(id). This means that if you try to load one or more entities using LinQ, HQL or ICriteria, NHibernate will not put them into the level 1 cache.

Another way to put an object into the lvl1 cache is to use persistence methods like Save, Delete, Update and SaveOrUpdate.

[Image: ISession first level cache holding loaded and dirty entities]

As you can see from the previous picture, the ISession object is able to contain two different categories of entities: the ones that we define as “loaded”, using Get or Load, and the ones that we define as “dirty”, which means that they were somehow modified and associated with the session.

Load and Get, what’s the difference?

A major source of confusion I have personally noticed while working with NHibernate is the incorrect usage of the two methods Get and Load, so let’s see for a moment how they work and when they should or should not be used.

  • Fetch method

    Get: retrieves the entire entity in one SELECT statement and puts the entity in the cache.
    Load: retrieves only the Id of the entity and returns a non-fetched proxy instance; as soon as you “hit” a property, the entity is loaded.

  • How it loads

    Both: verify whether the entity is already in the cache, otherwise they execute a SELECT.

  • If the entity does not exist

    Get: returns NULL.
    Load: THROWS an exception.

I personally prefer Get because it returns NULL instead of throwing a nasty exception, but this is a personal choice; some of you may prefer to use Load because you want to avoid a database call until it is really needed.

Below I wrote a couple of very simple tests to show you how the Get and Load methods work across the same ISession.

Get<T>()

[Test]
[Category("Database")]
public void UsingGetThePersonIsFullyCached()
{
    using (var session = factory.OpenSession())
    {
        using (var tx = session.BeginTransaction(IsolationLevel.ReadCommitted))
        {
            var persons = MockFactory.MakePersons();
            persons.ForEach(p => session.Save(p));
            tx.Commit();
            session.Clear();
            Console.WriteLine("*** FIRST SELECT ***");
            var expectedPerson1 = session.Get<Person>(persons[0].PersonId);
            Assert.That(expectedPerson1, Is.Not.Null);
            Console.WriteLine("*** SECOND SELECT ***");
            var expectedPerson2 = session.Get<Person>(persons[0].PersonId);
            Assert.That(expectedPerson2, Is.Not.Null);
            Assert.That(expectedPerson2.FirstName, Is.Not.EqualTo(string.Empty));
        }
    }
}

In this test I created a list of Persons in one transaction and then I cleared the session in order to be sure that nothing was left in the cache. Then I loaded one of the Person entities using the Get<T> method and I loaded it again with the same method call, in order to verify that the SELECT statement was issued only once.

[Image: SQL log showing a single SELECT for the two Get calls]

As you can see, NHibernate loads the entire entity from the database in the first call, and in the second one it simply loads it again from the level 1 cache. Notice that NHibernate loads the entire entity in the first Get<T> call.

Load<T>()

[Test]
[Category("Database")]
public void UsingLoadThePersonIsPartiallyCached()
{
    using (var session = factory.OpenSession())
    {
        using (var tx = session.BeginTransaction(IsolationLevel.ReadCommitted))
        {
            var persons = MockFactory.MakePersons();
            persons.ForEach(p => session.Save(p));
            tx.Commit();
            session.Clear();
            Console.WriteLine("*** FIRST SELECT ***");
            var expectedPerson1 = session.Load<Person>(persons[0].PersonId);
            Assert.That(expectedPerson1, Is.Not.Null);
            Console.WriteLine("*** SECOND SELECT ***");
            var expectedPerson2 = session.Load<Person>(persons[0].PersonId);
            Assert.That(expectedPerson2, Is.Not.Null);
            Assert.That(expectedPerson2.FirstName, Is.Not.EqualTo(string.Empty));
        }
    }
}

In this second test I am executing exactly the same steps as in the previous one, but this time I am using the Load<T> method, and the result is completely different! Look at the SQL log below:

[Image: SQL log – no SELECT is issued until a property of the loaded proxy is accessed]

Now NHibernate is not loading the entity from the database when Load<T> is called: it loads it only when I try to hit one of the Person properties. If you debug this code you will notice that NHibernate issues the database call at the line Assert.That(expectedPerson2.FirstName, Is.Not.EqualTo(string.Empty)); and not before!
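If you prefer to verify this from code instead of from the SQL log, NHibernate exposes the NHibernateUtil.IsInitialized helper, which tells you whether a proxy has already been fetched. A small sketch, reusing the session and persons of the test above:

var person = session.Load<Person>(persons[0].PersonId);
Assert.That(NHibernateUtil.IsInitialized(person), Is.False); // still an empty proxy

var firstName = person.FirstName;                            // first property hit: the SELECT runs here
Assert.That(NHibernateUtil.IsInitialized(person), Is.True);  // now the entity is fully loaded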

Session maintenance

If you are working with the ISession object in a client application, or if you are keeping it alive in a web application with some questionable technique like storing it inside the HttpContext, you will realize, sooner or later, that sometimes the level 1 cache needs to be cleared.

Now, despite the fact that (based on my personal experience) these methods should rarely be needed, because needing them usually means your data layer is implemented incorrectly, and despite the fact that their behavior may produce unexpected results, NHibernate provides three different methods to clear the content of the level 1 cache.

 

| Method        | What it does |
|---------------|--------------|
| Session.Clear | Removes all the existing objects from the ISession without synchronizing them with the database. |
| Session.Evict | Removes a specific object from the ISession without synchronizing it with the database. |
| Session.Flush | Synchronizes the ISession with the database by executing the pending SQL statements (it does not, by itself, empty the level 1 cache). |
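Just to give you an idea of how the three calls look in code, here is a minimal sketch (Person and someExistingId are placeholders for an entity and an id of your own):

using (var session = factory.OpenSession())
{
    var person = session.Get<Person>(someExistingId);

    // Evict: forget only this instance, the rest of the level 1 cache is untouched
    session.Evict(person);

    // Clear: forget every tracked instance, pending (unflushed) changes are discarded
    session.Clear();

    // Flush: execute the pending SQL statements so the database reflects the session state
    session.Flush();
}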

I will probably write more about these three methods in a future post, but if you need to investigate them further I suggest you carefully read the NHibernate docs available here: http://www.nhforge.org/doc/nh/en/index.html

In the next article we will talk about the level 2 cache.

Sharing assembly version in Visual Studio 2010.

Last week I came across a fancy requirement that forced me to struggle a little bit to find an appropriate solution. Let's say that we have a massive solution file containing something like 100ish projects, and we would like to keep the same assembly version number for all of these projects.

In this article I will show you how the assembly version number works in .NET and what the possible solutions are when using Visual Studio.

Assembly version in .NET

As soon as you add a new project (of any type) in Visual Studio 2010, you get a default template that also contains a file named "AssemblyInfo.cs" if you are working with C#, or "AssemblyInfo.vb" if you are working with VB.NET.

[Image: the default AssemblyInfo.cs file in Solution Explorer]

If we look at the content of this file we will discover that it contains a set of attributes used by MSBuild to stamp the assembly file (.dll or .exe) with the information provided here. In order to change this information we have two options:

  1. Edit the AssemblyInfo.cs file using the Visual Studio editor.
    In this case we are interested in the following attributes, which we will need to change every time we want to increase the assembly version number:

       using System.Reflection;
       using System.Runtime.CompilerServices;
       using System.Runtime.InteropServices;

       [assembly: AssemblyVersion("1.0.0.0")]
       [assembly: AssemblyFileVersion("1.0.0.0")]

  2. Or, we can open the project properties window in Visual Studio using the shortcut ALT+ENTER, or by choosing "Properties" on the project file in Solution Explorer

    [Image: the project properties window]

How does the versioning work?

The first thing that I tried was to understand exactly how this magic number works in .NET.

If you go to the online MSDN article, you will find out that the version number of an assembly is composed of 4 numbers, and each one has a specific meaning:

  1. Major = manually incremented for major releases, such as adding many new features to the solution.

  2. Minor = manually incremented for minor releases, such as introducing small changes to existing features.

  3. Build = typically incremented automatically as part of every build performed on the Build Server. This allows each build to be tracked and tested.

  4. Revision = incremented for QFEs (a.k.a. "hotfixes" or patches) to builds released into the Production environment (PROD). This is set to zero for the initial release of any major/minor version of the solution.
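The same four parts are exposed by the System.Version type, which is a handy way to see which number is which (the values below are purely illustrative):

var version = new Version("2.1.304.2");

Console.WriteLine(version.Major);    // 2   -> major release
Console.WriteLine(version.Minor);    // 1   -> minor release
Console.WriteLine(version.Build);    // 304 -> build produced by the Build Server
Console.WriteLine(version.Revision); // 2   -> QFE/hotfix counter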

Two different assembly version attributes, why?

I noticed that there are two different assembly-level attributes available: AssemblyVersion and AssemblyFileVersion.

AssemblyFileVersion

This attribute should be incremented every time our build server (TFS) runs a build. Based on the previous description you should increase the third number, the build version number. This attribute should be placed in a different .cs file for each project to allow full control of it.

AssemblyVersion

This attribute represents the version of the .NET assembly you are referencing in your projects. If you increase this number on every TFS build, you will run into the problem of updating your binding redirects every time the assembly version is increased.

This number should be increased only when you release a new version of your assembly, and it should be increased following the assembly versioning semantics (major, minor, …).
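To give you an idea, this is how the two attributes typically diverge over time (the numbers are purely illustrative): the referenced version stays stable between releases, while the file version changes on every build.

using System.Reflection;

[assembly: AssemblyVersion("2.1.0.0")]        // bumped only when a new version is actually released
[assembly: AssemblyFileVersion("2.1.304.0")]  // the build server increases the build number on every build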

Control the Versioning in Visual Studio

As I said before, VS allows us to control the version number in different ways, and in my opinion using the properties window is the easiest one. As soon as you change one of the version numbers from the properties window, the AssemblyInfo.cs file is automatically updated as well.

But what happens if we delete the version attributes from the AssemblyInfo.cs file? As expected, VS will create an assembly with version 0.0.0.0, as in the picture below:

[Image: an assembly compiled with version 0.0.0.0]

Note: if we open the Visual Studio properties window for the project and enter the version 1.0.0.1 for both the AssemblyVersion and the AssemblyFileVersion attributes, VS will re-create these two attributes in the AssemblyInfo.cs file.
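If you want to double check what actually ended up in the compiled assembly, you can read both values at runtime with the standard .NET APIs (System.Reflection and System.Diagnostics):

var assembly = Assembly.GetExecutingAssembly();

var assemblyVersion = assembly.GetName().Version;                    // value of AssemblyVersion
var fileVersion = FileVersionInfo.GetVersionInfo(assembly.Location); // value of AssemblyFileVersion

Console.WriteLine(assemblyVersion);           // prints 0.0.0.0 when the attribute has been removed
Console.WriteLine(fileVersion.FileVersion);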

Sharing a common Assembly version on multiple projects

Going back to the request I got: how can we set up a configuration in Visual Studio that allows us to share the same assembly version across multiple projects? A partial solution can be accomplished using shared linked files in Visual Studio.

Ok, first of all, what is a shared linked file? A linked file is a file shortcut that points, from multiple projects, to the same single file instance. A detailed explanation of this mechanism is available on Jeremy Jameson's blog at this page.

Now, this is the solution I have created as an example, where I share an AssemblyVersion.cs file and an AssemblyFileVersion.cs file across the entire Visual Studio solution.

[Image: the example solution with AssemblyVersion.cs and AssemblyFileVersion.cs added as linked files]

Using this approach we have one single place where we can edit the AssemblyFileVersion and the AssemblyVersion attributes. In order to accomplish this solution you need to perform the following steps:

  1. Delete the assembly version and the assembly file version attributes from all the existing AssemblyInfo.cs files
  2. Create in one project (the root project) a file called AssemblyFileVersion.cs containing only the AssemblyFileVersion attribute
  3. Create in one project (the root project) a file called AssemblyVersion.cs containing only the AssemblyVersion attribute (the content of both files is sketched right after this list)
  4. Add these two files as linked files to all the existing projects
  5. Re-build everything
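For reference, this is roughly what the two shared files contain (a minimal sketch, using the same file names as in the steps above):

// AssemblyVersion.cs - linked into every project of the solution
using System.Reflection;
[assembly: AssemblyVersion("1.0.0.0")]

// AssemblyFileVersion.cs - linked into every project of the solution
using System.Reflection;
[assembly: AssemblyFileVersion("1.0.0.0")]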

Final note on Visual Studio properties window

Even though my root project now has two files containing the AssemblyFileVersion and AssemblyVersion attributes, when I open the Visual Studio properties window it still searches for these attributes in the AssemblyInfo.cs file and, since it can't find them anymore, it does not display anything:

[Image: the properties window showing empty assembly version fields]

If you enter a value in these textboxes, Visual Studio will re-create the two attributes in the AssemblyInfo.cs file, without taking into account the two new files we have created, and as soon as you try to compile the project you will receive this nice error:

[Image: the resulting compile error]

So, in order to use this solution you need to keep in mind that you can’t edit the AssemblyFileVersion and the AssemblyVersion attributes from the VS properties window if they are not saved in the AssemblyInfo.cs file!

I believe that MS should change this in the next versions of Visual Studio.


Speaking about WPF in Bermuda.

Last year in Bermuda we had a newborn: a .NET community. The community has been created by some local companies to attract developers, architects and analysts, but also anybody passionate about development.

You can check-out the web site here: http://www.dnug.bm/.

This month I will have the opportunity to present my two books and to talk about WPF/Silverlight.

During this event I will explain the common problems you may encounter when moving from legacy applications to WPF/Silverlight, when you should and should not migrate an application to a new UI technology, and when you should use WPF versus Silverlight. Oh, and I will also give away some free copies of my books plus some additional discounts.

This is the page of the event: http://www.dnug.bm/index.php?option=com_jevents&task=icalrepeat.detail&evid=5&Itemid=58&year=2011&month=07&day=21&title=july-2011-dot-net-user-group-event&uid=a620ff387a7449aaad68443c9780f7c2

If you are planning a vacation around the 21st of July, please come here and join us in this first event about WPF/Silverlight.

Hope to see you there!

PS: I forgot to mention that one of the amazing prizes for the attendees is a 1-year subscription to MSDN.