Month: December 2011

NHibernate cache system. Part 3

In this series of articles we saw how the cache system is implemented in NHibernate and what we can do in order to use it. We also saw that we can choose a cache provider, but we haven't yet looked at which providers we can use.

I have my own opinion about the 2nd level cache providers available for NHibernate 3.2, and I would be more than happy if you shared your experience with them.

Below is a list of the major and most famous 2nd level cache providers I know:


I would personally suggest answering the following questions in order to understand which cache provider you need for your project:

  • The size of your project (small, standard, enterprise)
  • The cache technology you already have in place
  • The quality attributes required by your solution (scalability, security, …)


SysCache, and the more recent SysCache2, are cache providers built on top of the old ASP.NET cache system. They ship as the ASP.NET cache provider (NHibernate.Cache.SysCache.dll).

It is an abstraction over the ASP.NET cache, so it shouldn't be used in a non-web application. It works, but Microsoft suggests not to use the ASP.NET cache outside of web applications.

The cache space is not configurable, so it is the same for different session factories … really dangerous in a web application that requires isolation between the various users.

Useful for: small in-house projects, better for web projects hosted in IIS


NCache is a distributed in-memory object cache and a distributed ASP.NET session state manager product.

It is able to synchronize the cache across multiple servers, so it is also designed for the enterprise.

It provides dynamic clustering and cache configuration for 100% uptime in a truly scalable architecture.

  • Cache reliability through data replication across servers
  • InProc/OutProc cache for multiple processes on the same machine
  • An API identical to the ASP.NET Cache

It is available for download and trial here; it is a third-party provider and it is not free.

Useful for: medium to big applications that are designed to be scalable


MemCache is a famous cache system, born on Linux and designed for the enterprise, that also provides a cache mechanism for NHibernate.
It is pretty easy to scale out on a big server farm because it is designed to do so.
It has no licensing costs because it is OSS, and it is a well-known system with a big community.

The following picture represents the core logic of MemCache:


The only downside is that it requires a medium knowledge of the Linux OS in order to install and configure it.

Useful for: enterprise applications that are designed to be scalable

Velocity, a.k.a. AppFabric

Velocity has now been integrated into AppFabric and it is the cache system implemented by Microsoft for the enterprise. It requires AppFabric and IIS, and it can be used locally or with Azure (does it make sense to cache in the cloud??).

  • AppFabric Caching provides local caching, bulk updates, callbacks for updates, etc., which is why it is exciting compared to something like MemCache, which doesn't provide these features out of the box.
  • For enterprise architectures; really scalable; a Microsoft product (there may be a license requirement)

Useful for: enterprise applications that are designed to be scalable

NHibernate cache system. Part 2

In the previous post we saw how the cache system is structured in NHibernate and how it works. We saw that we have different methods to play with the cache (Evict, Clear, Flush, …), all exposed by the ISession object, because the level 1 cache is tied to the lifecycle of an ISession object.

In this second article we will see how the second level cache works and how it is associated with the ISessionFactory object that is in charge of controlling this cache mechanism.

Second Level cache architecture

How does the second level cache work?


First of all, when an entity is cached in the second level cache, the entity is disassembled into a collection of key/value pairs, like a dictionary, and persisted in the cache repository. This works because most second level cache providers are able to persist serialized dictionary collections, and at the same time NHibernate does not force you to make your entities serializable (something that, IMHO, should never be done!!).

A second mechanism kicks in when we cache the result of a query (LINQ, HQL, ICriteria), because these results can't be cached using the first level cache (see the previous blog post). When we cache a query result, NHibernate caches only the unique identifiers of the entities involved in the result of the query.

Third, NHibernate has an internal mechanism that allows it to keep track of a timestamp value used when writing tables or working with sessions. How does it work? The mechanism is pretty simple: it keeps track of when each table was last written to. A series of mechanisms update this timestamp information, and you can find a better explanation on Ayende's blog:

Configuration of the second level cache

By default the second level cache is disabled. If you need to use the second level cache you have to let NHibernate know about it. The hibernate.cfg file has a dedicated section of parameters used to enable the second level cache:

First of all we specify the cache provider we are using; in this case I am using the standard hashtable provider, but I will show you in the next article which real providers you should use. Second, we say that the cache should be enabled; this part is really important because if you do not specify that the cache is enabled, it simply won't work …

Then you can give the cache a default expiration, in seconds:

If you want to add additional configuration properties, they will be cache provider specific!
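Putting these settings together, a minimal hibernate.cfg.xml fragment could look like the sketch below (property names as documented by NHibernate; the hashtable provider shown here is the default, testing-only one):

```xml
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <!-- which provider implements the 2nd level cache (hashtable is for testing only) -->
    <property name="cache.provider_class">NHibernate.Cache.HashtableCacheProvider, NHibernate</property>
    <!-- the cache must be explicitly enabled, otherwise it simply won't work -->
    <property name="cache.use_second_level_cache">true</property>
    <!-- enable this too if you plan to cache query results -->
    <property name="cache.use_query_cache">true</property>
    <!-- default expiration, in seconds -->
    <property name="cache.default_expiration">120</property>
  </session-factory>
</hibernate-configuration>
```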

Cache by mapping

One of the possible configurations is to enable the cache at the entity level. This means that we are marking our entity as "cacheable".

In order to do that we have to introduce a new tag, the <cache> tag. In this tag we can specify different types of "usage":

  • Read-write

    It should be used if you also plan to update the data (not with serializable transactions)

  • Read-only

    Simplest and best performing, for read only access

  • Nonstrict-read-write

    If you need to update the data only occasionally. You must commit the transaction

  • Transactional

    Not documented/implemented yet, because no cache provider supports a transactional cache. It is implemented in the Java version because J2EE allows a transactional second level cache
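As a sketch, marking a hypothetical Product entity as cacheable in its hbm.xml mapping could look like this (the <cache> tag must appear right after the opening <class> tag, and usage takes one of the values listed above):

```xml
<class name="Product" table="Product">
  <!-- mark the entity as cacheable -->
  <cache usage="read-write" />
  <id name="Id">
    <generator class="native" />
  </id>
  <property name="Name" />
</class>
```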

Now, if we write a simple test that creates some entities and tries to retrieve them using two different ISession instances generated by the same ISessionFactory, we will get the following behavior:

The result will be the following:


As you can see, the second session accesses the 2nd level cache within a transaction and does not use the database at all. This has been accomplished just by mapping the entity with the <cache> tag and by using the Get<T> method.

Let’s make everything a little bit more complex. Let’s assume for a second that our object is an aggregate root and is more complex than the previous one. If we also want to cache a collection of children or a parent reference, we will need to change our mapping in the following way:
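For instance, assuming a hypothetical Product aggregate with a Categories collection, the mapping would need a <cache> tag on the collection as well (and the child class must be mapped as cacheable too), roughly like this:

```xml
<class name="Product" table="Product">
  <cache usage="read-write" />
  <id name="Id">
    <generator class="native" />
  </id>
  <property name="Name" />
  <!-- the collection gets its own <cache> tag -->
  <set name="Categories" table="ProductCategory">
    <cache usage="read-write" />
    <key column="ProductId" />
    <many-to-many class="Category" column="CategoryId" />
  </set>
</class>
```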

Now we can execute the following test (I am omitting some parts to save space, I hope you don't mind …)

And this is the result from the profiled SQL:


In this case the second ISession is calling the cache 4 times in order to resolve all the objects (2 products x 2 categories).

Cache a query result

Another way to cache our results is by creating a cacheable query, which is slightly different from creating a cacheable object.

Important note:

In order to cache a query we need to mark the query as "cacheable" and mark the corresponding entity as "cacheable" too. Otherwise NHibernate will cache only the query result (the IDs of the entities) and will still fetch each entity from the database.

To write a cacheable query we need to configure an IQuery object in the following way:
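A minimal sketch (the entity name and HQL are hypothetical; SetCacheable is the key call on IQuery):

```csharp
// the query must be marked as cacheable, and the Person entity
// must also be mapped with the <cache> tag (see the note above)
IList<Person> people = session
    .CreateQuery("from Person p where p.LastName = :lastName")
    .SetString("lastName", "Smith")
    .SetCacheable(true)          // cache the query result (the entity IDs)
    .SetCacheRegion("people")    // optional: use a dedicated cache region
    .List<Person>();
```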

Now, let’s try to write a unit test for this:

And this is the expected result:


In this case the cache is telling us that the second session has 1 query result cached and that we called it once.

Final advice

As you saw, using the 1st and 2nd level cache is a pretty straightforward process, but it requires time and an understanding of the NHibernate cache mechanism. Below is some final advice that you should keep in mind when working with the 2nd level cache:

  • 2nd Level Cache is never aware of external database changes!
  • The default cache provider is the hashtable one; you must use a different one in production
  • A wrong implementation of the 2nd level cache may result in unexpected performance degradation (see the hashtable provider docs)
  • First level cache is shared across same ISession, second level is shared across same ISessionFactory

In the next article we will see what are the available cache providers.

NHibernate cache system. Part 1

In this series of articles I will try to explain how the NHibernate cache system works and how it should be used in order to get the best performance/configuration out of this product.

NHibernate Cache architecture

NHibernate has an internal cache architecture that I would define as absolutely well done. From an architectural point of view, it is designed for the enterprise and it is 100% configurable. Consider that it even allows you to create your own custom cache provider!

The following picture shows the cache architecture overview of NHibernate (the version I am talking about is 3.2 GA).


The cache system is composed of two levels: the level 1 cache, which is configured by default when you work with the ISession object, and the level 2 cache, which is disabled by default.

The level 1 cache is provided by the ISession data context and is bound to the lifecycle of the ISession object; this means that as soon as you destroy (dispose) an ISession object, the level 1 cache is destroyed too and all the corresponding objects are detached from the ISession. This cache works on a per-transaction basis and it is designed to reduce the number of database calls during the lifecycle of an ISession. As an example, you should use this cache when you need to access and modify an object multiple times within a transaction.

The level 2 cache is provided by the ISessionFactory component and is shared across all the sessions created by the same factory. Its lifecycle corresponds to the lifecycle of the session factory, and it provides a more powerful but also more dangerous set of features. It allows you to keep objects in cache across multiple transactions and sessions; the objects are available everywhere, not only on the client that is using a specific ISession.

Cache Level 1 mechanism

As soon as you create a new ISession (not an IStatelessSession!!) NHibernate starts to hold in memory, using a specific mechanism, all the objects involved with the current session. NHibernate loads data into this cache through two methods: Get<T>(id) and Load<T>(id). This means that if you load one or more entities using LINQ, HQL or ICriteria, NHibernate will not put them into the level 1 cache.

Another way to put an object into the lvl1 cache is to use persistence methods like Save, Delete, Update and SaveOrUpdate.
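A small sketch of the behavior described above (Person and personId are hypothetical; sessionFactory is an already-built ISessionFactory):

```csharp
using (var session = sessionFactory.OpenSession())
{
    // first call: a SELECT is issued and the entity enters the level 1 cache
    var first = session.Get<Person>(personId);

    // second call: no SQL at all, the entity comes from the level 1 cache
    var second = session.Get<Person>(personId);

    // entities loaded via a query are not resolved through the level 1 cache,
    // so a statement like this always hits the database
    var all = session.CreateQuery("from Person").List<Person>();

    // persistence methods also attach ("dirty") objects to the session
    session.Save(new Person { FirstName = "John" });
}
```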


As you can see from the previous picture, the ISession object can contain two different categories of entities: the ones we define as "loaded", retrieved using Get or Load, and the ones we define as "dirty", meaning that they were somehow modified and associated with the session.

Load and Get, what’s the difference?

A major source of confusion I have personally noticed while working with NHibernate is the incorrect usage of the two methods Get and Load, so let's see for a moment how they work and when they should or should not be used.




Fetch method

  • Get: retrieves the entire entity in one SELECT statement and puts the entity in the cache.
  • Load: retrieves only the ID of the entity and returns a non-fetched proxy instance of the entity. As soon as you "hit" a property, the entity is loaded.

How it loads

  • Get: verifies whether the entity is in the cache, otherwise it tries to execute a SELECT.
  • Load: verifies whether the entity is in the cache, otherwise it tries to execute a SELECT.

Entity not available

  • Get: if the entity does not exist, it returns NULL.
  • Load: if the entity does not exist, it THROWS an exception.


I personally prefer Get because it returns NULL instead of throwing a nasty exception, but this is a personal choice; some of you may prefer Load because you want to avoid a database call until it is really needed.

Below are a couple of very simple tests I wrote to show you how the Get and Load methods work across the same ISession.
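The two tests are roughly shaped like this sketch (NUnit-style; Person, personId and sessionFactory are hypothetical placeholders):

```csharp
[Test]
public void Get_Issues_The_Select_Only_Once()
{
    using (var session = sessionFactory.OpenSession())
    {
        var person1 = session.Get<Person>(personId); // SELECT issued here
        var person2 = session.Get<Person>(personId); // served by the level 1 cache
        Assert.That(person2, Is.SameAs(person1));
    }
}

[Test]
public void Load_Returns_A_Lazy_Proxy()
{
    using (var session = sessionFactory.OpenSession())
    {
        var person = session.Load<Person>(personId); // no SQL yet, just a proxy
        // the SELECT is issued here, when a property is first accessed
        Assert.That(person.FirstName, Is.Not.EqualTo(string.Empty));
    }
}
```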


In this test I created a list of Person entities in one transaction and then cleared the session to be sure that nothing was left in the cache. Then I loaded one of the Person entities using the Get<T> method and loaded it again with the same call, in order to verify that the SELECT statement was issued only once.


As you can see, NHibernate loads the entire entity from the database in the first Get<T> call, and in the second call it simply reads it again from the level 1 cache.


In this second test I execute the same exact steps as the previous one, but this time using the Load<T> method, and the result is completely different! Look at the SQL log below:


Now NHibernate is not loading the entity from the database at first; it loads it only on the second call, when I hit one of the Person properties. If you debug this code you will notice that NHibernate issues the database call at the line Assert.That(expectedPerson2.FirstName, Is.Not.EqualTo(string.Empty)); and not before!

Session maintenance

If you are working with the ISession object in a client application, or if you are keeping it alive in a web application using some strange technique like saving it inside the HttpContext, you will realize, sooner or later, that sometimes the level 1 cache needs to be cleared.

Now, despite the fact that these methods (based on my personal experience) should never be used, because needing them usually means you are implementing your data layer wrongly, and despite the fact that their behavior may result in something unexpected, NHibernate provides three different methods to clear the level 1 cache content.








  • Clear() – removes all the existing objects from the ISession without syncing them with the database
  • Evict(object) – removes a specific object from the ISession without syncing it with the database
  • Flush() – removes all the existing objects from the session by syncing them with the database
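In code, the three calls look like this (a hedged sketch with a hypothetical Person entity):

```csharp
using (var session = sessionFactory.OpenSession())
{
    var person = session.Get<Person>(personId);
    person.FirstName = "Changed";

    session.Evict(person); // detach this single object; the change is NOT persisted
    session.Clear();       // detach every object; pending changes are NOT persisted
    session.Flush();       // synchronize all pending changes with the database
}
```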

I will probably write more about these three methods in a future post, but if you need to investigate them further, I suggest you carefully read the NHibernate docs available here:

In the next article we will talk about the level 2 cache.

Sharing assembly version in Visual Studio 2010.

Last week I came across a fancy requirement that forced me to struggle a little bit in order to find an appropriate solution. Let's say that we have a massive solution file, containing something like 100ish projects, and we would like to keep the same assembly version number for all of them.

In this article I will show you how the assembly version number works in .NET and what the possible solutions are, using Visual Studio.

Assembly version in .NET

As soon as you add a new project (of any type) in Visual Studio 2010, you get a default template that also contains a file "AssemblyInfo.cs" if you are working with C#, or "AssemblyInfo.vb" if you are working with VB.NET.


If we look at the content of this file we discover that it contains a set of attributes used by MSBuild to stamp the assembly file (.dll or .exe) with the information provided there. In order to change this information we have two options:

  1. Edit the AssemblyInfo.cs using the Visual Studio editor.
    In this case we are interested in the following attributes, that we will need to change every time we want to increase the assembly version number:

  2. Or, we can open the project properties window in Visual Studio using the shortcut ALT+ENTER, or by choosing "Properties" on a project file in the Solution Explorer
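For reference, these are the two version attributes as they appear in a default AssemblyInfo.cs:

```csharp
using System.Reflection;

// version used by the CLR when resolving references to this assembly
[assembly: AssemblyVersion("1.0.0.0")]

// version stamped on the physical file (visible in the file properties dialog)
[assembly: AssemblyFileVersion("1.0.0.0")]
```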


How does the versioning work?

The first thing that I tried was to understand exactly how this magic number works in .NET.

If you go to the online MSDN article, you will find that the version number of an assembly is composed of 4 numbers, each with a specific meaning:

  • Major = manually incremented for major releases, such as adding many new features to the solution.
  • Minor = manually incremented for minor releases, such as introducing small changes to existing features.
  • Build = typically incremented automatically as part of every build performed on the build server. This allows each build to be tracked and tested.
  • Revision = incremented for QFEs (a.k.a. "hotfixes" or patches) to builds released into the production environment (PROD). This is set to zero for the initial release of any major/minor version of the solution.

Two different assembly version attributes, why?

I noticed that the assembly-level attributes expose two different version values, AssemblyVersion and AssemblyFileVersion.


AssemblyFileVersion: this attribute should be incremented every time our build server (TFS) runs a build. Based on the previous description, you should increase the third number, the build number. This attribute should be placed in a different .cs file for each project to allow full control over it.


AssemblyVersion: this attribute represents the version of the .NET assembly you are referencing in your projects. If you increased this number in every TFS build, you would run into the problem of changing your binding redirects every time the assembly version increases.

This number should be increased only when you release a new version of your assembly, and it should be increased following the assembly versioning terminology (major, minor, …)

Control the Versioning in Visual Studio

As I said before, VS allows us to control the version number in different ways, and in my opinion the properties window is the easiest one. As soon as you change one of the version numbers from the properties window, the AssemblyInfo.cs file is automatically changed as well.

But what happens if we delete the version attributes from the AssemblyInfo file? As expected, VS will create an assembly with a version like the picture below:


Note: if we open the Visual Studio properties window for the project and type a version for both the AssemblyVersion and the AssemblyFileVersion attributes, VS will re-create these two attributes in the AssemblyInfo.cs file.

Sharing a common Assembly version on multiple projects

Going back to the request I got: how can we set up a configuration in Visual Studio that allows multiple projects to share the same assembly version? A partial solution can be accomplished using linked files in Visual Studio.

Ok, first of all, what's a shared linked file? A linked file is a file shortcut that lets multiple projects point to the same single file instance. A detailed explanation of this mechanism is available on Jeremy Jameson's blog at this page.

Now, this is the solution I have created as an example where I share an AssemblyVersion.cs file and an AssemblyFileVersion.cs file to the entire Visual Studio solution.


Using this approach we have one single place where we can edit the AssemblyFileVersion and the AssemblyVersion attributes. In order to accomplish this solution you need to perform the following steps:

  1. Delete the assembly version and the assembly file version attributes for all the existing AssemblyInfo.cs files
  2. Create in one project (the root project) a file called AssemblyFileVersion.cs containing only the attribute AssemblyFileVersion
  3. Create in one project (the root project) a file called AssemblyVersion.cs containing only the attribute AssemblyVersion
  4. Add as linked files these two files to all the existing projects
  5. Re-Build everything
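As a sketch, the two shared files from steps 2 and 3 contain nothing but the relevant attribute:

```csharp
// AssemblyVersion.cs — physically lives in the root project only
using System.Reflection;

[assembly: AssemblyVersion("1.0.0.0")]
```

```csharp
// AssemblyFileVersion.cs — also in the root project, linked everywhere else
using System.Reflection;

[assembly: AssemblyFileVersion("1.0.0.0")]
```

Step 4 is done in Visual Studio via Add > Existing Item, then choosing "Add As Link" from the Add button's drop-down, which creates a <Compile> element with a <Link> child in the target .csproj.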

Final note on Visual Studio properties window

Even though my root project now has two files with the AssemblyFileVersion and AssemblyVersion attributes, when I open the Visual Studio properties window it searches for these attributes in the AssemblyInfo.cs file and, clearly, it can't find them anymore, so it does not display anything:


If you type a value into these textboxes, Visual Studio will re-create the two attributes in the AssemblyInfo.cs file, ignoring the two new files we created, and as soon as you try to compile the project you will receive this nice error:


So, in order to use this solution you need to keep in mind that you can't edit the AssemblyFileVersion and AssemblyVersion attributes from the VS properties window if they are not saved in the AssemblyInfo.cs file!

I believe that MS should change this in the next versions of Visual Studio.
