Building a Generic Repository

NOTE: This post is part of a series of posts for defining a repository. Please follow the table of contents for other related posts.

Table Of Contents

  1. Setting up the Solution

Setting up a basic test project

Okay, we have our basic interfaces defined. Now would be a good time to set up our test project.

In our previous post we added a test project to our solution – Atl.Repository.Test – and also installed the following two packages –

PM> install-package xunit
PM> install-package xunit.runner.visualstudio

Yup, we are going to use xUnit to run our test cases.

So far we don't have any concrete domain implemented to write test cases against, so we are going to do that first. We will use a very simple domain entity with one additional property.

We will try with complex domains later on. But for now let’s go with the easy one.

Our domain will be Organization and it looks like this –

namespace Atl.Repository.Domains.Organizations
{
    public class Organization : IDomain
    {
        public string Name { get; set; }

        #region IDomain Implementation

        public int Id { get; set; }
        public bool IsActive { get; set; }
        public bool IsDeleted { get; set; }
        public bool IsLocked { get; set; }
        public bool IsArchived { get; set; }
        public bool IsSuspended { get; set; }

        #endregion
    }
}

And yeah, it is as simple as it can get, with only one additional property – Name.
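For reference, the IDomain contract from the previous post can be inferred from the properties inside the #region above; a sketch (a reconstruction – your actual file may differ in namespace or members):

```csharp
namespace Atl.Repository.Domains
{
    // Base contract every domain entity implements
    // (inferred from the Organization class above).
    public interface IDomain
    {
        int Id { get; set; }
        bool IsActive { get; set; }
        bool IsDeleted { get; set; }
        bool IsLocked { get; set; }
        bool IsArchived { get; set; }
        bool IsSuspended { get; set; }
    }
}
```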

Okay, back to the test project – we are going to write some test cases for CRUD operations on Organization.

Adding Populator

The purpose of a populator is to provide us with test domains, models, or simply POCO objects with predictable and human-readable random values.

This step might seem unnecessary, but it is going to help us organize and test our repository cleanly. For now we only have one domain to test, but later on we will have hundreds of domains, or maybe combinations of domains, to test. We are going to set up populators so that we always get a predictable flow of test domains when we need them.

I am going to use Bogus for faking the domains, so let’s install Bogus –

PM> install-package bogus

Bogus is a simple and sane fake data generator for C# and .NET. If you are interested, you can get to know it here.

Even though we need a populator for each domain, we are not going to write each one from scratch. Instead, we are going to use a hierarchical populator structure – a composition of both the Strategy and Decorator design patterns.

Read more about Strategy Pattern and Decorator Pattern.

First, we define our populator interface –

namespace Atl.Repository.Test.Populators.Contracts
{
    public interface IPopulator<T> where T : IDomain
    {
        T Populate();
        T Populate(T obj);
    }
}

So, our populator comes with two methods to populate an object. Now we add the base populator from which all other populators will be derived –

    public class BasePopulator<T> : IPopulator<T> where T : IDomain
    {
        public virtual T Populate()
        {
            var obj = Activator.CreateInstance<T>();
            return Populate(obj);
        }
        
        public virtual T Populate(T obj)
        {
            //populate common properties
            obj.IsActive = true;
            obj.IsDeleted = false;
            obj.IsArchived = false;
            obj.IsLocked = false;
            obj.IsSuspended = false;

            return obj;
        }
    }

Finally we add the customized populator that only deals with Organization –

    public class OrganizationPopulator : BasePopulator<Organization>
    {
        public override Organization Populate()
        {
            var fakeOrg = new Faker<Organization>()
                .RuleFor(x => x.Name, f => f.PickRandom(new[] { "Apple", "Microsoft", "Atlassian" }));

            return base.Populate(fakeOrg.Generate());
        }

        public override Organization Populate(Organization obj)
        {
            //populate base properties
            return base.Populate(obj);
        }
    }

Basically, what we did here is –

  1. Overrode the Populate method to set our own value for the Name property.
  2. Told Bogus to create a Faker<Organization> that builds an Organization object with any of these names – “Apple”, “Microsoft”, “Atlassian”.
  3. Created the faked object and passed it to the base populator, which populates the common properties and returns the object.

In the end, we have a randomly generated object, but with a predictable name. That is the purpose of a populator.

Testing the Populator

Now let's test our newly created populator to make sure it generates the values we expect. We are going to add a test method to check that the generated name falls within our defined range, i.e. “Apple”, “Microsoft”, “Atlassian” –

xUnit has a great option to inject classes at test time. I am going to use that feature to inject the populator. You can read more about ClassFixture here.

    public class WriteRepositoryTest : IClassFixture<OrganizationPopulator>
    {
        private readonly OrganizationPopulator _organizationPopulator;

        public WriteRepositoryTest(OrganizationPopulator organizationPopulator)
        {
            _organizationPopulator = organizationPopulator;
        }

        [Fact]
        public void GenerateOrganization()
        {
            var org = _organizationPopulator.Populate();

            Assert.True(new string[] {"Apple", "Microsoft", "Atlassian"}.Contains(org.Name), "Didn't generate expected organization.");
        }

        [Fact]
        public void CreateSimpleObject()
        {

        }
    }

Now we run the test using the integrated test explorer and Voila! Passed.


By this time the test project contains the populator contracts, the base populator, and the organization populator (the test class not shown).

(to be continued….)

Posted in ADO.Net Entity Framework, C# .Net, Uncategorized | 1 Comment

Building a Generic Repository

NOTE: This post is part of a series of posts for defining a repository. Please follow the table of contents for other related posts.

Table Of Contents

  1. Setting up the Solution

Setting up the Solution

Hi, it has been looooong since my last post. I am going to start a new series of articles, and today is the first – to build a generic repository that can work with any kind of domain, as opposed to an individual repository per domain. Let's build the repository.

  1. I usually start with a blank solution; let's name it Atl.GenericRepository.
  2. Add 2 projects:
    1. A class library project containing the repository. Let's name it Atl.Repository.
    2. A unit test project named Atl.Repository.Test.
  3. I am going to use xUnit for unit testing, so let's install it right away. Fire up the NuGet package manager console and install the following 2 packages, one for xUnit and the other to enable running xUnit test cases from Visual Studio –
    PM> install-package xunit
    PM> install-package xunit.runner.visualstudio
  4. At the end you will have a solution with two projects, one of them set up to run xUnit tests.

Defining the Interfaces for the Repository

Now, let us start the interesting part.

Before we begin defining our interfaces, we need to make sure our definition of the repository contains all the basic functionality needed and, in addition, supports the following things –

  1. The repository should support any pattern used to design the business layer that communicates with it, and thus handle read/write commands sequentially or non-sequentially. One particular example: the repository could be used behind a Service Layer pattern while, at the same time, some other code base uses the CQRS pattern.
  2. While writing to a repository requires an atomic action, reading from a repository does not necessarily need one. Thus we can increase system performance significantly by separating read-only and write-only repositories.
  3. To create read-only and write-only repositories, we need to split the responsibilities of a whole repository into 3 groups –
    1. Responsibilities while reading contents
    2. Responsibilities while writing/saving/updating contents
    3. Responsibilities while deleting contents

Keeping all these in mind we can come up with the following structure and inheritance of interfaces for the repository.

  1. The root is IRepository; its generic template parameter is IDomain – a signature of the actual domain for now. Later we will add more properties to the IDomain object.
  2. The generic implementation of IRepository is going to be IGenericRepository<T> where T : IDomain.
  3. Three individual interfaces inherit IGenericRepository<T> to provide a robust implementation while clustering the code into familiar groups. For now, we will go with 3 interfaces –
    1. IWriteRepository
    2. IReadRepository
    3. IDeleteRepository
The interface definitions of the repository will look like this –

IRepository

public interface IRepository { }

IGenericRepository

public interface IGenericRepository<T> : IRepository where T : IDomain { }

And the other three repositories –

public interface IReadRepository<T> : IGenericRepository<T> where T : IDomain
{
    T GetById(int id);
    IEnumerable<T> GetAll();
}

public interface IWriteRepository<T> : IGenericRepository<T> where T : IDomain
{
    T Add(T obj);
    T Update(T obj);
}

public interface IDeleteRepository<T> : IGenericRepository<T> where T : IDomain
{
    //the deletion methods
    void Delete(int id);
    void Delete(T obj);
}

Now let's define the concrete class Repository –

public partial class Repository<T> : IGenericRepository<T> where T : IDomain
{
    private readonly DbContext _dbContext;

    public Repository()
    {

    }

    public Repository(DbContext context)
    {
        _dbContext = context;
    }
}
 
public partial class Repository<T> : IReadRepository<T> where T : IDomain
{
    public T GetById(int id)
    {
        throw new NotImplementedException();
    }

    public IEnumerable<T> GetAll()
    {
        throw new NotImplementedException();
    }
}

public partial class Repository<T> : IWriteRepository<T> where T : IDomain
{
    public T Add(T obj)
    {
        throw new NotImplementedException();
    }

    public T Update(T obj)
    {
        throw new NotImplementedException();
    }
}
public partial class Repository<T> : IDeleteRepository<T> where T : IDomain
{
    public void Delete(int id)
    {
        throw new NotImplementedException();
    }

    public void Delete(T obj)
    {
        throw new NotImplementedException();
    }
}

The reason for using partial classes for the individual repository types is to organize the repository methods separately; otherwise the class would grow into a mammoth. Each of the methods defined here will gain many more siblings along the way.
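As a sketch of what the split buys us, a consumer that only needs reads can depend on the narrower interface and never even see Add/Update/Delete (the service and method names here are illustrative):

```csharp
// Hypothetical read-only consumer: it can be handed the same Repository<T>
// instance, but the compiler only lets it call the read methods.
public class OrganizationQueryService
{
    private readonly IReadRepository<Organization> _repository;

    public OrganizationQueryService(IReadRepository<Organization> repository)
    {
        _repository = repository;
    }

    public Organization FindById(int id) => _repository.GetById(id);
}
```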

At this point your solution should contain the Atl.Repository project with the repository interfaces and the partial Repository classes, alongside the Atl.Repository.Test project.

Note: Please ignore DatabaseContext/IDatabaseContext, we will talk about it later.

Next Post: Let’s set up the test project

Posted in ADO.Net Entity Framework, C# .Net | Tagged , , , , | 1 Comment

Castle Windsor, Service Locator and a possible memory leak !!!

In recent days, in one of my projects, we found an innocent-looking bug that was a little too overweight – sometimes weighing more than 6GB in an 8GB web server. Well, a memory leak that caused the application pool to crash by exhausting the system memory. So I thought, why not share it here so that anyone else facing the same problem can get some insight. Here goes the plot –

The main goal of the project was to build a modular framework for building robust web applications easily. For that reason the most obvious choice was to use the Inversion of Control pattern.

If you are already here, then it is quite likely you are familiar with the IoC (Inversion of Control) pattern already; but if you are not, here is a good article for you.

As you all know, Castle Windsor is a widely used service locator library, and we used it for our IoC implementation. Therefore, we registered our services like this –

_container.AddComponentLifeStyle<IDocumentService, DocumentService>(LifestyleType.Transient);

If you are new to castle windsor and the life cycle of the items it resolves, grab its details from here.

In short, LifestyleType.Transient means that each time you ask Castle Windsor to resolve or give you an instance of an interface, it will create a new one. But the default implementation of Castle Windsor with the transient life cycle does the following –

  • It creates a new instance each time one is asked for
  • Keeps a reference to the instance created, so that it can refer to it later
  • Assumes that you will release the instance manually – again, remember it: manually

The first two points are not a problem, but the third one is a possible leak. If for any reason you do not release the instance manually, it is a never-ending memory leak: the default CLR garbage collector will never clear it, since Castle Windsor is holding a reference, and Castle will never clear it, because it thinks you will clear it yourself. That is exactly what we did, and well, it took only 2 hours to consume all the server memory (6GB+) and crash it.
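For completeness, the manual release pattern Windsor expects with a transient lifestyle looks roughly like this (a sketch – the Process method on IDocumentService is hypothetical):

```csharp
var documentService = _container.Resolve<IDocumentService>();
try
{
    documentService.Process(); // use the transient instance
}
finally
{
    // Without this call, Windsor keeps its internal reference forever
    // and the GC can never collect the instance.
    _container.Release(documentService);
}
```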

If you are interested in the possible memory management scenarios, see this article here; I realized our problem while reading it.

Well, now comes the big question – what's the easiest solution?

Well, you can release manually every instance that you resolved using Castle Windsor, or you can grab the solution mentioned here.

I will save you time. In the Castle service locator, where you are creating the kernel, just add the following line –

_container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy();

Basically, it prevents Castle Windsor from keeping any references to resolved instances, so when you are done with your code and the object needs releasing, the default GC collects it and the memory is freed. That removed the memory leak problem. And you know what, now the memory consumption never goes over 600MB. 🙂

It is quite common to use NHibernate with Castle Windsor and to suspect NHibernate of causing the leak. Well, don't be harsh on NHibernate; it's Castle and your unreleased instances that are causing the leak. 😀

Posted in C# .Net, Castle, Native and Third Party Frameworks for .NET | Tagged , , , , , | Leave a comment

Managed Extensibility Framework

Introduction

Before I begin, let's discuss a common development scenario and how we might deal with it –

Suppose we have a requirement as follows (this was a real scenario for me once) –

  1. The system will collect tabular data from flat files, like plain text (.txt).
  2. The task is to collect column information and show the list of columns in a Grid.

Now, let's say we have started our development process, completed it, and delivered. Your client is happy and so are you.

Now comes the tricky part. A few days later your happy client asks –

 could you please add an option so that we can also parse Excel WorkSheets, such as xls, xlsx, xlsm, xlsb files or may be delimited CSV files?

If you never anticipated this would happen, I assume you didn't keep an option to add a parser later. Now what should you do? You will open the source again, modify it, and then build and deliver it again. For a small project like this it is merely troublesome; there are other projects with millions of lines of code. What will you do with them? Open the source again and rebuild?

Oh Boy, ain’t that tiresome!!!

Let's talk about another approach –

Instead of hard-coding the parser into the application, we could have done the following –

  1. Create a Plugin directory and keep a parser there as a portable assembly (.dll) which exposes its methods through a generic interface.
  2. When executed, the program will search for available plugins in that folder and see that it has a text (.txt) file parser.
  3. The program will then issue commands or call methods based on the exposed interface of the parser.

Now, in this situation, even if my client wants more parsers, we can just build a new one implementing the base interface, compile it as a class library, and finally just put it in the Plugin folder – the parser will work out of the box.

Now, I think you have a rough idea of what I am talking about. So let's just put it in words –

Using the Managed Extensibility Framework (a.k.a. MEF) we can do the above-mentioned thing in minutes. In other words, we can inject business logic right into the application without even needing to rebuild it.

Before we start coding, you might want to know where to get it. If you are using .NET Framework 4.0 you already have it; for those using 3.5, you can get the source from http://mef.codeplex.com/ and add a reference to the sources to get the feature set. I haven't tried other versions of the .NET Framework, but if you still need to know, post me a comment and I will dig a little deeper to get you the info.

 

The Solution

We should create and organize our solution first.

Here is the basic process that I followed for this session –

  • Open MS Visual Studio (I am using 2010 Ultimate and will use C# + .NET Framework 4.0 for this project) and create a Class Library project. I will name this project MEFIntro.Lib.
      I use “.”-separated project names to keep the names meaningful and easy to manage. It's not a restriction.
  • Remove the default class file “Class1.cs” from the project, as we will not be needing it; we will create our classes ourselves.
  • So far we have created the library project. We also need an executable project that we can use to show the output of the library. So let's add another project named MEFIntro.Desktop, a Windows Forms Application. I also renamed the form to “MainForm“.
  • Add a reference to MEFIntro.Lib from MEFIntro.Desktop.
  • Now we will need one more project, which we will use to create extensions. Let's name it MEFIntro.Lib.Ex.
  • Add a reference to the MEF library (System.ComponentModel.Composition) to both MEFIntro.Lib and MEFIntro.Lib.Ex.
  • Our final solution now has all the projects we need for this tutorial. If more are needed, we will add them later.

 

The Extensible Core

Lets create the core section of our MEF Project.

By the way, I have downloaded and installed MS Visual Studio 11 Beta and am loving it very much. So, from now on, all other tutorials and blogs will be based on MS Visual Studio 11 Beta. Hope you will like the IDE too; it's loaded with cool stuff….. 🙂

The core engine of our project will be responsible for the following tasks –

  1. Load all plugins and extensions that we will create and keep as plug & play components.
  2. Expose methods that our library will support.
  3. Provide Options and Interfaces for future plugin and extension development.

Okay, enough high-level blabbering; let's jump into a more low-level description. To support all the features mentioned above, we need to expose some interfaces so that our extensions can inherit them and build their own logic – much like the Strategy pattern. At run-time our core engine will collect them using MEF and execute them as needed.

When I said “interface” in the above paragraph, I literally meant an interface in the C# programming language. So don't be afraid, we are inside the code. Literally!!!

Next, we need to expose some methods to be implemented by the concrete class. Our requirement is to collect the column list, i.e. metadata, from the files. So let's call this method “GetMetaData“. Our core engine interface will look exactly like the following –

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace MEFIntro.Lib.Interface
{
	public interface ICoreEngine
	{
		String[] GetMetaData(String filename);
	}
}

So far, we have exposed the interface that our extension methods and classes will use to integrate themselves with the engine. But we haven't told our engine about the extensions. In other words, our engine still does not know that there could be extensions to use. So let's add 2 interfaces for that –

  1. The top-level interface of the extensions. All extensions targeting our engine will have to implement this interface; otherwise we will not consider them plugins. I named it IFileExtension. Our plan here is to use one extension class per type of file – one class for .txt files, another extension class for .xls files, and so on. These classes will implement this interface.
  2. The second interface is for holding information about the extension classes. At run-time MEF will search with these data and give us the correct class to use. This idea will be much clearer when you see the code in action; for now, let's just put a pin in it. I named this interface IFileData.
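Since the original code is not shown, here is a sketch of what these two interfaces could look like (the exact members – especially the metadata property name – are assumptions):

```csharp
namespace MEFIntro.Lib.Interface
{
    // Every plugin implements this; the engine calls it through MEF.
    public interface IFileExtension
    {
        string[] GetMetaData(string filename);
    }

    // Export metadata MEF uses to pick the right parser at run-time.
    public interface IFileData
    {
        string FileExtension { get; }
    }
}
```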

I prefer using a single file for all the interfaces under one namespace in all my projects; this helps me find them later on. This is not a requirement – you are free to implement it your way, as long as they are all available when needed.

So far we only have the abstract structure of our engine, but we need a concrete one. Let's look at the implementation of our core engine. I will describe the code shortly –

Let's see what we have here –

  1. We have a concrete implementation of our core interface named CoreEngine.
  2. We have implemented the GetMetaData method. For now it is kept blank; we will add the implementation logic later.
  3. The class also has some interesting fields, namely –
    IEnumerable<Lazy<IFileExtension, IFileData>> operations;
    private CompositionContainer _container;
    The container is the main holder of all the plugins. At run-time the framework will automatically collect our plugins and put them in the operations field. All we have to do is call them in the appropriate manner.
  4. And one other interesting method, ComposeParts.
    This method performs the composition and puts all the parts together. You can understand the code easily by looking at it.
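The original code screenshot is missing, so here is a reconstruction of what such a CoreEngine could look like (a sketch under the assumptions above – the plugin directory name and constructor wiring are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using MEFIntro.Lib.Interface;

namespace MEFIntro.Lib
{
    public class CoreEngine : ICoreEngine
    {
        // MEF fills this with every exported IFileExtension it finds,
        // paired with its IFileData metadata.
        [ImportMany]
        private IEnumerable<Lazy<IFileExtension, IFileData>> operations;

        private CompositionContainer _container;

        public CoreEngine()
        {
            ComposeParts();
        }

        private void ComposeParts()
        {
            // Scan the Plugin folder for assemblies exporting IFileExtension.
            var catalog = new DirectoryCatalog("Plugin");
            _container = new CompositionContainer(catalog);
            _container.ComposeParts(this);
        }

        public string[] GetMetaData(string filename)
        {
            // Kept blank for now; we will add the implementation logic later.
            throw new NotImplementedException();
        }
    }
}
```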

So far we have done nothing with our implementation. Let's complete our core engine and call some extension methods. First we need to add at least one extractor class for this example. Let's add one for text (.txt) files. The task is simple: open the file's first line, split it by the delimiter character, and return the string array –
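The extractor code was originally a screenshot; here is a sketch of what the text parser could look like (the export metadata key and the comma delimiter are assumptions):

```csharp
using System.ComponentModel.Composition;
using System.IO;
using System.Linq;
using MEFIntro.Lib.Interface;

namespace MEFIntro.Lib.Ex
{
    [Export(typeof(IFileExtension))]
    [ExportMetadata("FileExtension", ".txt")]
    public class TxtFileExtension : IFileExtension
    {
        public string[] GetMetaData(string filename)
        {
            // The first line of the file holds the column names,
            // separated by a delimiter (comma assumed here).
            var firstLine = File.ReadLines(filename).First();
            return firstLine.Split(',');
        }
    }
}
```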

Now that we have one extension that we can call to test our implementation, let's finish the core. First we need to extract the file extension, then check whether we have an extractor defined for that extension, and if we do, call it and return the value. Our final code for GetMetaData will look like this –
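Again reconstructing the missing screenshot, the finished GetMetaData could look like this (a sketch, assuming IFileData exposes a FileExtension property as above):

```csharp
public string[] GetMetaData(string filename)
{
    var extension = Path.GetExtension(filename);

    // Find the plugin whose exported metadata matches the file extension.
    foreach (var operation in operations)
    {
        if (operation.Metadata.FileExtension == extension)
            return operation.Value.GetMetaData(filename);
    }

    // No extractor registered for this file type.
    return new string[0];
}
```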

That’s it, we have the core section complete.

Posted in Managed Extensibility Framework (MEF), Native and Third Party Frameworks for .NET | Tagged , , , | 1 Comment

Chapter 1: Introduction to FluentNHibernate

After working over 2 years with FluentNHibernate, I have decided I should write a step-by-step learning schedule for Fluent NHibernate. The reason? … Well, let's just say it's better to wait for it. Once you learn FluentNHibernate, I guess, you will never go back to your old SQL-query-based development. Trust me, you won't.

When I learned FluentNH, it took me a while to find solutions, as help was not that much available all the time.

But I must be glad to those extraordinary people at stackoverflow (http://stackoverflow.com/ ) who helped me find solutions to problems and also helped me learn FluentNH.

This is an approach of mine, to bring all those thoughts and solutions in one place, so that you can find all without going under the hassle that I went through.

Before beginning with FluentNHibernate, I would like to introduce you with it a bit.

For those, who have worked with FluentNH earlier, you can skip the next section.

FluentNH is an ORM – in full, an Object-Relational Mapper. Now the big question is: what is an ORM? Well, keep reading….

  1. What is an ORM?
    In simple words, an ORM is a technique that originated from OOP (Object-Oriented Programming) concepts, created for the sole purpose of giving developers like us the capability to manage a persistent store, such as a database, with a completely OOP approach.

    Confused? Okay, let's see an example of what I mean. Consider a very simple case: suppose you have one table in the database named “User” with 2 fields in it, username and password. Now, let's say we have to create a single user in the database. What would be our approach? Say we are building a huge ERP solution, so we will use the 3-tier architecture: a presentation layer, business logic, and data access components. No matter what we use, somewhere in the data access layer we will need to create a SQL connection and generate a query something like –

    Select * From <Table> Where <Condition> Order By <field>

    This is just one single table and one query against it. Think about a whole database with hundreds of tables. Well, it's better not to think about it; you know what I am talking about. This is where an ORM kicks in.

    An ORM is an abstraction layer that gives you the opportunity to run queries just like normal C# code. For example, somewhere in your code you might write something like this –

    Person p = Repository.Get<Person>(“Id”, <someid>);

    That's right: no more SQL queries, no more table adapters, and no more data sets. That single line will do the rest.

    If you use an ORM you don't have to type all the SQL code by hand. You create a nice simple repository class and let the ORM handle the rest. And frankly speaking, until I learned about ADO.NET Entity Framework (v3), I never thought any library or ORM would be able to compete with FluentNH.

I think that's enough to give you a very rough idea of what an ORM is. But until you use it in your projects, I guess it won't be of much help.
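To make the idea concrete, a FluentNHibernate mapping for the User table from the example above might look like this (a sketch – we will build real mappings step by step in the coming chapters):

```csharp
using FluentNHibernate.Mapping;

public class User
{
    public virtual int Id { get; set; }
    public virtual string Username { get; set; }
    public virtual string Password { get; set; }
}

// FluentNH maps the class to the table in plain C# – no XML, no hand-written SQL.
public class UserMap : ClassMap<User>
{
    public UserMap()
    {
        Id(x => x.Id);
        Map(x => x.Username);
        Map(x => x.Password);
    }
}
```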

Stay tuned for the next articles on FluentNH.

Posted in FluentNHibernate + NHibernate, Learning FluentNHibernate + NHibernate (FluentNH + NH), Native and Third Party Frameworks for .NET | Tagged | Leave a comment

How to enable Automatic Reference Counting (ARC) in XCode 4.2

Hi,

First of all, Automatic Reference Counting, or ARC, in the new Apple LLVM 3.0 compiler is a very interesting feature, and I found it very useful because I don't have to worry about my pointers anymore. But the problem I faced is that I have a lot of projects built with earlier versions of Xcode that definitely didn't have ARC enabled, so I have to enable ARC for those projects manually. To enable ARC for your old projects, do the following –

  • Open the project in XCode 4.2
  • Make sure you have the ‘Apple LLVM 3.0 compiler’ selected in your build settings under the “Build Options” section. To open the project settings page, first go to the project navigator pane by clicking the folder icon on the toolbar, then select your project. You should see the project settings page, to open all settings use “All” sub-tab under the “Build Settings” tab. Here is a screenshot that can help you out –

    Enabling Apple LLVM Compiler 3.0

  • Now we need to enable ARC. Scroll down until you reach the section named “Apple LLVM Compiler 3.0 – Language”. Now go to the 17th item and set its value to “Yes”. Here is a screenshot to help you –

    Enabling ARC

Hope this was helpful for some of you. If it was, posting a ‘thank you’ does not cost too much time, does it? .. 🙂

Posted in IPhone, Mobile Developement | Tagged , , , | 5 Comments