With the bloom of microservice architectures in recent years, the demand for testing the integration between those microservices has become both more relevant and more challenging. I faced this problem while writing integration tests for my own services and tried to come up with an automated testing strategy for my C# code base.

Background

A few months ago I released an open source library that contains a simple repository.

You can find the repository here – Atl.GenericRepository

One of the features I added recently was a test case that demonstrates how to connect to a PostgreSQL database. Since this does not require any special changes to the repository, I wanted to add it as an integration test. I decided to go with Docker, but I wanted to make it automated. In short –

  • I should be able to spawn a postgres docker when running the test
  • Run the test cases
  • Stop the container when the test cases are finished
  • Remove the container

Introducing Microsoft Docker.DotNet

While looking for a Docker wrapper that can spawn and remove containers, I came across this open source library from Microsoft, and it was exactly what I was looking for. I will use it to create, start and stop the container for my coded integration test. It gives me options to –

  • Run the container from C# code
  • Connect to the container
  • Stop/Remove the container from C# code.

The library is available here –
https://github.com/Microsoft/Docker.DotNet

Start PostgreSQL Container with C# Code (Microsoft Docker.DotNet)

Ok, let's begin. We will pull down a postgres image and spawn a container from it.

Prerequisite – A Docker installation

Docker.DotNet connects to your local Docker installation pretty easily. I am using a Windows machine, so I had Docker for Windows installed.

Install the nuget package

Install the nuget package Docker.DotNet to get started –

install-package Docker.DotNet

or,

dotnet add package Docker.DotNet

Connect to docker

The following code connects to the local Docker engine (Docker for Windows in my case) and creates a client to interact with it –

var client = new DockerClientConfiguration(new Uri("npipe://./pipe/docker_engine")).CreateClient();
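If you are not on Windows, the engine is usually reachable over the default unix socket rather than the named pipe. A minimal variant for Linux/macOS (assuming a standard local Docker installation) would be –

// On Linux/macOS the Docker engine is typically exposed over the default unix socket
var client = new DockerClientConfiguration(new Uri("unix:///var/run/docker.sock")).CreateClient();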

Run container for PostgreSQL

The strategy –

  • Map a random port (10000 – 12000) on the host machine to port 5432 on the container. That way I can connect to the container as if the PostgreSQL database were running on the host machine, which makes connecting easier.
  • Since I am only running an integration test that works on volatile data, I am not worried about mapping a volume or mounting a data file to pre-load data into the database. But if you want, you can surely do that with the appropriate options.

Now that we have got our Docker client, let's spawn the container. We will first look for a suitable image, then create a container from it and finally start the container. I have created a method to do all of this; let's call it GetContainer –


private async Task<(CreateContainerResponse, string)> GetContainer(DockerClient client, string image, string tag)
{
    var hostPort =  new Random((int) DateTime.UtcNow.Ticks).Next(10000, 12000);
    //look for image
    var images = await client.Images.ListImagesAsync(new ImagesListParameters()
    {
        MatchName = $"{image}:{tag}",
    }, CancellationToken.None);

    //check if the image exists
    var pgImage = images.FirstOrDefault();
    if (pgImage == null)
        throw new Exception($"Docker image for {image}:{tag} not found.");

    //create container from image
    var container = await client.Containers.CreateContainerAsync(new CreateContainerParameters()
    {
        User = "postgres",
        Env = new List<string>()
        {
            "POSTGRES_PASSWORD=password",
            "POSTGRES_DB=repotest",
            "POSTGRES_USER=postgres"
        },
        ExposedPorts = new Dictionary<string, EmptyStruct>()
        {
            ["5432"] = new EmptyStruct()
        },
        HostConfig = new HostConfig()
        {
            PortBindings = new Dictionary<string, IList<PortBinding>>()
            {
                ["5432"] = new List<PortBinding>()
                    {new PortBinding() {HostIP = "0.0.0.0", HostPort = $"{hostPort}"}}
            }
        },
        Image = $"{image}:{tag}",
    }, CancellationToken.None);

    if (!await client.Containers.StartContainerAsync(container.ID, new ContainerStartParameters()
    {
        DetachKeys = $"d={image}"
    }, CancellationToken.None))
    {
        throw new Exception($"Could not start container: {container.ID}");
    }

    var count = 10;
    Thread.Sleep(5000);
    var containerStat = await client.Containers.InspectContainerAsync(container.ID, CancellationToken.None);
    while (!containerStat.State.Running && count-- > 0)
    {
        Thread.Sleep(1000);
        containerStat = await client.Containers.InspectContainerAsync(container.ID, CancellationToken.None);
    }

    return (container, $"{hostPort}");
}

Well, that is a long method. Let's try to understand what it does –

var hostPort =  new Random((int) DateTime.UtcNow.Ticks).Next(10000, 12000);

The purpose of this line is to pick a random port to bind to PostgreSQL's port 5432. Randomizing minimizes the probability of choosing a port that is already in use on the host machine.

Next, we look for an image (I am using postgres:10.7-alpine), so I passed in image=postgres and tag=10.7-alpine –

var images = await client.Images.ListImagesAsync(new ImagesListParameters()
{
    MatchName = $"{image}:{tag}",
}, CancellationToken.None);

//check if the image exists
var pgImage = images.FirstOrDefault();
if (pgImage == null)
    throw new Exception($"Docker image for {image}:{tag} not found.");
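In my test I simply fail fast when the image is missing, so it has to be pulled beforehand. If you prefer, you could pull the image from code instead of throwing; a rough sketch using Docker.DotNet's CreateImageAsync (assuming the default public registry and no authentication) –

//optional: pull the image instead of failing when it is not found locally
await client.Images.CreateImageAsync(
    new ImagesCreateParameters { FromImage = image, Tag = tag },
    null,                          //no registry credentials assumed
    new Progress<JSONMessage>(),   //progress messages are ignored here
    CancellationToken.None);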

Assuming we got our image, we then create the container –

//create container from image
var container = await client.Containers.CreateContainerAsync(new CreateContainerParameters()
{
    User = "postgres",
    Env = new List<string>()
    {
        "POSTGRES_PASSWORD=password",
        "POSTGRES_DB=repotest",
        "POSTGRES_USER=postgres"
    },
    ExposedPorts = new Dictionary<string, EmptyStruct>()
    {
        ["5432"] = new EmptyStruct()
    },
    HostConfig = new HostConfig()
    {
        PortBindings = new Dictionary<string, IList<PortBinding>>()
        {
            ["5432"] = new List<PortBinding>()
                {new PortBinding() {HostIP = "0.0.0.0", HostPort = $"{hostPort}"}}
        }
    },
    Image = $"{image}:{tag}",
}, CancellationToken.None);

Well, that is pretty straightforward. I am supplying all the details needed to run a PostgreSQL server in a Docker container and mapping a host port to its port 5432.

Let's start the container –

if (!await client.Containers.StartContainerAsync(container.ID, new ContainerStartParameters()
{
     DetachKeys = $"d={image}"
}, CancellationToken.None))
{
     throw new Exception($"Could not start container: {container.ID}");
}

Using the container ID that we got from the previous step, we start the container. Now, this call is a bit tricky: for some reason it returns before the container is fully initialized. So, in the next code block we poll the container to make sure it is initialized and running –


var count = 10;
Thread.Sleep(5000);
var containerStat = await client.Containers.InspectContainerAsync(container.ID, CancellationToken.None);
while (!containerStat.State.Running && count-- > 0)
{
      Thread.Sleep(1000);
      containerStat = await client.Containers.InspectContainerAsync(container.ID, CancellationToken.None);
}

Once it is initialized, we save the container ID so we can dispose of the container when the test is completed.
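The actual test class (linked at the end of this post) wires all of this into the test lifecycle. As a rough illustration only, the whole flow could live in an xUnit fixture along these lines (a sketch assuming xUnit 2.x; GetContainer is the method shown above) –

public class PostgresContainerFixture : IAsyncLifetime
{
    private DockerClient _client;
    private CreateContainerResponse _containerResponse;

    public string Port { get; private set; }

    public async Task InitializeAsync()
    {
        //connect to the local docker engine and spin up the container
        _client = new DockerClientConfiguration(new Uri("npipe://./pipe/docker_engine")).CreateClient();
        (_containerResponse, Port) = await GetContainer(_client, "postgres", "10.7-alpine");
    }

    public async Task DisposeAsync()
    {
        //stop and remove the container once the tests are done
        if (await _client.Containers.StopContainerAsync(_containerResponse.ID, new ContainerStopParameters(), CancellationToken.None))
            await _client.Containers.RemoveContainerAsync(_containerResponse.ID, new ContainerRemoveParameters(), CancellationToken.None);
    }

    //GetContainer(...) as defined above
}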

Connect to the Database

Since I am proxying a host port to the PostgreSQL database, connecting to it is pretty straightforward. Here is my connection string –

$"User ID=postgres;Password=password;Server=127.0.0.1;Port={_port};Database=repotest;Integrated Security=true;Pooling=false;CommandTimeout=3000"
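In the test I just hand this connection string to the repository's configuration provider, but you can sanity-check it with plain Npgsql as well – a quick illustration, assuming the Npgsql package is installed and connectionString holds the string above –

using (var connection = new NpgsqlConnection(connectionString))
{
    //throws if the container is not accepting connections yet
    connection.Open();
}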

See it in Action

Well, let's fire up the test cases and see whether it works. Here are some screenshots.

I put a breakpoint just after the container is started, and voila, you can see a container running –

Cleanup

Then, when the integration tests are completed, we stop the container and remove it –

if (await _client.Containers.StopContainerAsync(_containerResponse.ID, new ContainerStopParameters(), CancellationToken.None))
{
    //delete container
    await _client.Containers.RemoveContainerAsync(_containerResponse.ID, new ContainerRemoveParameters(), CancellationToken.None);
}

At the end of the test run everything is green and our container has been stopped and removed –

The full test class is available at the repository of Atl.Repository.Standard, here –

https://github.com/Activehigh/Atl.GenericRepository/blob/master/Atl.Repository.Standard.Tests/Repositories/ReadRepositoryTestsWithNpgsql.cs


Hello! Today I am going to propose a software architecture that supports an unlimited number of pluggable components. I am introducing the BeeHive architecture. Please feel free to provide your feedback and point out any problems; I would very much appreciate that.

Since I work mostly as a web application developer, I will describe the architecture pattern in terms of web application development, but the pattern can be well suited for desktop software development as well.

Motivation

I consider myself lucky, because within my short career span I have had the opportunity to work with some great architects, to learn about some good architectures and also to design multiple small to medium scale systems from scratch.

The systems that I developed needed to be extensible and also needed to be maintainable and scalable at the same time. So, in almost all cases I used a hybrid architecture pattern.

  • For the extensibility I really liked Microkernel Architecture, which gives you the option to extend your solution and inject libraries at runtime.
  • For all the benefits it can provide, The Onion Architecture or The Clean Architecture.
  • And if the system becomes too big to manage it can adapt and effectively transform part of it to be a Microservice Architecture.

Funnily enough, I only came to know the name of The Onion Architecture recently.

So, eventually the pattern I used is a high-level Microkernel with a low-level Onion. It has a core component that loads one or more plugins while bootstrapping, and each of those plugins is built using Onion Architecture design principles.

Only now have I come up with a fancy name 🙂 The BeeHive.

Where does the name BeeHive come from?

Even though it is a hybrid architecture, I prefer the name BeeHive because of how it behaves. Let's have a look at a beehive first –

  • A BeeHive is a swarm of bees, each of which has specific responsibilities.
  • Even though all of the bees are individuals, there is a clean workload distribution among them.
  • Some bees are more important than other bees.
  • A Queen Bee is the center for all, and the hive cannot survive without her.
  • But being the center does not mean every communication has to go through the queen. Bees can communicate with each other with special signals. This reduces the everyday workload for the Queen herself.
  • If the Queen wants to banish an individual bee from the hive, the bee has nothing to do but leave.
  • If one or many bees die, the hive still survives, continues to perform as if nothing happened and appoints new bees to perform the same tasks. If, however, no suitable bee is found to take on the responsibility, the hive suffers.
  • If the hive becomes too big, a new Queen bee arises and a new hive is formed somewhere else.

Well, there could be more, but you get the point. In the next section we will convert these into the features of a software architectural pattern.

The BeeHive Architecture

Before we dive into details, I will summarize the targets that BeeHive tries to achieve –

This may sound like a marketing speech. I gotta sell it you know!

  • Development Friendly
    When I first tried to design my first ever system, the main focus for me was always to be development friendly. A solution or system should be easy and small enough to kick off within a few weeks, but it should also support ongoing development and be extensible in such a way that, no matter what happens, it can adapt to future design decisions. An architecture that is made to evolve over time.
  • Plug & Play
    Well, we really did hate the PS/2 mouse back in the old days, when we had to restart the whole machine because we just plugged in a new mouse! Don't we love the USB mouse and the fact that it can be plugged in and just works? Well, a large system should be like that, shouldn't it?
    Why do we have to deploy the whole solution just to add a new feature to it? Or just to add a new payment gateway?
    What if we want to turn off some features? I know, I know, micro-services. But what if we don't have a big enough system or, more importantly, the budget to support micro-services?
  • On Demand Micro-services
    Wouldn't it be better to have a system that can be converted to micro-services as we scale out and the business gets big? BeeHive is built almost exclusively to support that.

These are a few of the benefits that we can achieve with BeeHive. If you are reading this, you are a programmer or someone who understands how things work, and you will see the benefits.

Let's get into the design.

The Design

Well, here is how it looks in a very simple diagram –

(Diagram: the BeeHive architecture)

As you may have guessed, yes, it looks like a beehive. For easier understanding and explanation, instead of considering bees, I am going to use hexagons, similar to beehive cells.

  1. The Gray Hexagon at the center is the queen, or for us, the core. The core hexagon has the following characteristics –
    1. It does not implement any concrete service or class unless that is absolutely necessary for bootstrapping, or it is a common component shared by one or more standard cells (the green ones), i.e. the plugins.
    2. It is responsible for loading all other pluggable components (plugins).
    3. It provides a medium for communication between the plugins.
    4. It is capable of turning any component on or off, anytime. We will call it The CORE. (A minimal sketch of such a contract follows this list.)
  2. The Green Hexagons,
    1. Each of these hexagons is a concrete implementation of a service.
    2. Each is itself implemented using the Onion Architecture and has multiple layers of implementation.
    3. They are independent of one another, and any single one does not even know about the existence of the others. This is a very important constraint for BeeHive.
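To make this a bit more concrete, here is a minimal sketch of what the core's contract might look like in C#. The interface and member names are purely illustrative; they are not part of any library –

//purely illustrative: the core only knows plugins by contract, never by concrete type
public interface IPlugin
{
    string Name { get; }
    void Start();
    void Stop();
}

public interface ICore
{
    void LoadPlugins();                       //discover and bootstrap the green hexagons
    void Enable(string pluginName);           //turn a component on
    void Disable(string pluginName);          //turn a component off
    void Publish<TMessage>(TMessage message); //medium for plugin-to-plugin communication
}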

The Evolution

BeeHive is always ready to evolve, from the day it is created. If our monolith gets big enough, we can easily break it down into multiple micro-services, and it looks something like this –

Here we have copied the core, moved two individual modules together and formed a second hive which is completely separate from the first one. Now each individual hive can grow by itself, and if it gets too big, we can deconstruct it again. Since each concrete module never knew about the existence of the other modules, there is no need to change anything else. Maybe some proxies to connect these services, and that is all. We will see them when we go through an example.

In the following post we will see it in action.

If you haven't already read the Basic Setup guide, please read it here first, as this guide requires knowledge from the previous article.

https://brainlesscoder.com/2019/03/18/atl-repository-standard-basic-usage/

In the previous article we saw how to set up the domain classes, the system clock, the DB context factory, the id generator and the domain injector, and how to use them to create and use the repository.

This is a pretty small article, as we don't have a lot to do here. All we need to do is register the components in your favorite DI container and resolve the repository to use it.

In this example I am going to use the default service provider for ASP.NET Core. You can find it in the nuget package Microsoft.Extensions.DependencyInjection. Let's install it first –

Install-Package Microsoft.Extensions.DependencyInjection

//for dotnet use

dotnet add package Microsoft.Extensions.DependencyInjection

And let's create our service collection –


var serviceCollection = new ServiceCollection();

And register the required services for the repository –

//register repository

serviceCollection.AddTransient<IDomainContextFactory<DatabaseContext>, DomainContextFactory>();
serviceCollection.AddTransient<IDomainInjector, DomainInjector>();
serviceCollection.AddTransient<IKeyGenerator<long>, IdentityKeyGenerator>();
serviceCollection.AddTransient<IConfigurationProvider, ConfigProvider>();
serviceCollection.AddTransient<ISystemClock, DefaultSystemClock>();
serviceCollection.AddTransient(typeof(IGenericRepository<>), typeof(Repository<>));

...
...

//build the service provider from service collection
ServiceProvider = serviceCollection.BuildServiceProvider();

Please note that I am using the default DatabaseContext class provided by the library, but if you want you can also subclass it and register that class instead, in case you need some special code to run for the context.

You can use either the Scoped or the Transient lifestyle; there is no dependency on any specific lifestyle.
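With the provider built, resolving the repository is a one-liner, for example using the standard GetRequiredService extension from Microsoft.Extensions.DependencyInjection –

//resolve the repository for domains keyed by long
var repository = ServiceProvider.GetRequiredService<IGenericRepository<long>>();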

Then inject the repository into your service and use it right away .. 🙂


public class OrderService : IOrderService
{
    private readonly IGenericRepository<long> _repository;

    public OrderService(IGenericRepository<long> repository, IMapper mapper, ICacheStore cacheStore)
    {
        _repository = repository;
    }

    //....
    //    _repository.GetAll<...>().Where(x => x.... == ...).OrderBy(x => x....).ToList();
}

The most common problem that I have faced while building any system, no matter how complex or simple, is setting up the data reader and writer for the data store. It does not really matter what you are building; as long as you need a data store, you are probably going to face the following problems –

  1. Most of the features of a data store are the same. It is a store: it saves data and returns data when queried.
  2. It is redundant, but not exactly duplicate. Each system requires slightly different features. For example, one system might require a repository/store that has multi-tenancy support out of the box and thus isolates data as if each tenant's context were a sandbox; another system might want to do the same based on the Organization; and yet another might require the same partitioning per user.
  3. It should support Inversion of Control.
  4. It should support parallelism and concurrency.
  5. It should have options to customize the unit of work pattern for special cases, etc.

The list goes on ….

In my library Atl.Repository.Standard, I have tried to solve as many of these problems as possible and will gradually extend it to do what we want it to do.

At the time of this writing, Atl.Repository.Standard only supports basic CRUD operations, i.e. CREATE, READ, UPDATE and DELETE, and it supports IoC by default.

Basic Uses

Let's set up our POCO objects first. These are our domain classes –


public abstract class BaseDomain
{
  public virtual int Id { get; set; }
  public virtual bool IsActive { get; set; }
  public virtual bool IsDeleted { get; set; }
  public virtual bool IsLocked { get; set; }
  public virtual bool IsArchived { get; set; }
  public virtual bool IsSuspended { get; set; }
  public virtual DateTime? CreatedAt { get; set; }
  public virtual DateTime? UpdatedAt { get; set; }
}

public class Tenant : BaseDomain, IDomain<int>
{
}

For demonstration, I will only use one domain class Tenant for now.

Now, let's install the nuget package https://www.nuget.org/packages/Atl.Repository.Standard/ to set up the repository and the other classes –

Install-Package Atl.Repository.Standard

Once installed, we need to set up the following classes for the repository to function properly –

An Id Generator

The purpose of the id generator is to give you the option to use a custom key generator that is not database-generated. The signature of the id generator looks like this –


public interface IKeyGenerator<TKey>
{
    TKey Generate(IDomain<TKey> obj);
    bool DoesRequireNewKey(TKey currentKey);

    Expression<Func<TDomain, bool>> Equal<TDomain>(Expression<Func<TDomain, TKey>> left, TKey right)
            where TDomain : IDomain<TKey>;
}

If you are happy with a Guid or an int id, the library comes with default implementations of both: GuidKeyGenerator and IdentityKeyGenerator. You can just use them right away.
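And if you ever do need your own generator, implementing the interface is straightforward. Here is a rough, purely illustrative sketch of a sequential long key generator (not part of the library) –

public class SequentialLongKeyGenerator : IKeyGenerator<long>
{
    private long _current;

    public long Generate(IDomain<long> obj) => Interlocked.Increment(ref _current);

    //a default(long) key means the entity has not been assigned an id yet
    public bool DoesRequireNewKey(long currentKey) => currentKey == default(long);

    public Expression<Func<TDomain, bool>> Equal<TDomain>(Expression<Func<TDomain, long>> left, long right)
        where TDomain : IDomain<long>
    {
        //builds x => x.Id == right from the key selector expression
        return Expression.Lambda<Func<TDomain, bool>>(
            Expression.Equal(left.Body, Expression.Constant(right)), left.Parameters);
    }
}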

A Domain Injector

One of the strong features of the library is the DomainInjector. Instead of depending on the DatabaseContext class itself and defining and referencing all domains beforehand, you can load assemblies on the fly and inject domains with this approach (a sketch of such an injector is shown after the example below).

This class will inject domains when the repository is created.


public class DomainInjector : IDomainInjector
{
  public void InjectDomain(ModelBuilder modelBuilder)
  {
    modelBuilder.Entity<Tenant>();
  }
}
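For example, to pick up every domain class in an assembly without referencing each one, an injector could scan the assembly at run time. A sketch, assuming all your domains implement IDomain<int> –

public class AssemblyScanningDomainInjector : IDomainInjector
{
  private readonly Assembly _domainAssembly;

  public AssemblyScanningDomainInjector(Assembly domainAssembly)
  {
    _domainAssembly = domainAssembly;
  }

  public void InjectDomain(ModelBuilder modelBuilder)
  {
    //register every concrete IDomain<int> implementation found in the assembly
    var domainTypes = _domainAssembly.GetTypes()
      .Where(t => t.IsClass && !t.IsAbstract && typeof(IDomain<int>).IsAssignableFrom(t));

    foreach (var domainType in domainTypes)
      modelBuilder.Entity(domainType);
  }
}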

A System Clock

Atl.Repository.Standard works with a custom clock provider. This gives you the option to do TDD in a much easier way.

As usual, it comes with a default implementation of ISystemClock that you can use right away.


var clock = new DefaultSystemClock();

A Configuration Provider

Like the Domain Injector, the configuration provider supports DI and can be injected easily. It provides the configuration and connection string used to connect to the database.

For example, if we need to use a SQLite database, we use –


public class TestConfigurationProvier : IConfigurationProvider
{
  public string ConnectionString => "";
  public DbContextOptionsBuilder ApplyDatabaseBuilderOptions(DbContextOptionsBuilder optionsBuilder)
  {
    var connectionStringBuilder = new SqliteConnectionStringBuilder { DataSource = "TestDatabase.db" };
    return optionsBuilder.UseSqlite(connectionStringBuilder.ToString());
  }
}
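And if you wanted to point the same repository at the PostgreSQL container from the integration-test post instead, a variant might look like this (assuming the Npgsql.EntityFrameworkCore.PostgreSQL provider package is installed) –

public class PostgresConfigurationProvider : IConfigurationProvider
{
  private readonly string _connectionString;

  public PostgresConfigurationProvider(string connectionString)
  {
    _connectionString = connectionString;
  }

  public string ConnectionString => _connectionString;

  public DbContextOptionsBuilder ApplyDatabaseBuilderOptions(DbContextOptionsBuilder optionsBuilder)
  {
    return optionsBuilder.UseNpgsql(_connectionString);
  }
}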

A Database Context Factory

The purpose of the context factory is to provide a context scope. Let's put it all together and create a context factory now –


var configurationProvider = new TestConfigurationProvier();
var domainInjector = new DomainInjector();
ContextFactory = new DomainContextFactory(new List<IDomainInjector>() { domainInjector }, configurationProvider);

Now we are ready to create our repository and use it –

var repository = new Repository<int>(idGenerator, ContextFactory, new DefaultSystemClock(), null);

Congratulations! Your repository is ready to be used.

Examples

Saving and loading items is as simple as –

//saving && updating
var tenant = new Tenant();
repository.Add(tenant);
repository.Update(tenant);

//loading
var tenants = _repo.GetAll<Tenant>().ToList();

Please keep in mind that the repository returns an IQueryable from the GetAll method, so unless you call ToList() it will not execute the query against the data store. This gives you a great option to leverage built-in features from EF Core, such as Include. You can fetch related entries just like you would with a normal EF context. An example Organization model that looks like this –


public class Organization : BaseDomain
{
     public Tenant Tenant { get; set; }
     public int TenantId { get; set; }
}

can easily include the referenced model Tenant like this –


var tenants = _repo.GetAll<Organization>().Include(x => x.Tenant).ToList();

or perform a where clause like this –

var tenants = _repo.GetAll<Organization>().Include(x => x.Tenant).Where(x => x.Tenant.Name == "Some Tenant").ToList();

Happy Coding.

Recently, in one of my projects, we found an innocent-looking bug that was a little too overweight – sometimes weighing more than 6GB on an 8GB web server. In other words, a memory leak that caused the application pool to crash by exhausting the system memory. So I thought, why not share it here so that anyone else facing the same problem can get some insight. Here goes the plot –

The main goal of the project was to build a modular framework that could be used to build robust web applications easily. For that reason the most obvious choice was to use the Inversion of Control pattern.

If you are already here, it is quite likely that you are familiar with the IoC (Inversion of Control) pattern already, but if you are not, then here is a good article for you.

As you all know, Castle Windsor is a widely used service locator library, and we used it for our IoC implementation. Therefore, we registered our services like this –

_container.AddComponentLifeStyle<IDocumentService, DocumentService>(LifestyleType.Transient);

If you are new to castle windsor and the life cycle of the items it resolves, grab its details from here.

In short, LifestyleType.Transient means each time you ask Castle Windsor to resolve or give you an instance of an interface, it will create a new one. But the default implementation of Castle Windsor with the transient lifestyle does the following –

  • It creates a new instance each time you ask for one
  • It keeps a reference to that created instance, so that it can refer to it later
  • It assumes that you will release the instance manually – again, remember it, manually

The first two points are not a problem, but the third one is a possible leak. If you do not release the instance manually, it becomes a never-ending memory leak: the CLR garbage collector will never clear it, since Castle Windsor is holding a reference, and Castle will never clear it either, because it assumes you will release it yourself. That is exactly what we did, and it took only 2 hours to consume all the server memory (6GB+) and crash it.
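In other words, with the default release policy every Resolve should be paired with an explicit Release, roughly like this (illustrative only) –

var documentService = _container.Resolve<IDocumentService>();
try
{
    //use the service ...
}
finally
{
    //without this, Windsor keeps tracking the transient instance forever
    _container.Release(documentService);
}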

If you are interested in the possible memory management scenarios, see this article here; I realized our problem while reading it.

Well, now comes the big question – what's the easiest solution?

Well, you can manually release every instance that you resolved through Castle Windsor, or you can grab the solution mentioned here.

I will save you some time. In the Castle service locator, where you are creating the kernel, add the following line –

_container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy();

Basically, it prevents Castle Windsor from keeping any references to resolved instances, so when you are done with your code and the object needs releasing, the default GC collects it and the memory is freed. That removed the memory leak problem, and you know what, the memory consumption now never goes over 600MB. 🙂

It's quite common to use NHibernate with Castle Windsor and to think NHibernate is causing the leak... well, don't be harsh on NHibernate; it's Castle and your unreleased instances that are causing the leak. 😀

Introduction

Before I begin, let's discuss a common development scenario and how we might deal with it –

Suppose we have a requirement as follows (this was a real scenario for me once) –

  1. The system will collect table data from flat files, like plain text (.txt).
  2. The task is to collect column information and show the list of columns in a Grid.

Now, let's say we have started our development process, completed it and delivered. Your client is happy and so are you.

Now comes the tricky part. A few days later your happy client asks –

 could you please add an option so that we can also parse Excel worksheets, such as xls, xlsx, xlsm, xlsb files, or maybe delimited CSV files?

If you never anticipated this would happen, I assume you didn't keep an option to add a parser later. Now what should you do? You will open the source again, modify it, build it again and deliver it. For a small project like this it is still manageable, but there are other projects with millions of lines of code. What will you do with them? Open the source again and rebuild?

Oh Boy, ain’t that tiresome!!!

Let's talk about another approach –

Instead of hard coding the parser in the application, we could have done the following –

  1. Create a Plugin directory and keep each parser as a portable assembly (.dll) which exposes its methods through a generic interface.
  2. When executed, the program will search for available plugins in that folder and will see that it has a text (.txt) file parser.
  3. Then the program will call methods based on the exposed interface of the parser.

Now, in this situation, even if my client wants more parsers, we can just build a new one implementing the base interface, build it as a class library, and finally put it in the Plugin folder, and the parser will work out of the box.

Now, I think you have got a rough idea of what I am talking about. So let's just put it in words –

Using the Managed Extensibility Framework (a.k.a. MEF) we can do the above in minutes. In other words, we can inject business logic right into the application without even needing to rebuild it.

Before we start coding, you might want to know where to get it. If you are using .NET Framework 4.0 you already have it, and for those who are using 3.5, you can get the source from http://mef.codeplex.com/ and add a reference to the sources to get the feature set. I haven't tried it with other versions of the .NET Framework, but if you need to know, post a comment and I will dig a little deeper to get you the info.

 

The Solution

We should create and organize our solution first.

Here is the basic process that I followed for this session –

  • Open MS Visual Studio (I am using 2010 Ultimate and will use C# + .NET Framework 4.0 for this project) and create a Class Library project. I would name this project MEFIntro.Lib.
      I use "."-separated project names to keep the names meaningful and easy to manage. It's not a restriction.
  • Remove the default class file "Class1.cs" from the project, as we will not be needing it and we will create our classes ourselves.
  • So far we have created the library project. We also need an executable project that we can use to show the output of the library. So let's add another project named MEFIntro.Desktop, which will be a Windows Forms Application. I also renamed the form to "MainForm".
  • Add a reference to MEFIntro.Lib from MEFIntro.Desktop.
  • This is how your screen should look –
  • Now we will need one more project, which we will use to create extensions. Let's name it MEFIntro.Lib.Ex.
  • Add a reference to the MEF library (System.ComponentModel.Composition) to both of your projects – MEFIntro.Lib and MEFIntro.Lib.Ex.
  • Our final solution will look like this. We have all the projects that we need for this tutorial. If more are needed, we will add them later.

 

The Extensible Core

Let's create the core section of our MEF project.

By the way, I have downloaded and installed MS Visual Studio 11 Beta and am loving it very much. So, from now on, all other tutorials and blogs will be based on MS Visual Studio 11 Beta. Hope you will like the IDE too; it's loaded with cool stuff….. 🙂

The core engine of our project will be responsible for the following tasks –

  1. Load all the plugins and extensions that we will create and keep as plug & play components.
  2. Expose methods that our library will support.
  3. Provide Options and Interfaces for future plugin and extension development.

Okay, enough high-level blabbering, let's jump into a more low-level description. To support all the features mentioned above we need to expose some interfaces, so that our extensions can implement them and build their own logic on top, much like the strategy pattern. At run-time our core engine will collect them using MEF and execute them as needed.

When I said "interface" in the above paragraph, I literally meant an interface in the C# programming language. So don't be afraid, we are inside the code. Literally!!!

Next, we need to expose some methods to be implemented by the implementing classes. Our requirement is to collect the column list, i.e. metadata, from the files. So let's call this method "GetMetaData". Our core engine interface will look exactly like the following –

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace MEFIntro.Lib.Interface
{
	public interface ICoreEngine
	{
		String[] GetMetaData(String filename);
	}
}

So far, we have exposed the interface that our extension classes will use to integrate themselves with the engine. But we haven't told our engine about the extensions. In other words, our engine still does not know that there could be extensions to use. So let's add 2 interfaces for that –

  1. The top-level interface of the extensions: all extensions targeting our engine will have to implement this interface, otherwise we will not consider them plugins. I named it IFileExtension. Our plan here is to use one extension class for one type of file, like one class for .txt files and another extension class for .xls files. These classes will implement this interface.
  2. The second interface is for holding information about the extension classes. At run-time MEF will search with this data and give us the correct class to use. This idea will be much clearer when you see the code in action. For now, let's just put a pin in it. I named this interface IFileData. (A sketch of both interfaces is shown a little further down.)
Now, let's summarize what we have with a screenshot –

I prefer using a single file for all the interfaces under one namespace, for all my projects; this helps me find them later on. This is not a requirement; you are free to implement it your way as long as they are all available when needed.
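Since the original screenshot is not reproduced here, a rough sketch of what those two interfaces might look like (the member names are my own illustration) –

namespace MEFIntro.Lib.Interface
{
	//contract every file parser plugin must implement
	public interface IFileExtension
	{
		String[] GetMetaData(String filename);
	}

	//metadata view used by MEF to pick the right parser at run-time
	public interface IFileData
	{
		String FileType { get; }
	}
}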

So far we only have the abstract structure of our engine, but we need a concrete one. Let's look at the concrete implementation of our core engine. I will describe the code shortly –

Let's see what we have here –

  1. We have a concrete implementation of our core interface named CoreEngine.
  2. We have implemented the GetMetaData method. For now it is kept blank; we will add the implementation logic later.
  3. The class also has some interesting fields, namely –
    IEnumerable<Lazy<IFileExtension, IFileData>> operations;
    private CompositionContainer _container;
    This is the main container of all the plugins. At run-time the framework will automatically collect our plugins and put them in the operations field. All we have to do is call it in the appropriate manner.
  4. And one other interesting method, ComposeParts.
    This method does the composition and puts together all the parts. You can understand the code very easily by looking at it. (A sketch of the class follows this list.)
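The original class is shown in a screenshot; roughly, it might come together like this (a sketch – the catalog paths and member names are my assumption) –

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using MEFIntro.Lib.Interface;

namespace MEFIntro.Lib
{
	public class CoreEngine : ICoreEngine
	{
		//MEF fills this with every exported IFileExtension and its metadata
		[ImportMany]
		private IEnumerable<Lazy<IFileExtension, IFileData>> operations;

		private CompositionContainer _container;

		public CoreEngine()
		{
			ComposeParts();
		}

		private void ComposeParts()
		{
			//collect exports from this assembly and from the Plugin folder
			var catalog = new AggregateCatalog();
			catalog.Catalogs.Add(new AssemblyCatalog(typeof(CoreEngine).Assembly));
			catalog.Catalogs.Add(new DirectoryCatalog("Plugins"));

			_container = new CompositionContainer(catalog);
			_container.ComposeParts(this);
		}

		public String[] GetMetaData(String filename)
		{
			//kept blank for now; the implementation is added later in this post
			return null;
		}
	}
}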

So far we have done nothing with our implementation. Let's complete our core engine and call some extension methods. First we need to add at least one extractor class for this example. Let's add one for text (.txt) files. The task is simple: just open the file's first line, split it by the delimiter character and return the string array.
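The original extractor code is a screenshot; it essentially boils down to something like this (a sketch – the comma delimiter and the class name are my assumption) –

using System;
using System.ComponentModel.Composition;
using System.IO;
using System.Linq;
using MEFIntro.Lib.Interface;

namespace MEFIntro.Lib.Ex
{
	[Export(typeof(IFileExtension))]
	[ExportMetadata("FileType", ".txt")]
	public class TextFileExtension : IFileExtension
	{
		public String[] GetMetaData(String filename)
		{
			//read the header line and split it into column names
			var firstLine = File.ReadLines(filename).FirstOrDefault() ?? String.Empty;
			return firstLine.Split(',');
		}
	}
}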

Now that we have one extension that we can call to test our implementation, let's finish the core. First we need to extract the file extension, then check whether we have an extractor defined for this extension, and if we do, we call it and return the value. Our final code for GetMetaData will look like the following.
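Since the original is also a screenshot, here is roughly what it amounts to (a sketch – it needs System.IO and System.Linq, and it matches plugins by the FileType metadata shown earlier) –

public String[] GetMetaData(String filename)
{
	//match the file extension against the metadata of the loaded plugins
	var extension = Path.GetExtension(filename);
	var extractor = operations.FirstOrDefault(op =>
		String.Equals(op.Metadata.FileType, extension, StringComparison.OrdinalIgnoreCase));

	if (extractor == null)
		throw new NotSupportedException("No parser plugin found for " + extension + " files.");

	return extractor.Value.GetMetaData(filename);
}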

That’s it, we have the core section complete.

Hi,

First of all, Automatic Reference Counting, or ARC, in the new Apple LLVM 3.0 compiler is a very interesting feature and I found it very useful, because I don't have to worry about my pointers anymore. But the problem I faced is that I have a lot of projects built on earlier versions of XCode, which definitely didn't have ARC enabled, so I have to enable ARC for those projects manually. To enable ARC for your old projects, do the following –

  • Open the project in XCode 4.2
  • Make sure you have the 'Apple LLVM 3.0 compiler' selected in your build settings under the "Build Options" section. To open the project settings page, first go to the project navigator pane by clicking the folder icon on the toolbar, then select your project. You should see the project settings page; to show all settings use the "All" sub-tab under the "Build Settings" tab. Here is a screenshot that can help you out –

    Enabling Apple LLVM Compiler 3.0

  • Now we need to enable ARC. Scroll down until you get to the section named "Apple LLVM Compiler 3.0 – Language". Now go to the 17th item and set its value to "Yes". Here is a screenshot to help you –

    Enabling ARC

Hope this was helpful for some of you. If it was, posting a 'thank you' does not cost too much time, does it? .. 🙂