Wednesday, 6 April 2016

Polymorphic extension methods

The title is something of an exaggeration, but from the way the final code reads you would think it is exactly what we have.

Let me set the scene. I am writing a client library that will be used to call a web service. The web service returns an enumerable collection of objects. These objects all derive from a common base type (we'll call it BaseType for want of anything less descriptive) and each type needs to be processed in a different way. At the time of writing the client library the web service returns only three types, but more types will be introduced in the future.

The client library will be used by a large number of applications, so to reduce the need to redeploy these applications when the extra types are introduced, the library must be able to dynamically extend its type handling capabilities.

My first thoughts went straight to a loop through the collection, calling a method on each item to handle that item. Basic polymorphism. However, to do this the base type must expose the method and each derived type must override it, neither of which is the case.

Next I thought about extension methods. The loop would look the same

 public void HandleValues(IEnumerable<BaseType> values)  
 {  
   foreach (var item in values)  
   {  
     item.Handle();  
   }  
 }  

but the extension methods would differ based upon type.

 public static class ExtensionMethods  
 {   
   public static void Handle(this DerivedType1 item)  
   {  
     //do the work  
   }  
   
   public static void Handle(this DerivedType2 item)  
   {  
     //do the work  
   }  
 }  

This does not work, however: extension methods are resolved at compile time against the static type, and the type of item in the loop is BaseType, so no matching extension method is found.

A big nasty if block or switch statement could be introduced to cast each item and call the correct handler method. But this is not very extensible, and it certainly doesn't satisfy the requirement to extend the set of handled types without cracking open the code, recompiling and redeploying.
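
For illustration only, that non-extensible version would look something like this (a sketch, with hypothetical HandleDerivedTypeN methods, not code from the actual library):

 public void HandleValues(IEnumerable<BaseType> values)
 {
   foreach (var item in values)
   {
     // Every new derived type means editing, recompiling and redeploying this block.
     var type1 = item as DerivedType1;
     if (type1 != null)
     {
       HandleDerivedType1(type1);
       continue;
     }

     var type2 = item as DerivedType2;
     if (type2 != null)
     {
       HandleDerivedType2(type2);
     }
   }
 }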

That is when dependency injection patterns came to mind. The idea is that an extension method is created to extend BaseType which will dynamically select a handler and call it.

 public static void Handle(this BaseType item)  
 {  
   var handler = DIContainer.Instance.GetType<IHandler>(item.GetType().ToString());  
   handler.Handle(item);  
 }  

and the loop is unchanged

 public void HandleValues(IEnumerable<BaseType> values)  
 {  
   foreach (var item in values)  
   {  
     item.Handle();  
   }  
 }  

The container I chose was MEF, as it allows very simple plugin extension. I define the handlers in a class library, drop this into a folder and simply point the MEF container at that folder to load all the handlers. If I want to handle a new type, I simply create the handler in a new assembly, drop the assembly into the correct folder, and the next time the app starts it will have the handler.
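
A minimal sketch of the kind of MEF bootstrapping I mean, with an illustrative plugin folder path and wrapper class name (not the actual library code):

 using System.ComponentModel.Composition.Hosting;

 public static class HandlerContainer
 {
   private static readonly CompositionContainer Container = CreateContainer();

   private static CompositionContainer CreateContainer()
   {
     // Load every handler exported from assemblies dropped into the plugins folder.
     var catalog = new DirectoryCatalog(@".\Handlers");
     return new CompositionContainer(catalog);
   }

   public static IHandler GetHandler(string contractName)
   {
     // Resolve the handler exported under the given contract name.
     return Container.GetExportedValue<IHandler>(contractName);
   }
 }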

Loading the correct handler requires that each handler is attributed with the type it is to handle. All handlers must also either derive from a common type or implement a common interface; I chose the latter.

 public interface IHandler  
 {  
   void Handle(BaseType item);  
 }  
   
 public interface IHandler<T>: IHandler where T:BaseType  
 {  
   void Handle(T item);  
 }  
   
 public abstract class BaseHandler<T> : IHandler<T> where T : BaseType  
 {  
   public void Handle(BaseType item)  
   {  
     Handle((T)item);  
   }  
   
   public abstract void Handle(T item);  
 }  

This takes things a little further. The generic interface IHandler<T> has a method that takes an object of the derived type, but this cannot be called directly because we only hold a reference typed as BaseType. So I created a non-generic interface with a method that takes a BaseType as its parameter, then an abstract BaseHandler class that implements both the generic and non-generic interfaces. The implementation of the non-generic method casts the item to the derived type, which it knows by virtue of its generic definition, then calls the generic method with the derived type instance. This approach allows me to create concrete handlers that don't need to perform any casting and can deal directly with instances of the derived type they are designed to handle.

An example of a handler would be:

 public class DerivedType1Handler : BaseHandler<DerivedType1>  
 {  
   public override void Handle(DerivedType1 item)  
   {  
     //do the work  
   }  
 }  

While I have chosen MEF to implement the dynamic loading and selection of the handlers, any number of other DI containers could be used. MEF lets me simply decorate each handler with the full name of the type it handles as its exported contract name:

 [Export(contractName: "MyNamespace.DerivedType1", contractType: typeof(IHandler))]  
 public class DerivedType1Handler : BaseHandler<DerivedType1>  
 {  
   public override void Handle(DerivedType1 item)  
   {  
     //do the work  
   }  
 }  
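
Tying it together, the Handle extension method on BaseType can then resolve the handler from the container using the runtime type's full name, which matches the contract name on the export (again a sketch, reusing the hypothetical HandlerContainer wrapper from above):

 public static class BaseTypeExtensions
 {
   public static void Handle(this BaseType item)
   {
     // item.GetType() gives the derived type at runtime, e.g. MyNamespace.DerivedType1.
     var handler = HandlerContainer.GetHandler(item.GetType().FullName);
     handler.Handle(item);
   }
 }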

Saturday, 2 April 2016

Composition with MEF

I was working on a project recently where I was asked to write a nice standalone greenfield app. I thought 'brilliant, I can do this the way I like. Nice clean code, follow the patterns I choose', etc. The only restrictions were that I should follow the corporate model for things like logging and storage access, and the corporate line that as few third-party dependencies as possible should be introduced.

The application was a desktop app, so my choice was to use WPF, follow an MVVM pattern and, obviously, write the code following the SOLID principles. Here came the first challenge. Dependency injection and inversion of control were not practised in the business I was working in, which meant I needed to introduce the concept. Eliminating third-party, and most specifically open source, technologies left me with a choice of MEF or possibly Unity as my DI technology (or IoC container if you prefer to think of it that way).

Now, whilst I had used Unity in the past, I had not found it the easiest to work with, and with the restrictions listed above that left only MEF as an alternative. Many moons ago I had investigated MEF and liked the concept, but at the time, having only used it in practice tutorials and never in a production system (we all remember the calculator example that was around when MEF first came to prominence), I wasn't sure it was the best choice. Despite my reservations, I was sat next to a MEF evangelist and was convinced it was the way to go. Little did I know at that stage that it would prove an inspired choice.

Soon I came to the first real challenge that MEF rescued me from. I mentioned above the corporate model for logging; this involved using a library written in-house that wraps log4net and adds some custom logic before logging. The pattern adopted was to instantiate a static member in each class of the system to perform any logging required by that class:
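
Something along these lines (the Logger class name and its API here are my reconstruction of the in-house wrapper, not the actual corporate code):

 public class AccountService
 {
   // Each class news up its own logger, passing its own type to the constructor.
   private static readonly Logger Log = new Logger(typeof(AccountService));

   public void DoWork()
   {
     Log.Info("Doing work");
   }
 }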

The problem was that I didn't want to instantiate it directly in this way. I wanted to compose each object using DI, and to be able to unit test all classes without needing concrete implementations of their dependencies. That meant each dependency of a class had to be defined by an interface: a simple contract between the objects. The unit tests would assume that all logic external to the class under test was flawless, and as such it could be mocked from the interface alone using any given mocking framework.

So the question was: how do I do this when the logging library does not expose an interface, and furthermore the constructor of the logging class requires the type of the object that is to contain it? The first thing to do was to extract an interface from the logging class. By luck there was in fact only one such class, so the extraction was very straightforward. The organisation had also adopted a policy of creating NuGet packages for any shared libraries, so the new version of the library would be picked up by any project undergoing development.
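
The extracted interface might look something like this (the member names are invented for illustration; the real wrapper's API will differ):

 public interface ILogger
 {
   void Debug(string message);
   void Info(string message);
   void Error(string message, Exception exception);
 }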

Next came the problem of injecting the logging object. How could I inject an instance of the logger class into the constructor of an object when the logger needs to know the type of that object in advance? I could configure my DI container with n logger instances, one constructed with each of the n types in my system as its constructor argument. This seemed stupid and a maintenance nightmare; I would need to change my DI container setup for every class I wrote. I could add a default constructor to the logger class and pass the type in via a method or property, but that would rely on the developer always remembering to perform this extra step. An unnecessary additional step that would invariably be missed and lead to countless bugs.

In steps MEF with its great range of possibilities. It allows you to inject objects via a constructor, or into a property, although I am not a fan of the latter. I feel that if a class needs another object, this should be clearly conveyed to the developer by appearing in the constructor argument list; I prefer the external dependencies of my classes to be immutable. Property injection seems to me to result in a class doing something that is not at all clear to the developer, almost as if the person writing the class wants to hide something. (As a side note, it can be used to get around circular construction dependencies, but to me if you have such a circular dependency your design is wrong and you need to rethink your architecture at that low class level.) Back to my point: MEF allows you to inject anything into the constructor. You can inject a simple value type.

You can also inject a delegate, and that is where the solution to my problem came from. I chose to inject a function that takes a type as a parameter and returns a reference to the new logging interface:
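
Roughly like this (a sketch with an invented class name, using MEF's ImportingConstructor; the real classes obviously differ):

 // [Export] and [ImportingConstructor] come from System.ComponentModel.Composition.
 [Export]
 public class AccountViewModel
 {
   private readonly ILogger logger;

   [ImportingConstructor]
   public AccountViewModel(Func<Type, ILogger> createLogger)
   {
     // The class hands its own type to the factory; no concrete logger type is referenced.
     logger = createLogger(typeof(AccountViewModel));
   }
 }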

But how would I set up my MEF container to have such an entity (for want of a better term)? Well, I created a static class in the logging library that exports the function. This would be picked up when the MEF container is populated, and injected into any class that requires it.
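
A minimal sketch of what such an export might look like, with invented names, and using an ordinary class rather than the static class described above so MEF can instantiate the part:

 using System;
 using System.ComponentModel.Composition;

 public class LoggerFactoryExports
 {
   // Exported under the Func<Type, ILogger> contract; MEF hands this delegate
   // to any importing constructor parameter of the same type.
   [Export(typeof(Func<Type, ILogger>))]
   public Func<Type, ILogger> CreateLogger
   {
     // Log4NetLoggerWrapper stands in here for the in-house logging class.
     get { return type => new Log4NetLoggerWrapper(type); }
   }
 }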

This is nothing revolutionary in terms of how to construct an object. It is simply the factory pattern, where the factory is a function. But in terms of injecting an object that needs to know something about the target of the injection, I feel it is very neat. If you want to inject a different implementation of the function, you can; if you want to introduce a new implementation of the logging interface, that simply involves changing the factory function. Clearly I have written this code in the simplest way. To make it more extensible I might choose to pass more parameters to the factory function, although changing it would break any current use of it. However, I don't foresee a need for this any time soon, and I believe in only building for what the requirements need now. I don't try to cater for every 'what if' that might rear up in the future, as I could never think of them all, never mind code for them all.

Thursday, 24 March 2016

Writing quality code good for the soul

I recently found myself without paid employment. Not a great situation at first thought, but it actually turned out to be a blessing. Having been made redundant I was forced to think about what I really want to do.

I enjoy writing code, solving problems and making things; you might say coding is the outlet for my creative side, given my lack of artistic talent. The thing is, I had spent years in the corporate machine, writing the code I was told to write, in the style I was told to write it, to do things other people said it needed to do, in the way they said it should do them. The lack of creative input was stifling. I wanted control. Don't we all. Without my own software house, how could I get that? Well, I couldn't. Not entirely, but I did have the power to only accept a job that felt right.

I know from experience that what you expect of a job from the advert and interview is seldom the reality. This left me with the quandary of how to choose a role that would fulfill my needs. I couldn't guarantee that I would choose a role that delivered what it promised in reality, so the best decision was to not tie myself to a single role long term. As a result I have become a contract software developer.

This leads me to my current role, one that on the face of it doesn't tick many of the boxes I was looking for. It was advertised as a VB.NET and VB6 developer role, which after more investigation looked to be a job of migrating old applications that only run on Windows XP to work on Windows 8. Not a shiny, sexy role, but a safe one. At the interview, however, things started to look better. Yes, the legacy apps are VB.NET (1.1 and 2.0) and VB6, but the technical vision was heading towards C# and .NET 4.x. It was also clear that the dev team, with the architect involved in the interview, had a good amount of control over the technical direction. It felt like the devs were valued as experts in their craft and their opinions mattered.

Due to business reasons the technology was a little mired in the past, but the appetite to move forward was clear amongst the devs.

After a couple of months doing the necessary, taking a legacy app up to .NET 4 and making the required changes to keep it working on XP and now also on Windows 8, I have the opportunity to work on a 'greenfield' project. It isn't a brand new app, but rather a rewrite of an existing one that had bugs and required significant changes for the 'migration', so the decision to rewrite made sense.

I have been given almost full control of the technical decisions for the app; as a result I am writing it using WPF, with an MVVM structure. That control has also allowed me to develop the code in a fully TDD manner, with near 100% unit test coverage, using mocking through Moq to isolate classes for testing. The code I have written adheres to the SOLID principles to an extent no code I have written before did. The system is built up using dependency injection, and coupling is minimal. All in all it's a pleasure to work with.

The thing I had in mind when I envisaged this post was that I now feel reinvigorated about coding. No longer is it a slog. When I add functionality, it goes in cleanly without making the existing code messy. When I refactor a section, I do it with confidence. I actually look forward to touching this codebase. It's amazing what clean code can mean, and I know better now than ever before why clean coding evangelists preach so much about its benefits. It's not just about a functional, maintainable system. It's about a functional, happy development team.