To me, automated integration testing means being able to run tests against a system that exercise the way some or all of its parts integrate to fulfil the requirements of the whole, in a way that can be repeated without manual intervention. This may be as part of a continuous integration build process, or simply a human initiating the test run.
To achieve this, the tests must be able to run in any order, as any subset of the entire suite, and in parallel, with no effect on the result of any test.
For automated integration testing I am an advocate of a BDD-style approach using a test definition syntax such as Cucumber's, and here I will explore a case where I worked on this approach to integration testing only to be bitten by some inadequacies in the way it was implemented. I will begin by saying that the choice of Cucumber-style tests for this project was not necessarily wrong, but greater pre-planning of the approach was needed due to some specific technical issues.
System under Test
To give some background on the system we were building: it consisted of a small tool that ran a series of stored procedures on a database, which output data in XML format. This data was then transformed using an XSLT, and the result was uploaded to an API endpoint. The scope of the testing was the stored procedures and XSLTs, so ensuring that the data extracted and transformed conformed to the schema required by the API, and that it comprised the data expected from the database being used.
The database itself was the back end of a large, mature system, the structure and population of which were the responsibility of another team. Additionally, the system that populates this database has a very thick and rich business layer, containing some very complex rules around the coupling within the data model which are not represented by the simple data integrity rules of the data model itself. The data model is held only as a series of SQL scripts (the original creation script plus a plethora of update scripts) and as such proves difficult to integrate into a new system.
During day-to-day use of the existing, mature system the data is changed in various ways, and it is the job of the new system to periodically grab a subset of this data and upload it to the API using an incremental change model. So the tests are required to validate that the stored procedures can run a first time to get all the data, and again after some modification to get the incremental changes.
How Integration Tests Were Set Up
The tests themselves followed the format:
Given a blank database
And data in table "mytable"
|field1|field2|field3|
|value1|value2|value3|
When the data is obtained for upload
Then the output will be valid
With the variation
Given a blank database
And data in table "mytable"
|field1|field2|field3|
|value1|value2|value3|
And the data is obtained for upload
And data in table "mytable" is updated to
|field1|field2|field3|
|value1a|value2a|value3a|
When the data is obtained for upload again
Then the output will be valid
for an incremental change.
This required that a unique blank database be created for that test (and in practice for every test), the data be added to the database, the stored procedures and transforms be run, and the resulting upload data be used in the validation step. Creation of a blank database is simple enough, and can be done either with the scripts used by the main system or, as we chose, by creating an Entity Framework code-first model to represent the data model. The first and most obvious problem, as you will have guessed, is that when the core data model changes the EF model needs to be updated too. This problem was swept under the carpet because the convention is never to delete or modify anything in the data model, only to extend it, but it is still a flaw in the approach taken for the testing.
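As a rough illustration of the code-first approach, a minimal context might look like the sketch below. The names (TestDataContext, MyTable) are illustrative only, not those of the real data model.

```csharp
// A minimal sketch of an EF code-first model used to create a blank
// database per test run. All names here are hypothetical.
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Data.Entity;

[Table("mytable")]
public class MyTable
{
    [Key]
    public long Field1 { get; set; }
    public string Field2 { get; set; }
    public string Field3 { get; set; }
}

public class TestDataContext : DbContext
{
    // Each test supplies a unique database name/connection string,
    // so tests can run in parallel against independent databases.
    public TestDataContext(string nameOrConnectionString)
        : base(nameOrConnectionString) { }

    public DbSet<MyTable> MyTables { get; set; }
}
```

With a context like this, EF can create the empty database on first use, which is what dominates the per-test setup time discussed later.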
The second and, in my opinion, most problematic area of this approach comes from data integrity. The EF model contains the data integrity rules of the core data model and enforces them. If, for example, 'mytable' in the test above contained a foreign key constraint from 'field1' to some other table ('mytable2'), that other table would also need populating with some data to maintain integrity. The data in 'mytable2' is not of interest to the test, as it is not extracted by the stored procedures, so any data could be inserted so long as the constraints of the data model were met. To this end a set of code was written to auto-populate any data required for the integrity of the model. This involved writing some backbone code to cover the general situation, plus one class for each table that was to be populated as part of a test scenario. For example:
class AlternativeItemHelper : EntityHelper<AlternativeItem>
{
    public AlternativeItemHelper(S200DBBuilder dbbuilder)
        : base(dbbuilder)
    {
        PrimaryKeyField = "AlternativeItemID";
        ForeignKeys.Add(new ForeignKeySupport.ForeignKeyInfo<AlternativeItem, StockItem> { LocalField = "ItemID", ForeignField = "ItemID", Builder = dbbuilder });
        ForeignKeys.Add(new ForeignKeySupport.ForeignKeyInfo<AlternativeItem, StockItem> { LocalField = "ItemAlternativeID", ForeignField = "ItemID", Builder = dbbuilder });
    }
}
This helper creates data in the 'StockItem' table to satisfy the constraints on the 'ItemID' and 'ItemAlternativeID' fields of the 'AlternativeItem' table. As you can imagine, if 'mytable2' contains a similar foreign key relation, then three tables need to be populated, and with the real data model this number grows very large for some tables due to multiple constraints on each table. In one test scenario the addition of one row of data to one table resulted in over 40 tables being populated. This problem was not spotted early, as the first half dozen tables populated did not have any such constraints, so the out-of-the-box EF model did not need any additional data to be satisfied.
One advantage of the use of an EF model that I should highlight at this stage is the ability to set default values for all fields. This means that non-nullable fields can be given a value without it being defined in the test scenario (or when auto-populating tables for data integrity reasons alone).
If the data in the linked tables was of interest to the test scenario, then those tables could be populated in the pattern used in the example scenario, and so long as the ordering of the data definitions was correct the integrity would be maintained with the defined data.
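For instance, a hypothetical scenario (reusing the table and field names from the earlier example, with 'fieldA' invented for illustration) would define the referenced table before the table that points at it:

```gherkin
Given a blank database
And data in table "mytable2"
|field1|fieldA|
|value1|valueA|
And data in table "mytable"
|field1|field2|field3|
|value1|value2|value3|
When the data is obtained for upload
Then the output will be valid
```

Here the row in "mytable2" is inserted first, so the foreign key from "mytable".field1 is satisfied by explicitly defined data rather than auto-populated defaults.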
The third problem was the execution time of the tests. Even the simplest of tests had a minimum execution time of over one minute, dominated by the database creation step. In itself this is not a show-stopper if tests are run during quiet time, e.g. overnight, but for developers and testers wanting to run multiple tests in real time it meant a significant wait for results.
Summary
The biggest problem with the approach taken was the time required to write a seemingly simple test. The addition of data to a single table may require developer time to add the data integrity rules, a large amount of tester time to define what data needs to be in each table required by the integrity rules of the model and whether the default values are sufficient, and potentially more developer time to define default values for the additional tables. In the end a suite of around 200 tests was created, which takes over 3 hours to run, but due to a lack of testing resource full coverage was never achieved and manual testing was chosen by management as the preferred approach.
Supporting Code Examples
The example entity helper for auto-populating additional tables is derived from a generic class used by all entity types, which takes the form:
public abstract class EntityHelper<T> : IEntityHelper<T>
where T : class, new()
{
protected S200DBBuilder dbbuilder;
protected DbSet<T> entityset;
protected long _id = 0;
protected string PrimaryKeyField { get; set; }
protected Lazy<GetterAndSetter> PkFieldProp;
public Lazy<List<PropertySetter>> PropertySetters { get; protected set; }
public EntityHelper(S200DBBuilder dbbuilder):this()
{
Initialize(dbbuilder);
}
protected EntityHelper()
{
}
public object GetRandomEntity()
{
return GetRandomEntityInternal();
}
protected T GetRandomEntityInternal()
{
T entity = new T();
//need to set all the properties to random values - and cache a way to create them faster
PropertySetters.Value.ForEach(ps => ps.SetRandomValue(entity));
return entity;
}
public virtual void Initialize(S200DBBuilder dbbuilder)
{
this.dbbuilder = dbbuilder;
this.entityset = dbbuilder.s200.Set<T>();
ForeignKeys = new List<IForeignKeyInfo<T>>();
PkFieldProp = new Lazy<GetterAndSetter>(() =>
{
var type = typeof(T);
var prop = type.GetProperty(PrimaryKeyField);
return new GetterAndSetter { Setter = prop.GetSetMethod(true), Getter = prop.GetGetMethod(true) };
});
//initialise the PropertySetters
PropertySetters = new Lazy<List<PropertySetter>>(() =>
{
var list = new List<PropertySetter>();
list.AddRange(typeof(T)
.GetProperties()
.Where(p => !p.Name.Equals("OpLock", StringComparison.OrdinalIgnoreCase))
.Where(p => !(p.GetGetMethod().IsVirtual))
.Select(p => PropertySetterFactory.Get(dbbuilder.s200, p, typeof(T)))
);
return list;
});
}
protected virtual T AddForeignKeys(T ent)
{
UpdatePKIfDuplicate(ent);
ForeignKeys.ForEach(fk => CheckAndAddFK(fk, ent));
return ent;
}
protected void UpdatePKIfDuplicate(T ent)
{
//assumes all keys are longs
var pk = (long)PkFieldProp.Value.Getter.Invoke(ent, new object[] { });
var allData = entityset.AsEnumerable().Concat(entityset.Local);
while (allData.Any(e => PkFieldProp.Value.Getter.Invoke(e, new object[] { }).Equals(pk)))
{
pk++;
PkFieldProp.Value.Setter.Invoke(ent, new object[] { pk });
}
}
protected T ReplicateForeignKeys(T newent, T oldent)
{
ForeignKeys.ForEach(fk => fk.CopyFromOldEntToNew(oldent, newent));
return newent;
}
public void AddData(IEnumerable<T> enumerable)
{
entityset.AddRange(enumerable.Select(ent => AddForeignKeys(ent)));
}
public void UpdateData(IEnumerable<T> enumerable)
{
foreach (var newent in enumerable)
{
var oldent = GetCorrespondingEntityFromStore(newent);
UpdateEntityWithNewData(oldent, newent);
dbbuilder.s200.Entry(oldent).State = EntityState.Modified;
}
}
protected void UpdateEntityWithNewData(T oldent, T newent)
{
foreach (var prop in typeof(T).GetProperties())
{
//todo - change this line to be a generic check on the prop being a primary key field
if (prop.Name.Equals("SYSCompanyID")) continue;
var newval = prop.GetGetMethod().Invoke(newent, new object[] { });
// Not sure if this is the correct place to do this, will check with Mike W
if (newval != null)
{
var shouldUpdateChecker = UpdateCheckers.Get(prop.PropertyType);
shouldUpdateChecker.Update(newval, oldent, prop.GetSetMethod());
}
}
}
public void Delete(T entity)
{
//look up the tracked entity corresponding to the one passed in, and delete that
var storeentity = GetCorrespondingEntityFromStore(entity);
DeleteEntity(storeentity);
}
private void DeleteEntity(T entity)
{
entityset.Remove(entity);
}
public void Delete(long id)
{
var entity = GetById(id);
DeleteEntity(entity);
}
public void DeleteAll()
{
var all = entityset.ToList();
entityset.RemoveRange(all);
}
public long AddSingle(T entity)
{
var id = Interlocked.Increment(ref _id);
SetId(entity, id);
AddData(new[] { entity });
return id;
}
protected void SetId(T entity, object id) { PkFieldProp.Value.Setter.Invoke(entity, new[] { id }); }
protected T GetCorrespondingEntityFromStore(T newent) { return GetById(PkFieldProp.Value.Getter.Invoke(newent, new object[] { })); }
protected T GetById(object id) { return entityset.AsEnumerable().Single(ent => PkFieldProp.Value.Getter.Invoke(ent, new object[] { }).Equals(id)); }
public void UpdateAllEntities(Action<T> act)
{
entityset.ToList().ForEach(act);
}
public void UpdateEntity(long id, Action<T> act)
{
var entity = GetById(id);
act(entity);
}
public IEnumerable GetDataFromTable(Table table)
{
return table.CreateSet<T>();
}
public void AddData(IEnumerable enumerable)
{
var data = enumerable.Cast<T>();
AddData(data);
}
public void UpdateData(IEnumerable enumerable)
{
var data = enumerable.Cast<T>();
UpdateData(data);
}
protected List<IForeignKeyInfo<T>> ForeignKeys { get; set; }
protected void CheckAndAddFK(IForeignKeyInfo<T> fk, T ent)
{
//first get the value on the entity and check if it exists in the model already
fk.CreateDefaultFKEntityAndSetRelation(ent);
}
public void CreateDefaultEntity(out long fkID)
{
var entity = new T();
fkID = AddSingle(entity);
}
public void CreateDefaultEntityWithID(long fkID)
{
var entity = new T();
SetId(entity, fkID);
AddData(new[] { entity });
//the _id field needs to be greater than the id used here, so
fkID++;
if (fkID >= _id)
Interlocked.Exchange(ref _id, fkID);
}
}
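To show how these helpers were driven from the test steps, here is a sketch of a hypothetical SpecFlow binding. The step wording matches the scenarios above, but the binding class itself and the GetHelperFor lookup on S200DBBuilder are assumptions for illustration, not code from the real system.

```csharp
// Hypothetical SpecFlow binding showing how an EntityHelper<T> could be
// driven from the Gherkin steps above. GetHelperFor is an assumed lookup
// that resolves the helper registered for a given table name.
using TechTalk.SpecFlow;

[Binding]
public class DataSteps
{
    private readonly S200DBBuilder dbbuilder;

    public DataSteps(S200DBBuilder dbbuilder)
    {
        this.dbbuilder = dbbuilder;
    }

    [Given(@"data in table ""(.*)""")]
    public void GivenDataInTable(string tableName, Table table)
    {
        // Bind the Gherkin table to entities, then insert them;
        // AddData auto-populates any foreign key dependencies.
        var helper = dbbuilder.GetHelperFor(tableName); // assumed lookup
        helper.AddData(helper.GetDataFromTable(table));
    }

    [Given(@"data in table ""(.*)"" is updated to")]
    public void GivenDataInTableIsUpdatedTo(string tableName, Table table)
    {
        var helper = dbbuilder.GetHelperFor(tableName);
        helper.UpdateData(helper.GetDataFromTable(table));
    }
}
```

Each step only needs the table name and the Gherkin table; everything else (primary keys, foreign keys, default values) is handled by the helper infrastructure.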
To create default values for any entity, we created a partial class extending the entity class with a single method:
public partial class BinItem
{
partial void OnCreating()
{
DateTimeCreated = DateTime.Now;
BinName = string.Empty;
SpareText1 = string.Empty;
SpareText2 = string.Empty;
SpareText3 = string.Empty;
}
}
this method being called as part of the constructor of the entity class:
[Table("BinItem")]
public partial class BinItem
{
public BinItem()
{
...
OnCreating();
}
partial void OnCreating();
...
}
The entity helpers rely upon foreign key information to know about the constraints on the table; these are supported by the following class:
public class ForeignKeyInfo<T, T2> : IForeignKeyInfo<T>
where T : class,new()
where T2 : class,new()
{
public ForeignKeyInfo()
{
BuildIfNotExists = true;
LocalFieldSetter = new Lazy<MethodInfo>(() =>
{
var type = typeof(T);
var prop = type.GetProperty(LocalField);
if (prop == null)
prop = type.GetProperty(LocalField+"s");
return prop.GetSetMethod(true);
});
LocalFieldGetter = new Lazy<MethodInfo>(() =>
{
var type = typeof(T);
var prop = type.GetProperty(LocalField);
if (prop == null)
prop = type.GetProperty(LocalField + "s");
return prop.GetGetMethod(true);
});
ForeignFieldGetter = new Lazy<MethodInfo>(() =>
{
var type = typeof(T2);
var prop = type.GetProperty(ForeignField);
if (prop == null)
prop = type.GetProperty(ForeignField + "s");
return prop.GetGetMethod(true);
});
ForeignTableGetter = new Lazy<MethodInfo>(()=>
{
var type = typeof(S200DataContext);
var prop = type.GetProperty(typeof(T2).Name);
if (prop == null)
{
prop = type.GetProperty(typeof(T2).Name+"s");
if (prop == null && typeof(T2).Name.EndsWith("y"))
{
var currentName = typeof(T2).Name;
prop = type.GetProperty(currentName.Substring(0,currentName.Length-1) + "ies");
}
if (prop == null && typeof(T2).Name.EndsWith("eau"))
{
prop = type.GetProperty(typeof(T2).Name + "x");
}
if (prop == null && typeof(T2).Name.EndsWith("s"))
{
prop = type.GetProperty(typeof(T2).Name + "es");
}
}
var getter = prop.GetGetMethod(true);
return getter;
});
}
public string LocalField { get; set; }
public S200DBBuilder Builder { get; set; }
public string ForeignField { get; set; }
public bool DoesFKExist(T ent)
{
//check the foreign table to see if an entry exists which matches the ent
var lf = LocalFieldGetter.Value.Invoke(ent, new object[] { });
return GetForeignEnts(lf).Any();
}
public void CreateDefaultFKEntityAndSetRelation(T ent)
{
if (DoesFKExist(ent))
{
return;
}
var lf = LocalFieldGetter.Value.Invoke(ent, new object[] { });
if (lf == null)
{
if (BuildIfNotExists)
{
//the test did not define the FK ID to use, so just default it to the next in the sequence
long fkID = 0;
Builder.WithDefaultEntity(typeof(T2), out fkID);
//now set the FK relation
LocalFieldSetter.Value.Invoke(ent, new object[] { fkID });
}
}
else
{
//create the FK entity using the id that has been passed in
Builder.WithDefaultEntityWithID(typeof(T2), (long)lf);
}
}
private T2 GetForeignEnt(object fkID)
{
return GetForeignEnts(fkID).FirstOrDefault();
}
private IEnumerable<T2> GetForeignEnts(object fkID)
{
var castData = (DbSet<T2>)(ForeignTableGetter.Value.Invoke(Builder.s200, new object[] { }));
var allData = castData.AsEnumerable().Concat(castData.Local);
var fes = allData.Where(fe => ForeignFieldGetter.Value.Invoke(fe, new object[] { }).Equals(fkID));
return fes;
}
private Lazy<MethodInfo> LocalFieldSetter;
private Lazy<MethodInfo> LocalFieldGetter;
private Lazy<MethodInfo> ForeignFieldGetter;
private Lazy<MethodInfo> ForeignTableGetter;
public T CopyFromOldEntToNew(T oldent, T newent)
{
if (DoesFKExist(newent))
{
return newent;
}
var value = LocalFieldGetter.Value.Invoke(oldent, new object[] { });
LocalFieldSetter.Value.Invoke(newent, new object[] { value });
return newent;
}
public bool BuildIfNotExists { get; set; }
}